The Development of Speech Perception: The Transition from Speech Sounds to Spoken Words

Edited by Judith C. Goodman and Howard C. Nusbaum

A Bradford Book
The MIT Press
Cambridge, Massachusetts
London, England
title: The Development of Speech Perception: The Transition from Speech Sounds to Spoken Words
author: Goodman, Judith
publisher: MIT Press
isbn10 | asin: 0262071541
print isbn13: 9780262071543
ebook isbn13: 9780585021256
language: English
subject: Language acquisition--Congresses; Speech perception--Congresses; Perceptual learning--Congresses
publication date: 1994
lcc: P118.D46 1994eb
ddc: 401/.93
© 1994 Massachusetts Institute of Technology

All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

This book was set in Times Roman by Asco Trade Typesetting Ltd., Hong Kong, and was printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data

The development of speech perception: the transition from speech sounds to spoken words / edited by Judith Goodman and Howard C. Nusbaum.
p. cm. (Language, speech, and communication)
Papers presented at the Workshop on Recognizing Spoken Language, held June 1989, University of Chicago.
"A Bradford book."
Includes bibliographical references and index.
ISBN 0-262-07154-1
1. Language acquisition--Congresses. 2. Speech perception--Congresses. 3. Perceptual learning--Congresses. I. Goodman, Judith, 1958- . II. Nusbaum, Howard C. III. Workshop on Recognizing Spoken Language (1989: University of Chicago) IV. Series.
P118.D46 1994
401'.93--dc20
93-11391
CIP
Contents

Preface
Contributors
Introduction
Chapter 1. Developing Theories of Speech Perception: Constraints from Developmental Data
Judith C. Goodman, Lisa Lee, and Jenny DeGroot
Part I. Innate Sensory Mechanisms and Constraints on Learning
Chapter 2. Observations on Speech Perception, Its Development, and the Search for a Mechanism
Joanne L. Miller and Peter D. Eimas
Chapter 3. The Importance of Childhood to Language Acquisition: Evidence from American Sign Language
Rachel I. Mayberry
Part II. Perceptual Learning of Phonological Systems
Chapter 4. Cross-Language Speech Perception: Developmental Change Does Not Involve Loss
Janet F. Werker
Chapter 5. Perceptual Learning of Nonnative Speech Contrasts: Implications for Theories of Speech Perception
David B. Pisoni, Scott E. Lively, and John S. Logan
Chapter 6. The Emergence of Native-Language Phonological Influences in Infants: A Perceptual Assimilation Model
Catherine T. Best
Part III. Interactions of Linguistic Levels: Influences on Perceptual Development
Chapter 7. Infant Speech Perception and the Development of the Mental Lexicon
Peter W. Jusczyk
Chapter 8. Sentential Processes in Early Child Language: Evidence from the Perception and Production of Function Morphemes
LouAnn Gerken
Chapter 9. Learning to Hear Speech as Spoken Language
Howard C. Nusbaum and Judith C. Goodman
Index
Preface

Traditionally, theories of speech perception have sought to explain primarily the way adults recognize spoken language, seldom considering the problem of the way this ability develops. Of course, any theory of speech perception must account for performance by mature language users, but our goal in developing this book was to encourage researchers to consider the proposition that theories must also account for the development of linguistic processing and the changes that occur with maturation and experience. Although developmental questions have been addressed in speech research, this work initially focused on the way speech perception develops in prelinguistic infants. Several researchers have asked how innate abilities interact with perceptual experience over the course of the first year of life, but it is important to expand this focus to allow an examination of the role of developing linguistic knowledge and increasing experience using language in understanding the way speech perception develops. Perceptual processing may be modified when children begin to learn their native phonologies and lexicons, as well as higher levels of linguistic structure, and research on very young infants alone fails to capture this sort of development. Indeed, the chapters in this volume report that changes related to the acquisition of linguistic knowledge do occur. While the contributors to this volume do not all agree on the specific nature of the processes involved in speech perception or the way in which these processes develop, together these chapters document the striking changes that take place, not only in early childhood but throughout life. In addition, the authors use these findings to make suggestions as to how theories of speech perception will need to be modified if they are to explain such changes.
This volume grew out of the Workshop on Recognizing Spoken Language that was held at the University of Chicago in June 1989. The goal of this workshop was to examine transitions in the perceptual processing of speech from infancy to adulthood. The workshop participants were scientists who have carried out speech research with infants, children, and/or adults. Their task was to consider in detail the theoretical implications of their research for a well-specified and complete theory of speech perception that could address issues concerning speech perception in children and adults, as well as the development of those abilities. In particular, the participants were invited to speculate about how their findings constrain the nature of the mechanisms and representations that mediate speech perception during infancy, childhood, and adulthood. This is a tall order, and these chapters contain a wealth of information on the development of perceptual processing, as well as constraints and prescriptions for the development of theories of perceptual processing. The findings reported here cover many critical issues for theory building: for example, how maturation and experience modify innate sensory mechanisms, how structural knowledge is acquired, whether young children represent linguistic information in the same way as adults, and how segment- and word-recognition processes differ among children and adults. In addition, they include proposals regarding the nature of the mechanisms behind the perception of linguistic units, the acquisition of early word patterns, and the development of the mental lexicon.

This book differs from previous books on speech perception in several respects. First, it attempts to integrate research involving infants, young children, and adults. Although in recent years several books have considered a wide range of issues in speech perception, these books have not thoroughly addressed developmental issues; at best they have included only one or two chapters on speech perception in infants. Second, this book tries to explore systematically how adult perceptual abilities develop from early infant capabilities, focusing in particular on the nature of the transitional stages and the constraints they place on theories of speech perception. Other recent books on speech perception that have focused on a single theoretical issue have not addressed the transition from recognition of speech segments to recognition of spoken words. Finally, unlike other books that have addressed issues in perceptual development, this book also focuses on speech perception. We hope that researchers and students in the areas of psychology, linguistics, cognitive science, and speech and hearing will find this approach stimulating.

We are deeply grateful to the authors who contributed to this volume. We appreciate their level of commitment, their willingness to go beyond their data to make theoretical speculations, and their patience during both the workshop and the preparation of the book. We are grateful to Dr. John Tangney and the Air Force Office of Scientific Research (grant no. AFOSR 89-0389) and to the Council on Advanced Studies in the Humanities and Social Sciences at the University of Chicago for financial support to conduct the Workshop on Recognizing Spoken Language. Several graduate students made the preparation for the workshop much smoother; they include Kevin Broihier, Anne Farley, Jenny DeGroot, and Lisa Lee.
In addition, several colleagues and students participated in the workshop discussions, enriching the experience for everybody. We thank Jenny DeGroot, Starkey Duncan, Susan Duncan, Anne Farley, Steve Goldinger, John Goldsmith, Beth Greene, Janellen Huttenlocher, Karen Landahl, Lisa Lee, Susan Levine, Jerre Levy, David McNeill, Todd Morin, Nancy Stein, and Michael Studdert-Kennedy for their participation. We wish to express special thanks to Anne Cutler for her valuable contributions to the workshop. Jennifer Jahn assisted in preparation of the manuscript. We express gratitude to Teri Mendelsohn, our editor at the MIT Press, for her patience and guidance.
Contributors

Catherine T. Best, Department of Psychology, Wesleyan University, Middletown, CT; Haskins Laboratories, New Haven, CT
Jenny DeGroot, Department of Psychology, University of Chicago, Chicago, IL
Peter D. Eimas, Department of Cognitive and Linguistic Sciences, Brown University, Providence, RI
LouAnn Gerken, Department of Psychology, State University of New York at Buffalo, Buffalo, NY
Judith C. Goodman, Department of Psychology, University of California at San Diego, La Jolla, CA
Peter W. Jusczyk, Department of Psychology, State University of New York at Buffalo, Buffalo, NY
Lisa Lee, Department of Psychology, University of Chicago, Chicago, IL
Scott E. Lively, Department of Psychology, Indiana University, Bloomington, IN
John S. Logan, Department of Psychology, Carleton University, Ottawa, Ontario, CANADA
Rachel I. Mayberry, School of Human Communication Disorders, McGill University, Montreal, Quebec, CANADA
Joanne L. Miller, Department of Psychology, Northeastern University, Boston, MA
Howard C. Nusbaum, Department of Psychology, University of Chicago, Chicago, IL
David B. Pisoni, Department of Psychology, Indiana University, Bloomington, IN
Janet F. Werker, Department of Psychology, University of British Columbia, Vancouver, B.C., CANADA
Introduction
Chapter 1
Developing Theories of Speech Perception: Constraints from Developmental Data
Judith C. Goodman, Lisa Lee, and Jenny DeGroot

A tremendous proportion of research in speech perception has focused on a listener's ability to identify or discriminate phonetic contrasts. This focus on lower-level segmental perception until recently dominated research with both adults and infants (Eimas et al. 1971; Mattingly et al. 1971; Pisoni 1973; see also Aslin, Pisoni, and Jusczyk 1983 for a review of segmental perception abilities by infants). As a result of this focus and the sorts of data these studies provide, theoretical questions have addressed whether the mechanisms responsible for speech perception are best described as innate and speech specific (Liberman et al. 1967; Liberman and Mattingly 1985; Repp 1982) or as properties of the general auditory system (Lane 1965; Pastore 1981; Pisoni 1977).
The chapters in this volume suggest that either characterization alone is too narrow because such accounts have concentrated on studying perception during stable periods rather than trying to explain developmental change. By focusing on questions regarding change in the processes of language perception across the lifespan, however, one can see that infants have innate perceptual biases that are shaped by subsequent experience. Further, it appears that many levels of linguistic structure provide informative constraints on one another (cf. Gerken, this volume; Jusczyk 1985; Katz, Baker, and MacNamara 1974; Menn 1983 for discussions of the role of word learning in phonological development and of semantics and syntax in word learning) and that young listeners must learn about each level and how the levels may be related in language processing. In addition, these developmental lessons appear to be useful in explaining language processing and perceptual learning in mature listeners as well.

Our goal in this chapter is to abstract from the set of papers in this volume broad themes that must be considered in explaining the development of speech perception and language understanding. These themes, on the whole, are concerned with the nature of linguistic experience and its influences on perceptual development.

Historically, research in speech perception dealt with how adults identify and discriminate phonetic information in the acoustic input. This focus on adults is not surprising because what investigators sought to explain was the end state, that is, how the mature listener perceived linguistic information. A challenge for these theories was to explain how listeners handle the lack of invariance in the speech stream: no one-to-one mapping exists between information in the acoustic waveform and the listener's percept. Many theorists proposed innate, speech-specific mechanisms to account for this fact (Liberman et al. 1967; Liberman and Mattingly 1985; Repp 1982). In order to evaluate that claim, developmental data were required, and the discrimination abilities of young infants were assessed. Still, it was not immediately apparent that any developmental change in speech perception occurs, because much of the early work found that very young infants possess discrimination and identification abilities that are remarkably similar to those of adults (Eimas et al. 1971; Kuhl 1979; see Aslin, Pisoni, and Jusczyk 1983; Jusczyk 1981 for reviews).

This path of research has resulted in a gap between our knowledge of the infant's early sensory abilities and the adult's processing of speech. We know that prelinguistic infants are sensitive to acoustic information which is linguistically relevant and that there are striking similarities between their sensitivities and those of adults on tasks that involve the discrimination of discrete, phonetic segments. But we also know that children come to represent linguistic units; that is, they not only discriminate phonetic contrasts but also develop, for example, phone categories and lexical knowledge. In other words, despite the perceptual parallels that exist between very young infants and adults, speech perception does undergo development, and a theoretical account must be provided of the transition from the infant's sensory capacities to the child's ability to identify words, and on to perceptual learning by adults.
The chapters in this volume detail the changes that occur and propose mechanisms by which innate prelinguistic discriminatory abilities may come to handle the representation and perception of linguistic structures. Further, many of the changes we see during early childhood appear to have parallels in the processing of speech input by adults. Hence, the mechanisms responsible for the development of speech perception may play a role in processing throughout the lifespan and, therefore, may be important factors in explaining speech perception. This is not to claim that no differences exist between children and adults. They do, and they must be explained. Nonetheless, the similarities are very suggestive about what sorts of mechanisms affect both perceptual learning and the identification of linguistic units. Due to the recent wealth of information concerning the nature of developmental change in language processing, it should be possible to narrow the gap that exists between our theories of the innate processing mechanisms of infants and of adults' abilities to recognize discrete linguistic units. The parallels noted above highlight the possibility of developing a single, coherent theory of speech perception to account for the abilities of children and adults rather than having a collection of theories, each accounting for the perceptual abilities of a single age group.
This has not generally been the case. Most theories of adult speech perception do not consider how perceptual mechanisms and linguistic representations develop (Elman and McClelland 1986; Liberman et al. 1967; Liberman and Mattingly 1985). However, an understanding of developmental changes in the nature of these mechanisms and representations could constrain theories of how adults use knowledge about structural properties of phonemes and words to mediate language recognition. Therefore, in constructing theories of speech perception, it is important to consider how maturation and experience modify innate sensory mechanisms, how structural knowledge is acquired, and whether young children represent linguistic information in the same way as adults. Similarly, studies of infant speech perception have seldom attempted to explain fully how the ability to recognize words develops from innate perceptual abilities (but see Jusczyk 1985; Studdert-Kennedy 1986). A complete theory of speech perception must describe how innate perceptual mechanisms come to support recognition of consonants and vowels as phonetic categories, as well as the role of phonetic categories in the acquisition of early word patterns and the development of the mental lexicon. Finally, theories of vocabulary acquisition in early childhood (Clark 1983; Markman 1991) and word recognition in middle childhood (Tyler and Marslen-Wilson 1981) are rarely linked to the infant's innate perceptual abilities or to lower-level segmental processing at any age. In short, there is a need to integrate the findings and conclusions of speech research from early perceptual encoding in infants to word recognition in adults.

While a great deal of work remains before a theoretical integration of this sort can be accomplished, we hope to further a consideration of what a coherent theory might include by detailing factors that influence the development of speech perception. We are not attempting to provide a theory, but we wish to highlight issues raised by the contributors to this volume that may shed light on the perceptual processing of speech throughout the lifespan. Three issues seem particularly important for building a theory of speech perception.

The first of these issues concerns the relationship between levels of linguistic knowledge in processing. A language is a multileveled system for communication supporting the production and perception of phonemes, words, syntax, and paralinguistic information, such as prosody. Information at one level may serve to constrain learning and perception at another level. For example, children's phonological knowledge may emerge from their early lexical development (Menn 1983). Data concerning the processes involved in understanding linguistic information at one level of representation may constrain the types of processes that operate at other levels of representation. The chapters in this volume suggest that we should construe our notion of level of linguistic structure or knowledge quite broadly indeed. Hence we will consider the role of stress and prosody in the development of speech perception and the relationship between linguistic production and linguistic perception, as well as processing interactions between levels of linguistic structure, such as the lexical and phoneme levels.
A consideration of data from both children and adults supports the conclusion that a theory of speech perception should integrate the findings of research concerning language processing across levels of perceptuolinguistic analysis.

The second issue is concerned with the role of early linguistic experience. Early linguistic experience has two sorts of effects. First, a critical period may exist for language learning (Lenneberg 1967), and, second, knowledge of one language affects the learning and processing of a second language (Best, this volume; Best, McRoberts, and Sithole 1988; Lively, Pisoni, and Logan 1991; Logan, Lively, and Pisoni 1991; Pisoni, Lively, and Logan, this volume; Werker, this volume; Werker and Lalonde 1988; Werker and Tees 1983). Something seems to be special about early childhood with respect to language development. While some amount of perceptual learning is possible throughout the lifespan, Mayberry's work shows that, if children are not exposed to linguistic input during the early years, subtle deficits exist even after twenty years of language use (this volume; Mayberry and Eichen 1991; Mayberry and Fischer 1989). We will consider the nature of this critical period and the sorts of developmental mechanisms that might account for it. One possibility is that a critical learning period exists during which specialized neurological structures may be established for perceptual processing of language. A second possibility is that it is easier to learn some aspects of linguistic structure in the absence of other higher-order knowledge
because the latter may focus one's attention at the wrong level of analysis (see Pisoni et al., this volume). Most people do, of course, learn a first language in early childhood, and the knowledge of that language influences their ability to learn and to process a second language. The chapters by Pisoni, Lively, and Logan; Werker; and Best show that limitations on the perceptual learning of a second language exist as a result of phonological knowledge of a first language. In addition, work by Cutler et al. (1989, 1992) and by Grosjean (1988, 1989) indicates that even bilinguals show limitations in learning two languages.

The third issue concerns the role of attention in perceptual learning. Experience with language modifies perceptual processing both for children (Best, McRoberts, and Sithole 1988; Kuhl et al. 1992; Werker and Lalonde 1988; Werker and Tees 1983) and adults (Lively, Pisoni, and Logan 1991; Logan, Lively, and Pisoni 1991; Pisoni et al. 1982; Samuel 1977). Current theories of speech perception cannot account for the development of perceptual abilities, however, because they do not include a mechanism of change as part of the perceptual process. Many chapters in this volume suggest that dynamic mechanisms, that is, processing mechanisms that allow a listener to change what information is attended to according to input properties, are critical for theories concerning the recognition and comprehension of spoken language (Best, this volume; Jusczyk, this volume; Mayberry, this volume; Nusbaum and Goodman, this volume; Pisoni, Lively, and Logan, this volume; Werker, this volume). Dynamic mechanisms will be an important component of a theory that accounts both for the way speech perception develops throughout childhood and for the way perceptual processes are modified by adults. Many authors in this volume argue that perceptual learning involves shifting one's attention to particularly informative aspects of the acoustic signal. In other words, the effect of experience with one's native language or, later, with a second language is to learn how to direct attention to the acoustic, segmental, lexical, and sentential properties of an utterance that work together to specify the linguistic interpretation of the utterance. Below we address the evidence related to each of these issues.

Relationships between Levels of Linguistic Structure in Speech Perception and Language Understanding

Theories of word recognition have commonly incorporated the use of information from multiple linguistic levels (Elman and McClelland 1986; Morton 1969; Marslen-Wilson 1975, 1987; Marslen-Wilson and Tyler 1980; Marslen-Wilson and Welsh 1978), though theories of speech perception seldom do (Liberman et al. 1967; Liberman and Mattingly 1985; Stevens and Blumstein 1981; Stevens and Halle 1967). However, several of the contributors to this volume, as well as other researchers, have demonstrated interactions between levels of linguistic structure during language processing. Below, we look at the interactions between phonemes, words, syntax, and prosody, as well as between perception and production. A single coherent theory that explains processing and acquisition at a number of linguistic levels for a wide variety of input contexts is needed to account for these findings.
Interactions between Different Levels of Linguistic Structure

In her chapter, Gerken notes that a common view about language acquisition is that children start with the smallest units and gradually learn about larger and larger units as they acquire language. This seems intuitively plausible. After all, if words are composed of phonemes, one must first learn phonemes, and if sentences are composed of words, then word learning must precede sentence learning. However, a great deal of research suggests that different levels of linguistic information are highly interactive both in language acquisition and in language processing by children. A number of examples illustrate this point. Gerken (this volume) suggests that syntactic information influences perception and learning at the lexical level in 2-year-olds. Recent work by McDonough and Goodman (1993) finds that 2-year-olds use semantic information provided by a familiar verb such as eats to assign meaning to an unfamiliar sound pattern such as rutabaga and that 2-year-olds' identification of an ambiguous sound pattern in a sentence is influenced by semantic context (Goodman 1993). Lexical knowledge influences phoneme perception by children: children's perception of ambiguous phonemes is influenced by lexical context (Hurlburt and Goodman 1992). In addition, changes in phoneme perception occur around 10 months of age (Werker and Lalonde 1988; Werker and Tees 1983), suggesting that interactions may occur between the development of phoneme perception and the acquisition of a child's earliest-comprehended lexical units. Finally, suprasegmental information influences perception of other units: Hirsh-Pasek et al. (1987; see also Kemler Nelson et al. 1989) have shown that suprasegmental
information may direct infants' attention to linguistic units such as clauses and phrases. These findings are not raised to make claims about temporal properties of interactive processing but to note that at some stage in processing (perceptual or postperceptual decision) information at one level affects identification and acquisition at other levels of linguistic structure.

How can these sorts of interactions in acquisition and processing be explained? In his chapter, Jusczyk provides one suggestion of how various levels of linguistic structure might interact to result in phonological learning. The model of word recognition and phonetic structure acquisition (WRAPSA) that Jusczyk presents suggests that children do not work up unidirectionally from smaller to larger units. Jusczyk theorizes that young infants store exemplars of the utterances they hear, perhaps represented in syllabic or larger units. With linguistic experience, they begin to weigh the properties of the utterance according to their relative importance in signaling meaningful distinctions. Segmental representations arise from the discovery of similarities and contrasts among these exemplars. Due to the temporal nature of speech (simply put, the listener hears the initial portions of an item first), this process may be biased toward segments in initial position. For example, utterances with initial segments that share acoustic characteristics may be classified together, forming the basis for distinguishing these segments from other segments.

In her chapter, Best also speculates that infants' phone categories are not necessarily segmental in size but may involve larger units such as syllables and words. Phonological knowledge may arise from a refinement of these larger units. Best's view of what information is important in signaling meaningful distinctions between phone categories differs from that suggested by Jusczyk. In particular, she suggests that infants come to represent phone categories by learning the articulatory-gestural properties of speech. According to this view, both speech perception and production are guided by knowledge of articulatory gestures. Although the details of these models may not all be correct, the models attempt to deal with a void in theories of speech perception, namely the mechanisms by which infants hone in on the phonology of their native language. Whatever the nature of phoneme representations, an integration of levels of linguistic structure may be critical in learning the relevant categories because it provides important constraints on the identity of acoustic information. Thus, the acquisition of a lexicon might contribute to the categorization of speech because it is informative with respect to the correlations of distributional properties that signal meaningful distinctions (for example, phonemes distinguish words). The same kind of interactive constraint process may operate at other levels of linguistic structure. For example, suprasegmental information in English may signal the onset of a word (Cutler et al. 1992). This issue is important in accounting for perception in adults as well as children. Pisoni, Lively, and Logan (this volume) show that perceptual learning occurs in adults and is affected by the position in a word where the phoneme occurs.
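Jusczyk's proposal lends itself to a simple computational reading, sketched below in a toy Python fragment. This is only an illustration of the general exemplar-plus-weights idea, not an implementation of WRAPSA: the acoustic dimensions, values, weights, and category labels are all invented for the example.

    import math

    # Toy exemplar store: each stored utterance token is a vector over
    # hypothetical acoustic dimensions, with the category the learner
    # has come to associate with it.
    exemplars = [
        ({"vot": 0.10, "f2_onset": 0.80}, "ba"),
        ({"vot": 0.15, "f2_onset": 0.75}, "ba"),
        ({"vot": 0.70, "f2_onset": 0.80}, "pa"),
        ({"vot": 0.75, "f2_onset": 0.85}, "pa"),
    ]

    # Attention weights: experience stretches informative dimensions
    # (here, voice-onset time) and shrinks uninformative ones.
    weights = {"vot": 1.0, "f2_onset": 0.1}

    def similarity(x, stored, c=4.0):
        """Similarity decays exponentially with attention-weighted distance."""
        dist = sum(weights[d] * abs(x[d] - stored[d]) for d in x)
        return math.exp(-c * dist)

    def classify(x):
        """Sum similarity to each category's exemplars; choose the best."""
        scores = {}
        for features, label in exemplars:
            scores[label] = scores.get(label, 0.0) + similarity(x, features)
        return max(scores, key=scores.get)

    print(classify({"vot": 0.20, "f2_onset": 0.90}))  # -> ba

On this reading, learning the native phonology amounts to adjusting the weights so that dimensions that distinguish words are stretched; the exemplar store itself need not be segmented into phonemes at all.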
Ganong (1980) found that lexical information affects phoneme perception in monolingual adults, and Grosjean (1988; Burki-Cohen, Grosjean, and Miller 1989) found that, under certain conditions, phoneme perception by adult bilinguals is affected by the language of the surrounding words. To note the role of higher-level information in perceptual learning does not preclude the simultaneous occurrence of within-level learning. For example, infants may develop vowel categories prior to any knowledge of word-level information. Kuhl et al. (1992) tested infants from the United States and Sweden on their perception of native- and foreign-language vowel prototypes. Their results showed that by 6 months of age, before they have acquired words, infants have become sensitive to vowels that are prototypical of their native language. These vowel categories may emerge not from the acquisition of words but from the way infants are predisposed to store information. Nonetheless, interactions between different levels of linguistic knowledge play an important role in other sorts of perceptual development, and any theory of speech perception must provide an account of these interactions.

Stress and Prosody
Although language development is most often studied in terms of growing knowledge of segmental units of various sizes, such as phonemes, morphemes, and words (Bates, Bretherton, and Snyder 1988; Ferguson and Farwell 1975; Menyuk and Menn 1979; Walley, Smith, and Jusczyk 1986), the suprasegmental level of linguistic information also contributes to speech perception and its development. Suprasegmental information includes the patterns of stress and prosody that are characteristic of a language. A growing body of research suggests ways in which stress and prosody are important for language processing. Listeners use this information to interpret language and to learn more about the structure of speech.

The prosodic structure of language can guide adult listeners in their perception of speech. Cutler (1976) showed that the stress contours of an utterance focus a listener's attention on the location in a sentence of important semantic information. Thus, prosodic information influences word identification by directing a listener's attention to particular items in an utterance.
But, even their omissions demonstrate their sensitivity to and knowledge of the prosodic patterns of English: when utterance complexity causes children to omit words from their speech, they follow the metrical pattern of English, producing words in strong positions and omitting weak syllables that follow them. This suggests that young children know a great deal about the prosodic patterns of sentences. In fact, even before children begin to understand spoken language, they are sensitive to the prosodic properties of speech that facilitate perception. In a series of experiments, Kemler Nelson et al. (1989) suggest how sen page_11 Page 12 tence prosody may provide cues to segmentation of linguistic units, such as clauses and phrases. Kemler Nelson et al. (1989), for example, found that 6-month-old infants are sensitive to the prosody of clausal units. The infants heard speech samples in which one-second silences were inserted either at clause boundaries or in the middle of clauses. The investigators hypothesized that the sentences with silence inserted at clause boundaries would sound more natural. The infants preferred the more natural utterances, suggesting that they are sensitive to the relationship between prosody and linguistic units. At nine months, infants preferred speech in which silences were inserted between syntactic phrases than speech in which silences were inserted within phrases. These findings held even when the phonemic content was filtered out of the
speech, leaving the prosodic contours. This suggests that, before they are a year old, infants gain sensitivity to linguistic units that are marked by prosodic structure. This sensitivity should help them to direct attention to linguistically relevant information when listening to fluent speech. These findings demonstrate that a theory of speech perception should incorporate prosodic influences. In development, prosody may focus infants' attention on relevant linguistic unitssee Fernald (1984) for the role of prosodic information in attention to language very early in infancy. Prosody could play an important role in semantic development as well. For example, stress may help to direct children's attention to important semantic information, facilitating processing of those items relative to unstressed items (Cutler and Swinney 1987). Furthermore, the segmentation work of Cutler (1976) and her colleagues (Cutler et al. 1989, 1992) demonstrates that prosodic factors affect processing throughout the lifespan and must be incorporated into theories of language understanding. Regularities in prosodic structure facilitate lexical segmentation and focus listeners' attention on important semantic information. The process by which the prosodic structure is learned has not been specified. One great mystery is the extent to which prosodic information of a second language can be learned. Some evidence suggests that, with brief exposure to an unfamiliar language, listeners can learn to use prosodic information to determine the constituents of an utterance (Wakefield, Doughtie, and Yom 1974). Other work has suggested that this ability exists independently of specific training in the foreign language (Pilon 1981). Cutler et al.'s (1992) work demonstrates that even bilinguals seem unable to maintain two prosodic structures, one for each of their lan page_12 Page 13 guages. Clearly, the acquisition of a prosodic structure, as well as its role in language learning and speech perception, must be further explored. The Relation between Language Production and Language Perception The notion that speech production plays a role in speech perception is not new. The motor theory of speech perception proposes that listeners perceive speech by recognizing the articulatory gestures that produced the signal and that this is carried out by language-specific specialized mechanisms (Liberman et al. 1967; Liberman and Mattingly 1985). Early accounts of motor theory focused on adult perception and did not address the development of speech perception abilities. However, speech production may play a role in the development of speech perception. Indeed, some discussion as to how a specialized, motor-based speech perception system might develop has been presented recently. Studdert-Kennedy (1986) suggests that language evolved in the left hemisphere of the brain to take advantage of the neural organization for manual coordination already centered in that hemisphere. For each child, the development of speech or sign language requires the development of a perceptuomotor link in the left hemisphere; experience in a language environment enables this neural development to occur. Another look at the relation between production and perception has been offered by Miller and Eimas (this volume). In their chapter in this volume, Miller and Eimas discuss how listeners might normalize changes in speaking rate. 
They examine infants' and adults' discrimination of phonemes that occur in syllables of various durations (a correlate of speaking rate). The authors suggest that a specialized mechanism for speech perception might explain the striking similarities in rate normalization in infants and adults. They note that other relationships between perception and production are possible but that there are not sufficient human neurophysiological data to evaluate the various possibilities. However, neurophysiological data from other species (namely, barn owls and bats) indicate that a specialized perceptual process may in fact be an emergent property of underlying neuronal behavior patterns. A similar sort of processing system could be true of human speech perception as well.

The motor theory assumes that articulation-based mechanisms of perception are specialized for speech. Best (this volume) also argues that perception is intrinsically linked to knowledge of how speech is produced. Her view of acquisition and perception, however, is closely tied to Gibson's ecological theory of perception (1966, 1979; see also Fowler 1986, 1989, 1991), and she argues that children learn these relations as they learn about other nonlinguistic stimuli. Thus, like other types of perception, speech perception is said to involve the perception of distal objects or events. In the case of speech, the relevant distal events are the articulations that produced the speech signal. Language-specific speech perception develops as the child detects systematicity (sound-meaning correspondences) in the articulatory gesture patterns produced by mature speakers. The development of the child's production is guided by these perceived gestural patterns.

Other work demonstrates that infants are sensitive to production factors of speech. Infants as young as 5 months of age can match a face producing a vowel with the appropriate auditory input (Kuhl and Meltzoff 1984). Thus, infants appear to be quite good at making intermodal connections between sources of perceptual information. A consequence of this ability could be that visual information (lip movement) constrains the interpretation of speech input (Massaro 1987; Massaro and Cohen 1983; McGurk and MacDonald 1976). If infants make a connection between their kinesthetic sensations in production and what they see during perception, this might facilitate their categorization of sounds. In other words, knowledge about articulatory gestures and their correspondence to auditory patterns provides another sort of constraint that may be important in perceptual learning and processing.

The Role of Linguistic Experience

Although perceptual learning occurs throughout the lifespan, something seems to be special about childhood. Adults can learn nonnative contrasts, but limitations on this learning appear to exist. For example, in their chapter Pisoni, Lively, and Logan report that Japanese listeners' ability to identify /r/ and /l/ differed according to word position, suggesting that nonnative listeners did not really learn a context-independent phoneme category. Although the structure of phoneme categories for native speakers of a language is not clear, such speakers do not show similar word-position effects in discrimination. These findings suggest that perhaps one cannot learn a second language to native proficiency if learning begins after some critical period. If there is a critical period during which language must be learned, what is it that is critical to language learning? In her contribution to this volume, Mayberry suggests that, if one is not exposed to linguistic input in the first few years of life, subtle but identifiable differences will exist in the way language is processed. She presents evidence from deaf people who use American Sign Language (ASL). In one study, congenitally deaf ASL signers, who varied in age of acquisition from birth to 18 years, performed shadowing and recall tasks. Mayberry found significant differences that depended on the age of acquisition even when she controlled for the number of years of use. Analyses of errors in which signers substituted one word for another revealed interesting patterns that differentiated early and late learners. Early learners are more likely to make errors in which they substitute words with similar meaning in the shadowing and recall tasks. In contrast, late learners have a greater tendency to produce phonological errors, in which the intended sign is structurally but not semantically similar to the incorrect sign. Mayberry's interpretation is that late learners fail to learn an automatic encoding of phonological forms.
Hence, they devote too much attention to deciphering low-level information in the sign at the expense of subsequent semantic analysis. Newport (1984, 1988) similarly argues that late ASL learners learn a frozen form and have difficulty decomposing the input into its components at both lexical and syntactic levels of analysis. Importantly, Mayberry shows that early experience with any language will facilitate language processing later in life. Her evidence shows that late learners of ASL who became deaf after acquiring spoken English, or who had residual hearing that may have allowed them to perceive some speech during early childhood, perform better and more like native signers than late learners who had little or no early linguistic experience. Her argument is that the group that had some linguistic experience early in life may still have subtle deficits in analyzing the phonological information of sign language but is better able to use higher-level constraints to fill in than are signers who had no systematic language input in early development. Thus, Mayberry's work shows the importance of early linguistic experience to normal language development.

As noted earlier, Pisoni, Lively, and Logan found that one's native language exerts a lifelong effect that limits perceptual learning in adults. Further evidence of the role of early linguistic experience in later language processing comes from Cutler et al.'s (1986) and Cutler and Norris's (1988) findings that the segmentation strategies adult listeners use in understanding speech are determined by their native language. These experiments, mentioned in the stress and prosody section above, provide evidence that monolingual English listeners use a stress-based segmentation strategy and monolingual French listeners use a syllable-based strategy to segment linguistic units in speech. Further, when presented with stimuli in their nonnative language (French for English speakers and English for French speakers), they do not switch segmentation strategies (Cutler et al. 1989). The strategies listeners use appear to depend on the characteristics of their native language.

The findings of Pisoni, Lively, and Logan (this volume) and of Cutler et al. from work with monolinguals listening to nonnative languages (see also Best, this volume) could theoretically result from a critical period (e.g., second-language learning or processing took place after the critical period), but Cutler and her colleagues (1989, 1992) found that even bilinguals show limitations based on experience with one of their languages. Since bilinguals are natively fluent in both their languages, it is reasonable to expect that they may switch processing strategies depending on the language being perceived. But surprisingly, these listeners did not show such flexibility. These results suggest that there is a limit to the perceptual flexibility listeners have. Even listeners who learn two languages in childhood and seem natively fluent in both show the influence of their dominant language. This suggests that experience with one language exerts subtle limitations on the processing of another language independent of limitations set by a critical period. Further work is necessary to establish how findings relevant to critical periods and bilinguals are related, how they can be explained, and how each constrains theories of speech perception.

How might one language limit learning and processing of a second language? This is, of course, a complex question for which there may not be any single answer. In her chapter, Best (see also Best, McRoberts, and Sithole 1988) offers one explanation for the difficulty in perceiving nonnative phonetic contrasts: her perceptual assimilation model captures the patterns of influence that a native language has on learning and on perception of another language. According to her model, difficulty in discriminating a nonnative contrast can be predicted from the relationship between the native and the nonnative phonologies. For example, if each member of a nonnative contrast is similar to a different native phoneme, discrimination of the nonnative contrast should be very good. If instead both members of the nonnative contrast are similar to a single native phoneme, then discrimination abilities depend on the degree of similarity of each member of the nonnative contrast to the native phoneme. If members of the nonnative contrast are equally similar to the native phoneme, discrimination is hypothesized to be poor. However, if they differ in how similar they are, they should be relatively more discriminable. Finally, if the characteristics of the nonnative contrast are quite different from any native contrast, the nonnative items may be easily discriminated as nonspeech sounds.
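These predictions have an almost algorithmic character, which the following sketch tries to make explicit. It is an informal paraphrase of the perceptual assimilation model, not Best's own formulation; the similarity scale, the goodness threshold, and the example values are invented for illustration.

    def pam_prediction(native_a, native_b, sim_a, sim_b, goodness_gap=0.2):
        """Paraphrase of the perceptual assimilation model's predictions.

        native_a, native_b: the native category each nonnative phone is
        assimilated to (None if it is not heard as speech at all);
        sim_a, sim_b: how well each phone fits that category (0-1).
        """
        if native_a is None and native_b is None:
            return "good (nonassimilable: discriminated as nonspeech sounds)"
        if native_a != native_b:
            return "very good (two-category assimilation)"
        # Both phones are assimilated to the same native category.
        if abs(sim_a - sim_b) >= goodness_gap:
            return "moderate (category-goodness difference)"
        return "poor (single-category assimilation)"

    # A contrast whose members map onto two different English categories:
    print(pam_prediction("k", "g", 0.9, 0.8))    # very good
    # The Hindi retroflex-dental contrast, both heard as English /d/:
    print(pam_prediction("d", "d", 0.85, 0.80))  # poor

The point of the exercise is that the model's predictions follow from just two quantities, which native category each phone is assimilated to and how well it fits that category, and both quantities are fixed by the listener's native phonology.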
Best presents evidence for the perceptual assimilation model by examining the performance of native English speakers on the discrimination of different Zulu contrasts. Further evidence that perception of nonnative contrasts is guided by the listener's native language comes from learning studies. Werker (this volume; Werker and Tees 1983; Tees and Werker 1984) has found that native English listeners are quite good at learning to discriminate a Hindi voicing contrast but are poor at discriminating a retroflex-dental place contrast. This finding is easily accounted for by Best's framework (this volume; Best, McRoberts, and Sithole 1988). The voicing contrast is distinctive in English, and the Hindi contrast spans two different categories in English and is easily discriminated. In contrast, the retroflex-dental place contrast is not distinctive in English. According to the perceptual assimilation model, this contrast is very difficult for native English listeners to learn to perceive because they assimilate both of the nonnative tokens into a single native category. Thus, it is clear that one's early linguistic experiences have a lasting effect on speech perception and perceptual learning throughout life. Best's model explains what listeners do when they hear a nonnative contrastthey interpret it in terms of the phonological categories of their native language. But why can't second-language learners simply learn not to do this; why are they limited in learning to attend to the features that are phonemic in another language? One possibility is that, like the ASL learners studied by Mayberry (this volume; see also Newport 1984, 1988), they find it difficult to attend to
phonological distinctions in the input. They focus on the larger unit instead (e.g., a lexical item). When an older learner tries to parse the speech stream, he or she brings along cognitive abilities and world knowledge that are not available to the infant hearing speech in the first year of life. The older learner may attend more to semantic aspects of the input to get the gist of what a speaker is saying. Early phonological and lexical learning might also result in structural changes, that is, the way the brain responds may change as a function of learning. Thus, when learning later in life, brain structures and responses are different than when learning during the first year. In fact, Mills, Coffey, and Neville (1991) using ERP data find changes in brain responses as a consequence of learning words in late infancy. Children at 13-17 and 20 months of age heard words that were generally learned very early by children (among the ten first words) or learned relatively late. ERPs indicated differences in responses to comprehended and unknown words. page_17 Page 18 In addition, specialization begins to occur at this early age. At 13-17 months, the response differences are bilateral and widely distributed over areas of the brain anterior to the occipital regions. By 20 months, these differences are confined to the left hemisphere and to the temporal and parietal regions. At both age groups, language ability affects the extent and nature of the reported differences, suggesting that the changes in brain response and language processing are closely linked. Furthermore differences occur between possible English words and backwards English words at around 10 months (Mills, personal communication)an age that closely corresponds to Werker's and Best's findings of perceptual losssuggesting structural changes appear as children learn the phonotactics of their language. Thus, changes in neural representation of language emerge as a function of age and of linguistic ability. These structural changes during early childhood may contribute to the limitations that are evident in perceptual learning later in life. In sum, critical periods may emerge as a consequence of learning and structural changes associated with learning. One could speculate that, if areas of the brain that normally become specialized for language are not used early in life, those areas may be allocated to some other function. Just as one cannot unlearn native contrasts because structural changes in the brain have occurred, one cannot easily unlearn the other functions that have taken over those regions either. A second reason behind this need to learn early that phonological forms map to word meanings may be that it becomes more difficult to shift attention to lower-level linguistic input in a signal as the age of the language learner increases, focusing instead on frozen forms (cf. Newport 1990). People seem unable to parse wholes into their component parts unless they learn those parts early in life. Despite an adult's limitations in perceptual learning, these findings also demonstrate remarkable flexibility in learning a second language long past the onset of any purported critical period. The chapters in this volume by Pisoni, Lively, and Logan and Werker show that adults can, with training, learn to discriminate nonnative contrasts with a surprising rate of success. 
Just as the limitations described above are in need of a principled theoretical account, the flexibility that allows perceptual learning throughout the lifespan is critical to any complete theory of speech perception. How can we account for this learning? Many of the contributors to this volume note that attentional mechanisms may be crucial in explaining perceptual learning. These mechanisms may be important both page_18 Page 19 for infants learning their first language and for adults learning a second language as well. The Role of Attention in Speech Perception Attention as a Developmental Mechanism Listeners must learn just what acoustic input is relevant to meaningful distinctions in their native language and come to attend to those language-specific characteristics of the input during speech perception while ignoring other information. Thus, one could think of the development of speech perception as involving attentional shifts. In addition, adult perceptual processing may involve attentional shifts depending on contextual factors. As listeners gain experience with a
language, they learn which dimensions of the input are relevant to deciphering a message. The chapters by Pisoni, Lively, and Logan and Jusczyk turn to a metaphor provided by Nosofsky (1986, 1987) to capture this shift. With experience, one stretches out important dimensions and shrinks unimportant dimensions. Consequently, small differences along important dimensions are now magnified and easy to discriminate. Differences along unimportant dimensions, on the other hand, are made smaller and are unlikely to be detected. In other words, experience tells one what information is important and draws one's attention to those aspects of the input. What is important may vary depending on the available context and on the listener's knowledge about various linguistic levels. The result of learning what information is important is a means of identifying linguistic units by weighting the input components according to how reliable and valid they have been in a listener's past experience (cf. Bates and MacWhinney 1989 for a syntax-processing model that makes use of these constructs). As listeners analyze a sound pattern, they interpret the acoustic input with respect to these learned weights. Consequently, they attend differentially to various aspects of the input in order to arrive at a meaningful percept. This allows listeners to impose a linguistic structure on a speech signal that lacks acoustic-phonetic invariance. Different languages employ different information in making meaningful distinctions, so various aspects of the input will be weighted differently across languages. Hence, experienced users of different languages may interpret the same acoustic input differently. As one gains more experience with a languagethat is, as one has many experiences with information that is reliable and valid in signaling meaningful distinctions in one's languagethe weights used during analysis of the input should become page_19 Page 20 more entrenched and difficult to change (Bates and Elman 1992). Thus, it becomes increasingly difficult to learn a second language. Nonetheless, with effort, even entrenched weight systems can be altered: adult listeners can learn to attend to previously ignored phonetic information in the signal and use it to determine that two inputs differ. This approach suggests why what had looked like perceptual loss of sensory discrimination abilities (Eimas 1975; Strange and Jenkins 1978) is really perceptual reorganization of perceptual biases. Werker (this volume) and Pisoni, Lively, and Logan (this volume) note that the early training studies failed to find perceptual learning in adults who tried to categorize nonnative contrasts. According to the view presented above, their weight systems for attending to and identifying acoustic-phonetic input were entrenched. Pisoni, Lively, and Logan point out in their chapter that the training paradigms in those early studies often used synthetic input even though the discrimination tests involved natural input. Further, the training sets generally used word-initial contrasts, while the test sets used contrasts in several word positions. If subjects learned new weighting systems based on the training input, the new system should not strongly influence their perception of the test set because the characteristics of the test set differed substantially from the training input. As a result, training would not affect perceptual learning of nonnative contrasts. The failure of early training studies had been attributed to a loss of sensory mechanisms. 
Pisoni, Lively, and Logan provided listeners with large amounts of experience across the broad range of situations in which they would actually hear nonnative contrasts (many speakers and many contexts using natural speech) and found that listeners' abilities to perceive nonnative contrasts improved. Listeners may have learned to shift weights in relevant ways. In this sense, listeners reorganized their perceptual processing such that they could now stretch previously irrelevant dimensions of the input (see Pisoni, Lively, and Logan, this volume; Werker, this volume). One consequence of this approach is that similar cognitive mechanisms can be used to describe perceptual change in both children and adults (see also Pisoni, Lively, and Logan, this volume), thus bringing coherence to a theory of speech perception. Adults, like children, learn to focus attention on specific properties of the speech signal to make linguistic judgments. This attentional shift occurs because the listener's language-specific knowledge changes. Thus, the processes by which a listener perceives speech might not differ greatly with age; rather, what changes is the knowledge a listener has to impose linguistic structure on the acoustic input (see also Goodman 1993; Nusbaum and Goodman, this volume). Of course, this leaves open how one comes to learn which dimensions are important in a given language, that is, how a listener comes to set the weights. Answering this question is
a critical challenge for the coming years. A second consequence of this approach is that similar cognitive mechanisms can be used to describe processing across linguistic levels. In their chapter, Nusbaum and Goodman point out that, rather than positing one sort of mechanism to explain speech perception and different sorts of mechanisms to explain other kinds of language processing, such as grammatical acquisition or word recognition, an investigation of attentional mechanisms may highlight commonalities in processing across these domains. Listeners may attend to information across levels of linguistic processing in establishing the weighting system. As a result, they will draw on information from a variety of sources (acoustic-phonetic, semantic, syntactic, and talker) to identify a segment or a word, and information from one level of linguistic analysis may constrain identification of units at another level. In essence, the weighting representation builds in variations due to context and level of analysis. Similar claims of a process of constraint satisfaction have been made with respect to learning a grammar (Bates and MacWhinney 1989), to machine learning (Bates and Elman 1992), and to the development of identification of complex visual forms (Aks and Enns 1992). In order to provide an account of perceptual processing, future research will have to explain how a listener learns which cues to attend to across linguistic levels and how listeners represent interlevel constraints.

In short, work presented in this volume suggests that future research should explore the role that shifts in attention may play in perceptual learning in children and adults and in relating processing across linguistic levels. Based on linguistic experience, listeners appear to develop a weighting scheme that directs their attention differentially to various aspects of the acoustic input. The acoustic input, of course, includes information from many linguistic levels simultaneously, so listeners distribute their attention to capture regularities across linguistic levels and, as a consequence, develop a multilevel weighting scheme, or distributed representation. Through experience with a language, this will result in increased knowledge about the structural properties of spoken language and how they interact (Nusbaum and Goodman, this volume): listeners assign weights to multiple sources of information to form a sort of contextual map. During speech perception, the focus and distribution of attention at any one point in time will depend on this knowledge or weight scheme.

Problems with Attentional Explanations

There are two important problems with this speculation on the role of attention in the development of speech perception. First, perceptual learning is limited in ways that cannot be accounted for only in terms of influences by one's native language. For example, bilingual learners might have a dominant language. No mechanism is inherent in this attentional explanation to address those limitations. Second, and related to the first problem, the above account unfortunately does not make clear how one comes to learn which dimensions are reliable and valid in signaling meaningful distinctions; that is, the attentional explanation does not show how people learn what is important for perception or along what dimensions perceptual reorganization takes place when learning a second language.
Addressing these issues, upon which we expand below, will be critical in accounting for the development of speech perception. The fact that perceptual learning later in life is less effective than early learning could be due in part to one's first language interfering with setting weights for a second language (hence the less-than-perfect discrimination performance on nonnative contrasts despite training; Pisoni, Lively, and Logan, this volume). However, the problem is not solely due to the established weight system for a first language, because late learning of a first language also seems to take a different path from early learning (Mayberry, this volume; Newport 1984, 1988, 1990). This suggests that, when an adult learns a second language, two factors may influence perceptual learning: structural knowledge of a native language and attention to a structural level different from that to which a child learner attends. The issue here is that, while attention to important information may enable the listener to structure the input in a way that facilitates recognition, we still need to explain why this apparent critical period exists and how it constrains later learning. One possibility is that neural maturation affects language learning later in life. A second possibility is that, as other learning occurs, one's perception of the world changes, and consequently language learning takes place in different machinery in an older learner than in an infant. Newport (1990) has suggested that late learners are less able to analyze forms into morphological components. In her chapter, Mayberry (this volume) makes a similar claim with respect to phonology. These findings imply that late learners have difficulty in determining what information is important in language processing, that is, what information should be heavily weighted.
It is still more difficult to account for Cutler et al.'s (1989, 1992) findings concerning segmentation strategies of bilinguals. Those findings suggest that the capacity to represent two languages is limited even when both languages are learned early. Unfortunately, we lack detailed information about learning in bilinguals, such as those in Cutler et al.'s study, so we cannot make strong inferences about why the weights of one language dominate the other. Clearly, this is a problem that awaits future empirical findings.

This leads to our second problem. We need some theory to explain how listeners at any point along the developmental span learn which information is reliable and valid. Jusczyk (this volume) suggests that listeners store very specific representations and over time abstract out regularities and central tendencies. Indeed, Kuhl et al. (1992) find that infants appear to attend to central tendencies very early: they recognize the prototypical vowels for their language at around 6 months of age. These vowel prototypes presumably emerge as a result of infants storing many exemplars of a vowel and learning the central tendencies of their language's vowel space from the acoustic information. Listeners do retain very specific information about acoustic input they hear, suggesting that they may indeed store exemplars. For example, Nygaard, Sommers, and Pisoni (1992) demonstrate that adults retain information concerning the speaker of spoken words. In other words, listeners may store not only particular words they hear on a list but the speaker characteristics as well. Kuhl's account also suggests that vowel categories are constructed from the bottom up; that is, infants' vowel categories may be independent of accompanying phonetic or semantic information. It is not clear that prototypes for consonant categories could develop this way. For example, Best and Jusczyk both propose in their chapters that children begin to develop a phonological system by first responding to the more global characteristics of the structure of speech that they hear in their environments and that they only gradually come to break down the speech signal into phonemelike segments. In other words, rather than a bottom-up building of phonetic features to words, the child (and adult learner) may move progressively from relatively large undifferentiated units to smaller, context-dependent units and, then, to phonological categories (Best, this volume; Jusczyk, this volume; see also Walley, Smith, and Jusczyk 1986). One implication of this is that higher-level context may provide important constraints in learning what is reliable and valid information in phoneme identification. It is interesting that late learners appear to store the larger units or the context-dependent units but seem to have difficulty in abstracting out the regularities and central tendencies necessary for learning abstract phoneme categories (Mayberry, this volume; Pisoni, Lively, and Logan, this volume). Perhaps the difference between early and late learners is that children are able to abstract out regularities from these more global, context-bound chunks, while adults, who perhaps focus more on meaning, are unable to unpack this information. If children learn abstract phoneme categories, then the question remains as to how that learning takes place. Two possibilities bear future exploration.
Kuhl et al.'s (1992) work suggests that prelinguistic infants form vowel prototypes and thus supports the first possibility: that listeners construct prototypes from the most frequent or central exemplars encountered in the acoustic input. However, vowels differ from consonants in duration and complexity, so consonant prototypes, if they do exist, might be based on more information than the bottom-up segmental input alone. Important constraining information might include not only acoustic qualities but also information regarding the context of occurrence (that is, surrounding phonemes), the lexical context, and other characteristics of the input. Jusczyk (this volume) has proposed that initial prototypes might be syllables. Further, when children learn words, they might represent lexical items holistically. These representations could ultimately support phoneme prototype representations as listeners reorganize sound patterns to facilitate discrimination between words. This system requires storage of a very large number of exemplars. Jusczyk shows that even infants can remember a phonetic category for two minutes and that this memory improves between 4 days and 2 months of age. While this period is very brief, perhaps by
later infancy they retain information about various linguistic units for much longer. These exemplars could be used to construct prototypical representations of linguistic entities. The second way a prototype could be established is that the listener updates his or her representation with each input. An initial prototype could be formed on the basis of a single instance. If subsequent input is identified as similar to some developing prototype on the basis of any acoustic or contextual information, that input will be used to update the stored category representation. Those features that are valid and reliable in identifying inputs for a particular language will be used to identify the speech and, therefore, to update the stored representation. In this way, the most constraining information across linguistic levels will be assigned the greatest weight over time.

The strength of positing prototype representations of phoneme categories is that they readily explain why it is difficult to perceive nonnative contrasts. If listeners represent prototypes and consequently weight various acoustic dimensions differentially, then when they hear a speech sound from another language, they will try to interpret it within the phonological system of their native language. Acoustic information that is similar to a native weighting scheme will be pulled into that prototype, a phenomenon labeled the perceptual magnet effect by Kuhl (1991). Hence, the nonnative sound will be identified as a native phoneme. However, with a great deal of experience listening to nonnative contrasts, particularly with training, new prototypes for the second language might be formed.

The weakness of positing prototype representations of phoneme categories is that it is difficult to account for interlevel contextual effects. Consider, for example, Ganong's (1980) finding that lexical context influences phoneme perception. The same acoustic information is interpreted as different phonemes depending on context (e.g., as a /d/ in the context of -ash, but as a /t/ in the context of -ask). Recent work (Hurlburt and Goodman 1992) finds that children may shift their responses to make them compatible with words even when the initial phoneme is clear natural speech. If listeners represent a fixed phoneme prototype, why does the context spur two different interpretations of the same acoustic information? Nusbaum and Goodman (see also Nusbaum and DeGroot 1990; Nusbaum and Henly 1992, in press) suggest in their chapter that, if the features that are important for identifying speech change with context, then perhaps listeners do not represent context-independent prototypes. Rather than learning a prototype representation, one might, in distributing attention across the acoustic input and the language situation, develop a theory based upon a wide variety of context-dependent input (see also Medin 1989; Murphy and Medin 1985). Such theories of phoneme categories may consist of propositions concerning the function of the category, an abstract specification of its form and how context operates on it, and its relationship to other linguistic knowledge. Indeed, listeners do represent context-dependent phonetic categories (Miller and Volaitis 1989; Volaitis and Miller 1992). They may develop context-dependent phoneme recognition processes based on multiple linguistic levels as well.
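To make the prototype-updating account described above more concrete, the following fragment shows one minimal way such an incremental scheme might work. It is a schematic illustration only: the two-dimensional acoustic space, the seed tokens, and the nearest-prototype assignment rule are hypothetical simplifications, not a model endorsed by any of the chapters discussed here.

    import math

    class Prototype:
        # Running prototype for a category: the (reliability-weighted)
        # mean of every input assigned to it so far.
        def __init__(self, first_exemplar):
            self.center = list(first_exemplar)  # formed from a single instance
            self.n = 1.0

        def update(self, exemplar, reliability=1.0):
            # More reliable and valid cues move the prototype more.
            self.n += reliability
            for i, value in enumerate(exemplar):
                self.center[i] += reliability * (value - self.center[i]) / self.n

    def assign(prototypes, sound):
        # An input is captured by the nearest prototype, which then pulls
        # the category toward it (cf. Kuhl's perceptual magnet effect).
        return min(prototypes, key=lambda p: math.dist(p.center, sound))

    # Hypothetical vowel space (e.g., normalized F1 x F2 values).
    i_like = Prototype((0.2, 0.9))
    a_like = Prototype((0.8, 0.3))

    for token in [(0.25, 0.85), (0.15, 0.95), (0.22, 0.88)]:
        assign([i_like, a_like], token).update(token)

    # A nonnative sound falling near an entrenched prototype is
    # identified as that native category.
    print(assign([i_like, a_like], (0.35, 0.75)) is i_like)  # True

Nothing in this sketch, of course, addresses the interlevel context effects just discussed; a fixed category center is exactly what the Ganong-type findings call into question.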
Development of speech perception would require attention to constraining information across contexts to learn about the structure, nature, or function of linguistic categories.

Conclusion

The goal of this book is to examine how children develop the ability to perceive their native language. In so doing, perhaps light will be shed on perceptual processing in mature listeners as well. The mechanisms by which we learn to interpret acoustic input are unclear. In fact, not much research has directly addressed the question of how perceptual categories are learned. In the past, perceptual processing in infants generally has been treated as independent of speech perception in adults. The dynamic mechanisms important in the transition from the infant's early sensory abilities to perception of linguistic categories were not of central interest. A second aspect of earlier research that may have obscured the nature of dynamic mechanisms is that much of the exploration of speech perception has examined processing within a single linguistic level. The constraints on the acoustic signal provided by interactions with other levels have not been as
thoroughly investigated. Considering these issues may seem to add noise and complexity to a question that has already proved difficult to answer. However, we hope that turning to these areas may add clarity by providing an account of how listeners interpret the language they hear regardless of the context in which it occurs.

The chapters in this volume describe the nature of learning throughout the lifespan, both our flexibility and our limitations, and speculate about the mechanisms by which that learning occurs. The focus on learning has highlighted the need to incorporate dynamic mechanisms in processing, both to explain development and to explain processing across the wide array of contexts mature listeners regularly encounter. These chapters suggest that speech perception does indeed develop. Further, it develops in a bidirectional interplay with higher-level lexical and phrasal structures. The findings reported in this volume suggest that it is a mistake to view speech perception as an isolated domain, running on its own autonomous principles and thus independent of other properties of language. Clearly, theories of the development of speech perception must incorporate these findings in order to account for how the infant's impressive innate perceptual abilities are modified by experience. What is more, a consideration of the theoretical mechanisms that account for these developmental findings invites speculation that similar mechanisms may also account for the flexibility demonstrated by adults identifying speech in its endlessly varying contexts.

References

Aks, D. J. and Enns, J. T. (1992). Visual search for direction of shading is influenced by apparent depth. Perception and Psychophysics, 52, 63-74.
Aslin, R. N., Pisoni, D. B., and Jusczyk, P. W. (1983). Auditory development and speech perception in infancy. In M. M. Haith and J. J. Campos (eds.), Carmichael's manual of child psychology, vol. 2: Infancy and the biology of development (4th ed., pp. 573-687). New York: Wiley.
Bates, E., Bretherton, I., and Snyder, L. (1988). From first words to grammar. Cambridge: Cambridge University Press.
Bates, E. A. and Elman, J. L. (1992). Connectionism and the study of change (Center for Research in Language Technical Report #9202). La Jolla, Ca.: University of California, San Diego.
Bates, E. A. and MacWhinney, B. (1989). Functionalism and the competition model. In B. MacWhinney and E. Bates (eds.), The crosslinguistic study of sentence processing. New York: Cambridge University Press.
Best, C. T., McRoberts, G. W., and Sithole, N. N. (1988). The phonological basis of perceptual loss for non-native contrasts: Maintenance of discrimination among Zulu clicks by English-speaking adults and infants. Journal of Experimental Psychology: Human Perception and Performance, 14, 345-360.
Blasdell, R. and Jensen, P. (1970). Stress and word position as determinants of imitation in first language learners. Journal of Speech and Hearing Research, 13, 193-202.
Burki-Cohen, J., Grosjean, F., and Miller, J. L. (1989). Base-language effects on word identification in bilingual speech: Evidence from categorical perception experiments. Language and Speech, 32, 355-371.
Clark, E. (1983). Meanings and concepts. In J. Flavell and E. Markman (eds.), Handbook of child psychology, vol. 3 (P. Mussen, series ed.) (pp. 787-840). New York: Wiley.
Cutler, A. (1976). Phoneme-monitoring reaction time as a function of preceding intonation contour. Perception and Psychophysics, 20, 55-60.
Cutler, A.
and Norris, D. (1988). The role of strong syllables in segmentation for lexical access. Journal of Experimental Psychology: Human Perception and Performance, 14, 113-121.
Cutler, A. and Swinney, D. A. (1987). Prosody and the development of comprehension. Journal of Child Language, 14, 145-167.
Cutler, A., Mehler, J., Norris, D., and Segui, J. (1986). The syllable's differing role in the segmentation of French and English. Journal of Memory and Language, 25, 385-400.
Cutler, A., Mehler, J., Norris, D., and Segui, J. (1989). Limits on bilingualism. Nature, 340, 229-230.
Cutler, A., Mehler, J., Norris, D., and Segui, J. (1992). The monolingual nature of speech segmentation by bilinguals. Cognitive Psychology, 24, 381-410.
du Preez, P. (1974). Units of information in the acquisition of language. Language and Speech, 17, 369-376.
Eimas, P. D. (1975). Developmental studies in speech perception. In L. B. Cohen and P. Salapatek (eds.), Infant perception: From sensation to perception, vol. 2. New York: Academic Press.
Eimas, P. D., Siqueland, E. R., Jusczyk, P., and Vigorito, J. (1971). Speech perception in infants. Science, 171, 303-306.
Elman, J. and McClelland, J. (1986). Exploiting lawful variability in the speech wave. In J. S. Perkell and D. H. Klatt (eds.), Invariance and variability in speech processes. Hillsdale, N.J.: Erlbaum.
Ferguson, C. A. and Farwell, C. B. (1975). Words and sounds in early language acquisition. Language, 51, 419-439.
Fernald, A. (1984). The perceptual and affective salience of mothers' speech to infants. In L. Feagan, C. Garvey, and R. Golinkoff (eds.), The origins and growth of communication. Norwood, N.J.: Ablex.
Fowler, C. A. (1986). An event approach to the study of speech perception from a direct-realist perspective. Journal of Phonetics, 14, 3-28.
Fowler, C. A. (1989). Real objects of speech perception: A commentary on Diehl and Kluender. Ecological Psychology, 1, 145-160.
Fowler, C. A. (1991). Sound-producing sources as objects of perception: Rate normalization and nonspeech perception. Journal of the Acoustical Society of America, 88, 1236-1249.
Ganong, W. F. (1980). Phonetic categorization in auditory word perception. Journal of Experimental Psychology: Human Perception and Performance, 6, 110-125.
Gibson, J. J. (1966). The senses considered as perceptual systems. Boston: Houghton Mifflin.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
Gleitman, L. R. and Wanner, E. (1982). Language acquisition: The state of the state of the art. In E. Wanner and L. R. Gleitman (eds.), Language acquisition: The state of the art. Cambridge: Cambridge University Press.
Goodman, J. C. (1993). The development of context effects in spoken word recognition. Submitted for publication.
Grosjean, F. (1988). Exploring the recognition of guest words in bilingual speech. Language and Cognitive Processes, 3, 233-274.
Grosjean, F. (1989). Neurolinguists, beware! The bilingual is not two monolinguals in one person. Brain and Language, 36, 3-15.
Hirsh-Pasek, K., Kemler Nelson, D. G., Jusczyk, P. W., Wright Cassidy, K., Druss, B., and Kennedy, L. (1987). Clauses are perceptual units for young infants. Cognition, 26, 269-286.
Hurlburt, M. S. and Goodman, J. C. (1992). The development of lexical effects on children's phoneme identifications. In J. J. Ohala, T. M. Nearey, B. L. Derwing, M. M. Hodge, and G. E. Wiebe (eds.), ICSLP 92 proceedings: 1992 international conference on spoken language processing (pp. 337-340). Banff, Alberta, Canada: University of Alberta.
Jusczyk, P. W. (1981). Infant speech perception: A critical appraisal. In P. D. Eimas and J. L. Miller (eds.), Perspectives on the study of speech (pp. 113-164). Hillsdale, N.J.: Erlbaum.
Jusczyk, P. W. (1985). On characterizing the development of speech perception. In J. Mehler and R. Fox (eds.), Neonate cognition: Beyond the blooming, buzzing confusion (pp. 199-229). Hillsdale, N.J.: Erlbaum.
Katz, N., Baker, E., and MacNamara, J. (1974). What's in a name? A study of how children learn common and proper names. Child Development, 45, 469-473.
Kemler Nelson, D. G., Hirsh-Pasek, K., Jusczyk, P. W., and Cassidy, K. W. (1989). How the prosodic cues in motherese might assist language learning. Journal of Child Language, 16, 55-68.
Kuhl, P. K. (1979). Speech perception in early infancy: Perceptual constancy for spectrally dissimilar vowel categories. Journal of the Acoustical Society of America, 66, 374-408.
Kuhl, P. K. (1991). Human adults and human infants show a "perceptual magnet effect" for the prototypes of speech categories, monkeys do not. Perception and Psychophysics, 50, 93-107.
Kuhl, P. K. and Meltzoff, A. (1984). The intermodal representation of speech in infants. Infant Behavior and Development, 7, 361-381.
Kuhl, P. K., Williams, K. A., Lacerda, F., Stevens, K. N., and Lindblom, B. (1992). Linguistic experience alters phonetic perception in infants by 6 months of age. Science, 255, 606-608.
Lane, H. (1965). The motor theory of speech perception: A critical review. Psychological Review, 72, 275-309.
Lenneberg, E. H. (1967). Biological foundations of language. New York: Wiley.
Liberman, A. M. and Mattingly, I. G. (1985). The motor theory of speech perception revised. Cognition, 21, 1-36.
Liberman, A. M., Cooper, F. S., Shankweiler, D. P., and Studdert-Kennedy, M. (1967). Perception of the speech code. Psychological Review, 74, 431-461.
Lively, S. E., Pisoni, D. B., and Logan, J. S. (1991). Some effects of training Japanese listeners to identify English /r/ and /l/. In Y. Tohkura (ed.), Speech perception, production, and linguistic structure. Tokyo: OHM Publishing.
Logan, J. S., Lively, S. E., and Pisoni, D. B. (1991). Training Japanese listeners to identify /r/ and /l/: A first report. Journal of the Acoustical Society of America, 89(2), 874-886.
McDonough, L. and Goodman, J. C. (1993). Toddlers use linguistic context to learn new words. Poster presented at the biennial meeting of the Society for Research in Child Development, New Orleans, La., March 1993.
McGurk, H. and MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264, 746-748.
Markman, E. (1991). The whole object, taxonomic and mutual exclusivity assumptions as initial constraints on word meanings. In J. Byrnes and S. Gelman (eds.), Perspectives on language and cognition: Interrelations in development. Cambridge: Cambridge University Press.
Marslen-Wilson, W. D. (1975). Sentence perception as an interactive parallel process. Science, 189, 226-228.
Marslen-Wilson, W. D. (1987). Functional parallelism in spoken word-recognition. In U. H. Frauenfelder and L. K. Tyler (eds.), Spoken word recognition. Cambridge, Mass.: MIT Press.
Marslen-Wilson, W. D. and Tyler, L. K. (1980). The temporal structure of spoken language understanding. Cognition, 8, 1-71.
Marslen-Wilson, W. D. and Welsh, A. (1978). Processing interactions and lexical access during word recognition in continuous speech. Cognitive Psychology, 10, 29-63.
Massaro, D. W. (1987). Speech perception by ear and eye: A paradigm for psychological inquiry. Hillsdale, N.J.: Erlbaum.
Massaro, D. W. and Cohen, M. M. (1983). Evaluation and integration of visual and auditory information in speech perception. Journal of Experimental Psychology: Human Perception and Performance, 9, 753-771.
Mattingly, I. G., Liberman, A. M., Syrdal, A. K., and Halwes, T. (1971). Discrimination in speech and nonspeech modes. Cognitive Psychology, 2, 131-157.
Mayberry, R. I. and Eichen, E. B. (1991). The long-lasting advantage of learning sign language in childhood: Another look at the critical period for language acquisition. Journal of Memory and Language, 30, 486-512.
Mayberry, R. I. and Fischer, S. D. (1989). Looking through phonological shape to sentence meaning: The bottleneck of non-native sign language processing. Memory and Cognition, 17, 740-754.
Medin, D. L. (1989). Concepts and conceptual structure. American Psychologist, 44, 1469-1481.
Menn, L. (1983). Development of articulatory, phonetic, and phonological capabilities. In B. Butterworth (ed.), Language production, vol. 2. London: Academic Press.
Menyuk, P. and Menn, L. (1979). Early strategies for the perception and production of words and sounds. In P. Fletcher and M. Garman (eds.), Language acquisition: Studies in first language development. Cambridge: Cambridge University Press.
Miller, J. L. and Volaitis, L. E. (1989). Effects of speaking rate on the perceived internal structure of phonetic categories. Perception and Psychophysics, 46, 505-512.
Mills, D. L., Coffey, S. A., and Neville, H. J. (1991). Language abilities and cerebral specializations in 10- to 20-month-olds. Paper presented at the biennial meeting of the Society for Research in Child Development, Seattle, Wash., April 1991.
Morton, J. (1969). Interaction of information in word recognition. Psychological Review, 76, 165-178.
Murphy, G. L. and Medin, D. L. (1985). The role of theories in conceptual coherence. Psychological Review, 92, 289-316.
Nelson, K. (1973). Structure and strategy in learning to talk. Monographs of the Society for Research in Child Development, 38 (2, Serial No. 149).
Newport, E. L. (1984). Constraints on learning: Studies in the acquisition of American Sign Language. Papers and Reports on Child Language Development, 23, 1-22.
Newport, E. L. (1988). Constraints on learning and their role in language acquisition: Studies of the acquisition of American Sign Language. Language Sciences, 10, 147-172.
Newport, E. L. (1990). Maturational constraints on language learning. Cognitive Science, 14, 11-28.
Nosofsky, R. M. (1986). Attention, similarity, and the identification-categorization relationship. Journal of Experimental Psychology: General, 115, 39-57.
Nosofsky, R. M. (1987). Attention and learning processes in the identification and categorization of integral stimuli. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13, 87-108.
Nusbaum, H. C. and DeGroot, J. (1990). The role of syllables in speech perception. In M. S. Ziolkowski, M. Noske, and K. Deaton (eds.), Papers from the parasession on the syllable in phonetics and phonology. Chicago: Chicago Linguistic Society.
Nusbaum, H. C. and Henly, A. S. (1992). Constraint satisfaction, attention, and speech perception: Implications for theories of word recognition. In M. E. H. Schouten (ed.), The auditory processing of speech: From sounds to words (pp. 339-348). Berlin: Mouton de Gruyter.
Nusbaum, H. C. and Henly, A. S. (in press). Understanding speech perception from the perspective of cognitive psychology. In J. Charles-Luce, P. A. Luce, and J. R. Sawusch (eds.), Spoken language processing. Norwood, N.J.: Ablex.
Nygaard, L. C., Sommers, M. S., and Pisoni, D. B. (1992). Effects of speaking rate and talker variability on the representation of spoken words in memory. In J. J. Ohala, T. M. Nearey, B. L. Derwing, M. M. Hodge, and G. E. Wiebe (eds.), ICSLP 92 proceedings: 1992 international conference on spoken language processing (pp. 337-340). Banff, Alberta, Canada: University of Alberta.
Pastore, R. E. (1981). Possible psychoacoustic factors in speech perception. In P. D. Eimas and J. L. Miller (eds.), Perspectives on the study of speech. Hillsdale, N.J.: Erlbaum.
Peters, A. M. (1983). Units of language acquisition. Cambridge: Cambridge University Press.
Pilon, R. (1981). Segmentation of speech in a foreign language. Journal of Psycholinguistic Research, 10, 113-122.
Pisoni, D. B. (1973). Auditory and phonetic memory codes in the discrimination of consonants and vowels. Perception and Psychophysics, 13, 253-260.
Pisoni, D. B. (1977). Identification and discrimination of the relative onset time of two component tones: Implications for voicing perception in stops. Journal of the Acoustical Society of America, 61, 1352-1361.
Pisoni, D. B., Aslin, R. N., Perey, A. J., and Hennessy, B. L. (1982). Some effects of laboratory training on identification and discrimination of voicing contrasts in stop consonants. Journal of Experimental Psychology: Human Perception and Performance, 8, 297-314.
Repp, B. H. (1982). Phonetic trading relations and context effects: New experimental evidence for a speech mode of perception. Psychological Bulletin, 92, 81-110.
Samuel, A. G. (1977). The effect of discrimination training on speech perception: Noncategorical perception. Perception and Psychophysics, 22, 321-330.
Slobin, D. I. (1982). Universal and particular in the acquisition of language. In E. Wanner and L. R. Gleitman (eds.), Language acquisition: The state of the art. Cambridge: Cambridge University Press.
Stevens, K. N. and Blumstein, S. E. (1981). The search for invariant acoustic correlates of phonetic features. In P. D. Eimas and J. L. Miller (eds.), Perspectives on the study of speech. Hillsdale, N.J.: Erlbaum.
Stevens, K. N. and Halle, M. (1967). Remarks on analysis by synthesis and distinctive features. In W. Wathen-Dunn (ed.), Models for the perception of speech and visual form. Cambridge, Mass.: MIT Press.
Strange, W. and Jenkins, J. (1978). Role of linguistic experience in the perception of speech. In R. D. Walk and H. L. Pick (eds.), Perception and experience. New York: Plenum Press.
Studdert-Kennedy, M. (1986). Sources of variability in early speech development. In J. S. Perkell and D. H. Klatt (eds.), Invariance and variability in speech processes (pp. 58-76). Hillsdale, N.J.: Erlbaum.
Tees, R. C. and Werker, J. F. (1984). Perceptual flexibility: Maintenance or recovery of the ability to discriminate nonnative speech sounds. Canadian Journal of Psychology, 38, 579-590.
Tyler, L. K. and Marslen-Wilson, W. D. (1981). Children's processing of spoken language. Journal of Verbal Learning and Verbal Behavior, 20, 400-416.
Volaitis, L. E. and Miller, J. L. (1992). Phonetic prototypes: Influence of place of articulation and speaking rate on the internal structure of voicing categories. Journal of the Acoustical Society of America, 92, 723-735.
Wakefield, J. A., Doughtie, E. B., and Yom, B.-H. L. (1974). The identification of structural components of an unknown language. Journal of Psycholinguistic Research, 3, 261-269.
Walley, A. C., Smith, L. B., and Jusczyk, P. W. (1986). The role of phonemes and syllables in the perceived similarity of speech sounds for children. Memory and Cognition, 14, 220-229.
Werker, J. F. and Lalonde, C. E. (1988). Cross-language speech perception: Initial capabilities and developmental change. Developmental Psychology, 24, 672-683.
Werker, J. F. and Tees, R. C. (1983). Developmental changes across childhood in the perception of non-native speech sounds. Canadian Journal of Psychology, 37, 278-286.
PART I

Innate Sensory Mechanisms and Constraints on Learning
Chapter 2

Observations on Speech Perception, Its Development, and the Search for a Mechanism

Joanne L. Miller and Peter D. Eimas

A fundamental issue in the field of speech perception is how the listener derives a phonetic representation from the acoustic signal of speech. This is not a simple matter. Considerable research over the past few decades has established rather convincingly that the mapping between the acoustic signal and the sequence of consonants and vowels (the phonetic segments) that define the lexical items of the language is far from straightforward (e.g., Liberman et al. 1967; Liberman and Mattingly 1985; but see also Stevens and Blumstein 1981). The acoustic form of any given word typically varies substantially when spoken by different speakers, at different rates of speech, or with different emotional force, and the acoustic form of any given phonetic segment can vary dramatically as a function of the surrounding segments. A theory of speech perception must explicate the precise nature of the complex yet systematic mapping between acoustic signal and phonetic structure, and it must describe the perceptual mechanisms that allow the listener to recover the phonetic structure of utterances during language processing. Furthermore, to be complete, the theory must give an account of the developmental course of the ability to perceive speech. Over the years, considerable progress has been made in describing both the mapping between acoustic and phonetic structure for adult listeners and the precursors of this mapping during early infancy. It is now clear that young infants come to the task of language acquisition with highly sophisticated abilities to process speech and that critical aspects of the mapping between acoustic and phonetic structures in adults find their roots in the mapping between acoustic and prelinguistic structures in infants. Progress on the issue of mechanism has been much slower in coming. Indeed, very
little is known about the nature of the perceptual mechanisms that allow the listener, whether adult or infant, to perceive speech. This is not due to the lack of attention to this problem, since considerable research over the past decades has been directed precisely to the issue of underlying mechanism. However, little real progress has been made, and the issue remains highly controversial. In this chapter, we illustrate this state of affairs: the overall success in describing the nature of the mapping between acoustic and phonetic structure in adults and the origins for this mapping in infancy, coupled with the relative lack of progress in discovering the nature of the perceptual mechanisms that underlie speech perception. We have organized our discussion in terms of three sections. First, we depict the context-dependent nature of the mapping between acoustic signal and phonetic structure in adult and infant listeners, using as a case study contextual variation due to a change in speaking rate. It is such context-dependency (i.e., lack of invariance between acoustic property and phonetic unit) that has fueled much of the debate surrounding underlying mechanism. Second, we consider two highly controversial and alternative theoretical accounts of the mechanism underlying the listener's ability to recover phonetic structure despite context-dependency, again using rate-dependent processing as a case study. Finally, we offer some observations on why the issue of mechanism has proved to be so intractable, and we speculate on the kinds of data that might lead to progress on this issue.

The Phenomenon: Context-Dependent Speech Processing in Adults and Infants

It is well established that listeners do not process speech in a strictly linear manner, acoustic segment by acoustic segment, with each acoustic segment associated in a one-to-one fashion with a given phonetic segment. Rather, a single segment of the acoustic signal typically contains information in parallel for more than one phonetic segment, and conversely, the information for a given phonetic segment is often distributed across more than one acoustic segment (Liberman et al. 1967). A consequence of this complex mapping is that speech perception is largely context-dependent. One form of this context-dependency is that later-occurring information in the speech signal often contributes to the processing of an earlier-occurring acoustic property.

A case in point, and the example considered in this chapter, concerns the distinction in manner of articulation between syllable-initial /b/ and /w/, as in the syllables /ba/ and /wa/. A major distinguishing characteristic of /b/ and /w/ is the abruptness of the consonantal onset, with the onset of /b/ being considerably more abrupt than that of /w/. One parameter of the consonantal onset is the duration of the initial formant transitions into the following vowel: /b/ is typically produced with short transitions and /w/ with long transitions. Moreover, it has been known for some time that listeners use transition duration to distinguish the two consonants (Liberman et al. 1956). With all other parameters set appropriately and held constant, listeners hear syllables with short transitions as beginning with /b/ and those with long transitions as beginning with /w/. Thus, listeners divide a continuum of syllables varying in transition duration into two categories, /b/ and /w/.
Furthermore, as has been shown for other consonantal contrasts, discrimination of two stimuli drawn from the continuum is considerably better if the two stimuli belong to different categories (i.e., one is labeled /b/ and the other /w/) than if they belong to a single category (i.e., both are labeled /b/ or both /w/) (Miller 1980). Therefore, ample evidence exists that transition duration maps onto the phonetic categories /b/ and /w/. This mapping, however, is not invariant. Miller and Liberman (1979) showed that precisely which transition durations are mapped onto /b/ and which onto /w/ depends on the duration of the syllable, a property known to vary with changes in speaking rate. They created a set of /ba/-/wa/ speech series such that, within each series, the syllable varied in transition duration from short (16 msec) to long (64 msec), so as to range perceptually from /ba/ to /wa/.1 Across series, the syllables differed in overall duration, which ranged from 80 to 296 msec. The change in syllable duration was accomplished by extending the steady-state vowel portion of the syllables. Subjects were presented with stimuli in random order and were asked to categorize each as /ba/ or /wa/.
The findings, as shown in figure 2.1, were very clear. As the syllable became longer, the crossover point between predominantly /b/ and predominantly /w/ responses, that is, the location of the phonetic category boundary, shifted systematically toward a longer transition duration. In other words, listeners treated transition duration not in an absolute manner but in relation to the duration of the syllable.2 We should note that evidence for such relational processing is not limited to the /b/-/w/ contrast but has been found for many other phonetic contrasts, which are specified by a variety of acoustic properties. Examples are a voicing contrast specified by voice-onset-time (VOT), a complex property known to distinguish voicing contrasts in many languages (Lisker and Abramson 1964, 1970); a single-geminate contrast specified by the duration of consonantal closure; and a vowel contrast specified by the duration of vocalic information (see Miller 1981 for review).
[Figure 2.1 Effect of syllable duration on the perception of the distinction between /b/ and /w/, specified by transition duration. From J. L. Miller and A. M. Liberman (1979), "Some effects of later-occurring information on the perception of stop consonant and semivowel," Perception & Psychophysics, 25, 457-465, published by The Psychonomic Society, Inc.]

Miller and Liberman interpreted the shift in boundary location as reflecting an adjustment on the part of the listener for a change in articulatory rate. As speakers slow their rate of speech, the overall syllable duration of /ba/ and /wa/ increases, as does the duration of the initial formant transitions (Miller and Baer 1983). Miller and Liberman suggested that, in their perceptual experiment, the increase in syllable duration specified a slower rate of articulation and that listeners adjusted accordingly by requiring a longer transition to specify /w/ as opposed to /b/. In other words, listeners processed the syllables in a rate-dependent fashion. A critical issue that immediately arises is the origin of this type of context dependency in speech perception. On the one hand, it could be that the context-dependent phonetic categorizations of adults are the consequence of considerable experience with phonetic structures across variation in speaking rate and have developed over the course of language acquisition. On the other hand, these complex categorizations could find their basis in innately
given processes that are operative very early in life, before language is acquired. Eimas and Miller (1980; Miller and Eimas 1983) investigated this issue by testing young infants, three to four months of age, on the perception of the /b/-/w/ contrast. It had been known since the early work of Eimas and his colleagues (Eimas et al. 1971) that young infants not only have the ability to make fine discriminations in acoustic properties that are phonetically relevant but also that they process speech in terms of categories. These prelinguistic perceptual categories are presumably the precursors of adult phonetic categories. The basic evidence for infant categorization comes from discrimination experiments in which infants are tested on their ability to discriminate pairs of stimuli drawn from a stimulus continuum. Consider, for example, the Eimas et al. (1971) experiment. In this study, infants were tested on their ability to perceive a contrast in voicing between syllable-initial /b/ and /p/ in the syllables /ba/ and /pa/. The voicing distinction was specified by VOT. Eimas et al. found that, when infants were tested on pairs of stimuli, their ability to discriminate the members of the pair depended on the voicing category assigned to the stimuli by adult listeners. Two stimuli that were heard as /ba/ and /pa/ by adults could be readily discriminated by the infants, whereas two stimuli heard as both /ba/ or as both /pa/ by adults could not. Furthermore, this was true even though the two stimuli within a pair always differed from each other by a constant increment of VOT. This pattern of findings was taken as evidence that young infants perceive phonetically relevant acoustic information (VOT in this case) in terms of perceptual categories, categories which form the basis of adult phonetic categories. The phenomenon of infant categorization has been replicated many times for numerous phonetic contrasts (for a review, see Jusczyk 1986). We would expect that, as for the voicing contrast, infants would perceive stimuli from a given /ba/-/wa/ series in terms of perceptual categories, discriminating stimuli that cross the perceptual boundary and failing to discriminate stimuli that are drawn from a single category.

The issue addressed by Eimas and Miller was whether the /ba/-/wa/ infant categories, like those of the adult, are specified not only by transition duration but by transition duration in relation to syllable duration. They examined this issue by testing perception of six syllables drawn from the stimulus set used by Miller and Liberman (1979), three syllables from the 80-msec series and three from the 296-msec series. In each case, the three syllables had transition durations of 16, 40, and 64 msec. According to the Miller and Liberman data, adults perceive the 16-msec syllables from both series as /ba/ and the 64-msec syllables from both series as /wa/. However, they perceive the 40-msec syllable from the 80-msec series as /wa/, but the 40-msec syllable from the 296-msec series as /ba/. This is because the adult category boundary falls between 16 and 40 msec on the 80-msec series and between 40 and 64 msec on the 296-msec series. Infants were tested on four pairs of stimuli: the 16-40 pair from both series and the 40-64 pair from both series.
If infants process transition duration information in a context-dependent manner, as do adults, then they should show evidence of discriminating the 16-40 pair from the 80-msec series and the 40-64 pair from the 296-msec series but not the other two pairs.

The testing procedure was a modification of the high-amplitude-sucking procedure first used by Eimas et al. (1971) to assess infant speech perception. Briefly, after a one-minute baseline reading of sucking responses was taken, one syllable of a given pair was presented to the infant contingent upon sucking behavior. This typically resulted in an initial increase in sucking responses, presumably due to the infant's proclivity to seek stimulation, followed by a subsequent decline in sucking rate, assumed to be due to the stimulus losing its informational (i.e., reinforcing) qualities. After a fixed seven-minute period of this contingency, the other stimulus of the pair was presented for four minutes, again contingent upon sucking responses. A renewed increase in sucking behavior was taken as evidence of discrimination and was presumably due to the infant's interest in novel stimulation, since there was no such increase in control infants who heard the same syllable of a pair throughout the procedure. Four experimental groups of infants were tested, one on each of the four pairs described above, along with a group of control infants who heard one of the six syllables throughout the session. The data showed very clear evidence of context-dependent processing: only those contrasts that crossed the adult /b/-/w/ boundary were discriminated by the infants. Thus infants, like adults, process speech in a context-dependent manner, with earlier-occurring information (transition duration) treated in relation to later-occurring information (overall syllable duration).

Findings of this nature impose important constraints on a theory of speech perception. The data strongly suggest that the context-dependency inherent in the mapping between acoustic signal and phonetic structure in adults is not a consequence of learning a
language, but rather finds its origins in
context-dependent perceptual processing in early infancy. Any viable theory must accommodate this finding.

The Mechanism: Speech Specific or General Purpose?

The perception of the /b/-/w/ distinction provides an example of both the context-dependent mapping known to be pervasive in adult speech perception and the precursors of this type of mapping in early infancy. The thorny issue, to which we now turn, is the nature of the perceptual mechanism that underlies the ability of adults and infants to process speech in such a context-dependent manner. Over the years, the issue of mechanism has been discussed primarily in terms of a debate over whether the mechanisms underlying phonetic perception form a system specialized for the processing of speech or whether processing is accomplished by the general auditory system. One of the foremost exemplars of the former position is the motor theory of speech perception proffered by Liberman and his colleagues at Haskins Laboratories. The basic tenets of the theory are that there exists a specialized, modular processing system for phonetic perception, that the system operates in terms of principles of speech production, and that the system is innately specified (for a recent version of the theory, see Liberman and Mattingly 1985). The interpretation that was offered by Miller and Liberman to explain their results can be readily set within this framework. As noted above, they proposed that the shift in /b/-/w/ boundary location with a change in overall syllable duration reflected the listener's normalization for speaking rate. Specifically, a slowing of articulatory rate produces an increase in the overall duration of the syllable, as well as a lengthening of the initial formant transitions (Miller and Baer 1983). This results in the value of transition duration that optimally separates the transition-duration distributions for /b/ and /w/ shifting toward longer transitions with an increase in syllable duration. According to the motor theory, listeners have tacit knowledge of these consequences of a change in rate during production, and the perceptual mechanism makes use of this knowledge during perception to normalize appropriately for the changes in rate. The infant data of Eimas and Miller can also be easily accommodated within this framework because, as noted above, one assumption of the theory is that the specialized phonetic processor is innately given.

However, there is a major alternative account of the /b/-/w/ boundary shift. It is based on the view that phonetic perception is not the result of a specialized processing system, but rather derives from the operation of the general auditory system itself. Such a view has been proposed by a number of investigators and has been explicitly tested with respect to the /b/-/w/ contrast. Two main strategies have been used, each of which has a long history in speech research. The first strategy has been to examine the response patterns of nonhuman animals to speech stimuli. To the extent that the animal response patterns parallel those of humans, it is argued that specialized processing mechanisms are not required to account for human perception (see Kuhl 1986). Recently, Stevens, Kuhl, and Padden (1988) used this approach to investigate whether the /b/-/w/-context effect derives from species-specific processing or more general auditory mechanisms.
They did so by testing Japanese macaque monkeys for their ability to discriminate the same pairs of stimuli used by Eimas and Miller (1980) in their study of human infants, described above. Monkeys were tested on two pairs of syllables from the 80-msec /ba/-/wa/ series and two pairs from the 296-msec /ba/-/wa/ series, specifically those with transition durations of 16 and 40 msec and those with transition durations of 40 and 64 msec. The monkey data patterned similarly to those of the human infants: discrimination performance was relatively good for the 16-40 msec pair from the 80-msec series and for the 40-64 msec pair from the 296-msec series. These results were taken as evidence that the syllable-duration effect in humans may be due to auditory processing in general and not to speech-specific processing. Although the parallel between the human and monkey data is indeed striking and certainly consistent with a general auditory account, it should nonetheless be cautioned that parallels in behavior across animal species do not
necessarily imply the same underlying mechanism.

The second major strategy that has been used to investigate the mechanism issue involves creating nonspeech analogs to the speech patterns in question and then assessing whether perception of these analogs mirrors perception of the speech patterns in critical respects. To the extent that it does, the conclusion is drawn that the perception of the speech patterns, like that of the nonspeech analogs, is accomplished by auditory processes that apply equally well to speech and nonspeech. No specialized speech processing mechanism need be invoked. Pisoni, Carrell, and Gans (1983) tested this idea using two /ba/-/wa/ series, one with short syllables (80 msec) and the other with long syllables (295 msec). These were closely patterned after a subset of the stimuli employed by Miller and Liberman (1979) and, in a replication study, showed the same basic boundary shift when tested on a group of listeners; that is, the category boundary was located at a longer transition duration for the 295-msec as compared to the 80-msec stimuli. To assess the mechanism issue, Pisoni, Carrell, and Gans also generated a set of sinewave stimuli that were patterned directly after the speech stimuli but were not themselves heard as speech. The stimuli were created by replacing each of the three formants in the speech syllables with a sinewave located at the center frequency of the formant. The nonspeech stimuli were given to listeners to label as having an onset that was either abrupt (corresponding to the /b/-end of the series) or gradual (corresponding to the /w/-end of the series). Listeners succeeded in dividing each series into two categories, abrupt and gradual. Of particular importance is that, as in the case of speech, the boundary for the 80-msec series was located at a shorter transition duration than that for the 295-msec series.

And there was a further finding of interest, one that bears directly on a result obtained by Miller and Liberman (1979) that we have not discussed so far. In their paper, Miller and Liberman not only assessed the influence of overall syllable duration on the location of the /b/-/w/ boundary but also the role of syllable structure. The comparison of interest with respect to the Pisoni, Carrell, and Gans study involved series that were matched for overall syllable duration but differed in syllable structure. One series ranged from /ba/ to /wa/ and the other from /bad/ to /wad/. Although syllable duration was held constant, the boundary was located at a significantly shorter transition duration for the /bad/-/wad/ series than for the /ba/-/wa/ series. Thus, syllable structure, as well as duration, was important. According to a rate normalization account, this is because the /bad/-/wad/ syllables with three phonetic segments reflected a faster speaking rate than the equally long /ba/-/wa/ syllables with two segments, and listeners took this into account, setting the phonetic boundary at a shorter transition duration. However, Pisoni, Carrell, and Gans found the same result for nonspeech sinewave analogs of the /bad/-/wad/ series. For two nonspeech series that were matched for overall duration, the boundary was located at a shorter onset value for /bad/-/wad/ analogs than for /ba/-/wa/ analogs.
On the basis of their findings, Pisoni, Carrell, and Gans argued that a specialized, speech-specific processing mechanism that normalizes for speaking rate is not required to explain the context-dependent nature of phonetic categorization for /b/ and /w/. Rather, these context effects arise through general auditory processes that influence the perceptual categorization of speech and nonspeech alike. Diehl and Walsh (1989), who have recently replicated the basic /ba/-/wa/ result with a new set of sinewave analogs, make the same general argument. Interestingly, Jusczyk and his colleagues (Jusczyk et al. 1983) have found that young infants, as well as adults, process nonspeech sinewave analogs in a context-dependent, relational manner. More specifically, following the design of Eimas and Miller (1980), they tested infant discrimination on pairs of stimuli drawn from an 80- and a 295-msec sinewave series (such as used by Pisoni, Carrell, and Gans). Jusczyk et al. found that, for the 80-msec series, the infants discriminated stimuli with transition durations of 15 and 35 msec but not stimuli with transition durations of 35 and 55 msec. Conversely, for the 295-msec series, stimuli with transition durations of 35 and 55 msec were discriminated but not stimuli with transition durations of 15 and 35 msec. These results closely parallel those found for speech discrimination by Eimas and Miller.
The nonspeech studies on adults and infants can be taken as evidence that general auditory processes, and not specialized speech-specific processes, may underlie the /b/-/w/ context effect. According to this view, although such context-dependent processing has the consequence of providing a means for listeners to perceive speech despite changes in the acoustic signal due to articulatory rate, the mechanisms responsible are not themselves specialized for this purpose. The same general argument would apply to the many other context effects that occur in speech. The interpretation of the nonspeech studies, however, is not entirely straightforward. A first problem is that similarities in the patterns of identification for speech and nonspeech stimuli do not necessarily mean that the same perceptual mechanism is invoked. Different underlying mechanisms (in this case, one for speech and the other for nonspeech) may have led to similar patterns of categorization for the speech stimuli and their nonspeech analogs (cf. Fowler 1990). A second problem is that, even if one mechanism were involved, strong parallels for speech and nonspeech could arise not because both types of signals undergo processing by the same general auditory system but because both undergo processing by mechanisms that are specialized for speech. It may be that the nonspeech stimuli are so close in critical respects to the speech stimuli that, even though they are not overtly heard as speech, they nonetheless engage the speech processor; the processor is fooled, as it were. This is a difficult problem to overcome because, to be a valid comparison in the first place, the nonspeech analogs must have critical features in common with the speech stimuli they are meant to mimic. Thus, the animal and nonspeech data do not provide unequivocal evidence in support of a general auditory account of the /b/-/w/ context effect. Also, there is another factor that is problematic for an auditory account, one having to do with the relation between speech production and speech perception. Earlier we commented on the close match between the consequences of a change in rate during production and the adjustment for those changes in perception. Specifically, we noted that a slowing of the articulatory rate shifts the transition-duration distributions for /b/ and /w/ toward longer values. This results in the optimal value separating the distributions shifting toward longer transition durations (Miller and Baer 1983). The basic effect reported by Miller and Liberman (1979) is an analogous perceptual boundary shift. But there are two additional parallels between production and perception. First, the shift in the distributions toward longer transitions in the Miller and Baer production study is reflected not only in a shift in the /b/-/w/ perceptual boundary but also in a shift in those syllables listeners judge to be good exemplars of a given category. As the syllables become longer, a longer transition is required (Miller, O'Rourke, and Volaitis, in progress; cf. Miller and Volaitis 1989). Second, in the Miller and Baer study, as the syllables became longer, the transition-duration distributions not only moved toward longer durations but became wider. Analogously, we have found that, as syllables become longer, listeners perceive a wider range of transition durations as relevant to the category (Miller, O'Rourke, and Volaitis, in progress).
Thus, the correspondence between how a change in rate alters the acoustic signal and how listeners map the acoustic signal onto phonetic categories is quite striking. It is important to note that the close parallels between production and perception are naturally accommodated within the motor theoretic view of Liberman and his colleagues. On this view, the perceptual mechanism responsible for rate normalization is based on principles of articulation reflected in the way rate influences phonetically relevant acoustic properties. As a consequence, the changes wrought by rate are mirrored in the way in which listeners map context-dependent acoustic information onto phonetic categories; hence the close correspondence between production and perception. In contrast, there is no natural way within an auditory theory to account for this close correspondence (cf. Liberman and Mattingly 1985). It could of course be that the match between production and perception is coincidental. Or it may be that the articulatory system itself operates so as to match its output to language-independent characteristics of the auditory system, that is, auditory processing drives articulation. Yet another possibility, recently proffered by Diehl and Walsh (1989) in the context of the /b/-/w/ example, is that languages have come to use those linguistic contrasts for which auditory and articulatory constraints are well matched. The problem is that there is little direct evidence for these alternatives.
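The phrase "optimal value separating the distributions" has a simple statistical reading: the transition duration at which the two categories' likelihoods cross. A toy computation, with invented Gaussian parameters standing in for the measured /b/ and /w/ transition-duration distributions, shows how a rightward shift and a widening of the distributions together move that optimal criterion toward longer durations:

```python
# Toy illustration of the production-perception parallel: the optimal
# criterion between two Gaussian transition-duration distributions.
# The means and standard deviations are invented, not Miller and Baer's
# measurements.
from scipy.optimize import brentq
from scipy.stats import norm

def optimal_boundary(mu_b, sd_b, mu_w, sd_w):
    """Transition duration (msec) where the /b/ and /w/ likelihoods cross."""
    diff = lambda x: norm.pdf(x, mu_b, sd_b) - norm.pdf(x, mu_w, sd_w)
    return brentq(diff, mu_b, mu_w)

# Fast speech: tight distributions at short durations.
print(optimal_boundary(16, 4, 48, 8))    # boundary near 27 msec
# Slow speech: distributions shift rightward and widen, and the optimal
# boundary moves to a longer transition duration.
print(optimal_boundary(30, 8, 90, 16))   # boundary near 52 msec
```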
Where does this leave us? We have firm evidence that the speech signal varies substantially with changes in rate of articulation and that listeners somehow make the appropriate adjustments necessary to perceive the intended phonetic structure. Furthermore, the infant data strongly suggest that, whatever the mechanism responsible for such processing, it is operative in early infancy. However, the evidence at hand does not provide a compelling case for choosing between an account of underlying mechanism based on specialized processing and an account based on the operation of the general auditory system. As our above example shows, the controversy over mechanism remains unresolved, and this is true not only for the /b/-/w/ context effect but for speech processing in general. In the next section, we offer some observations on factors that have hindered development of theoretical accounts of speech perception and speculate on the kinds of data that may help resolve the issue.

The Search for Mechanism

In attempting to understand perceptual phenomena at the level of mechanism, especially those phenomena that have species-specific, biological significance, investigators typically seek answers to three questions: (1) what are the procedures and mechanisms by which environmental signals are transformed into representations that underlie adaptive behavior, (2) to what extent are the mechanisms modular in nature, and (3) to what extent is perception mediated by mechanisms that are species specific? As we have noted, the last two questions have been of major theoretical concern in speech perception since the earliest findings (e.g., Liberman 1957). Nonetheless, the evidence summarized above, as well as findings concerned, for instance, with categorical perception and duplex perception, has been less than conclusive. The question is why. Our thesis is that the "failure" in theory construction in speech perception is primarily a consequence of the fact that the only detailed evidence we have for this enterprise is that obtained from psychophysical studies. Although such data are obviously necessary, they may not be sufficient in and of themselves to formulate a realistic theory or model that goes beyond a simple redescription of perceptual phenomena. Indeed, it can be argued that such data are by their very nature not sufficient to resolve these theoretical issues, even the broad issues of modularity and species specificity. This view is succinctly stated by Rose: "The black boxes can be modelled to generate predictions as well as post hoc accounts. However, the problem with this type of abstract function-box modelling is that, whilst it may pass a theory test, it is likely to fail a reality test; an infinite number of models is always possible for any outcome, and failures can always be 'adjusted' by modifying parameters without changing basic design elements" (Rose 1980, i). Of course, this contention, which we take as true, does not necessarily preclude theories of human speech perception that are truly explanatory. Many cognitive scientists, including ourselves, hold that such theories are possible provided that they are constrained and made realistic by knowledge of the neurophysiology that underlies the perceptual events in question.
Nevertheless, even this most basic of assumptions has become the target of disquieting discussions in the literature, which warn us that neurophysiological explanations of mental events will not be easily come by nor will they be the product of simple reductionism (e.g., Rose 1980). Also, as is true for models without neurophysiological underpinnings, the number of ways in which macroscopic events may be explained by microscopic mechanisms may be so large as to preclude ultimately finding the explanation we seek (Uttal 1990). There is the possibility, also noted by Uttal, that the whole, the macroscopic perceptual events, may be greater than what can be expected from the sum of the microscopic mechanisms. But, even if these arguments ultimately prove true, we have little recourse as scientists but to pursue what appears to be at this moment in history the better strategy, namely, to constrain theory by what is known of the neurophysiology of language and speech perception. This constraint is necessary, even though we are immediately confronted with the difficulty that theories of human perceptual abilities must draw on neurophysiological data bases severely restricted by ethical considerations. This is apparent in speech perception, in which the evidence from studies of neurologically impaired observers and even of electrically induced responses is insufficiently precise as yet to determine issues of modularity and species specificity, let alone specific mechanisms (cf. Miller and Jusczyk 1989). However, there are investigators (e.g., Suga 1984; Sussman 1989) who believe that the use of processing principles revealed by neurophysiological studies of complex, biologically important perceptual
domains in nonhuman organisms may provide insight into possible mechanisms of speech perception; that is, such processing systems may prove useful as analogs. We illustrate this view by briefly examining the relevance to theories of speech perception of two neurophysiological descriptions of perceptual processing by nonhuman organisms. The first concerns sound localization in the barn owl (Tyto alba) (e.g., Konishi et al. 1988), and the second, echolocation in the mustached bat (Pteronotus parnellii rubiginosus) (e.g., Suga 1984).3 There is evidence that localization along the azimuth in the barn owl (to which we restrict our discussion) is a function of the interaural time difference (ITD), the difference between the times at which a signal displaced to the owl's left or right arrives at each ear. However, Konishi and his colleagues (e.g., Konishi et al. 1988) have found that neurons in the central nucleus do not actually respond to the ITD per se but rather to the interaural phase difference. This creates a condition of phase ambiguity: at different frequencies, the same phase difference results in different ITDs, and for a particular frequency, neuronal units are sensitive to more than a single phase difference. This ambiguity (context dependency) is resolved by the existence of an ensemble of neurons sensitive to phase differences. These neurons are arranged in frequency-sensitive layers in such a way that across the entire ensemble of isofrequency units maximal firing rates occur for a single ITD. In effect, ITD is an emergent property, derived from units that are sensitive to the combined features of frequency and interaural phase difference. The output of the ensemble, an ITD, is registered by units of the external nucleus that form a space-specific map. Sussman (1989) has argued that the existence of a functionally similar ensemble of neuronal units in the human auditory system, sensitive to both the onset frequency of the second-formant transition and the second-formant vowel target frequency, could in principle result in an emergent property corresponding to place of articulation for stop consonants. This would provide an account of how listeners perceive the intended place value of the consonant despite the fact that the form of the relevant acoustic information varies as a function of the following vowel, a classic case of context-dependent perception. With respect to the example used in the present paper, it is possible that ensembles of neuronal units that are sensitive to combinations of spectral and temporal information distributed throughout the syllable (including information about formant transitions, steady-state segments, and overall syllable duration) might yield categorical representations for the /b/-/w/ distinction that accommodate variation in the rate of articulation. In a similar vein, Sussman (1986, 1988, 1989) has demonstrated how neurons, analogous to those that presumably regulate the echolocation of prey and obstacles in bats, can begin to accommodate the perception of vowels. Suga (1984, 1988) has demonstrated the existence of neurons that are sensitive to combinations of steady-state frequencies from the emitted ultrasonic signals and their returning echoes. Other units are sensitive to temporal differences between the frequency-modulated sweeps of the emitted signals and the corresponding information in the returning echoes.
The former provide information about target velocity and target characteristics, whereas the latter provide information about target range and quite possibly also about target characteristics. Suga (1984, 1988) has noted how analogous neuronal units might provide a basis for vocalic and consonantal identification with respect to place and voicing. Expanding on this theme, Sussman (1986, 1988, 1989) has shown how similarly functioning neuronal units can provide a means for the categorization of vowels. In his scheme, there exist, at a lower level of organization, neuronal units sensitive to information that represents the center frequency of the first and second formants, the first and third formants, and the second and third formants. At a higher, hierarchically organized level, units are assumed to exist that respond to the center frequency of each formant in combination with the geometric mean frequency of the three formants. In essence, this model instantiates an algorithm for the normalization of vowels (Miller, Engebretson, and Vemula 1980) and, in so doing, provides a neuronal representation for vowels despite considerable variation in formant frequencies. Again, it is possible to imagine that suitably arranged combinations of neurons could, in an analogous fashion, yield representations of /b/ and /w/ that are sensitive to variation in speaking rate.
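To make the normalization idea concrete, here is a minimal sketch in the spirit of the scheme just described, coding each formant against the geometric mean of all three. The formant values are rough illustrative figures, and the uniform scaling of the "child" vowel is an idealization of vocal tract differences, not real data.

```python
# Minimal sketch of formant-ratio vowel normalization: each formant is
# coded relative to the geometric mean of the three formants. Formant
# values are illustrative, and the uniform child/adult scaling is an
# idealization.
import math

def normalize(f1, f2, f3):
    gm = (f1 * f2 * f3) ** (1 / 3)           # geometric mean frequency
    return tuple(round(math.log(f / gm), 3)  # log formant-to-mean ratios
                 for f in (f1, f2, f3))

man = (730, 1090, 2440)               # an /a/-like vowel (Hz)
child = tuple(f * 1.35 for f in man)  # shorter vocal tract: all formants
                                      # scaled upward

print(normalize(*man))    # (-0.536, -0.135, 0.671)
print(normalize(*child))  # identical: the ratio code is scale-invariant
```

A single multiplicative change in vocal tract scale cancels out of the ratios, which is the sense in which units combining each formant with the ensemble mean could represent a vowel category across talkers.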
The work described above is a promising first demonstration of the utility of drawing on the neurophysiology of complex auditory processing in nonhuman organisms to constrain theories of speech perception. This is not really surprising, in that there is an argument to be made that nature has been frugal in the number of ways in which solutions to difficult problems have been achieved. Thus, knowledge of one domain may well illuminate our understanding of domains that on the surface appear quite removed. Such an approach, by attempting to model the neurophysiological bases of speech perception, may help resolve the long-standing debate on whether the mechanisms of speech perception in humans are rooted in general auditory principles or form part of a specialized system that has evolved to permit the perception of phonetic structure. Nevertheless, before we can accept this general approach to the problem of theory construction in speech perception, it remains to be determined whether we can obtain the necessary corroborative evidence showing that the neurophysiological principles of processing that operate in one or more domains of perception and in one or more species are in fact applicable to the perception of speech in human listeners. Improvements in noninvasive imaging techniques that yield information on the neural locus and time course of processing may provide the corroborative evidence that is needed. In addition, we must confront the problems that arise from any effort to offer a complete theory of speech perception, one that ties perception to brain and, in so doing, captures not only the means of analysis and integration of information but the systematicity of phonetics and ultimately the links between the perception and production of speech. A description of these processes at the level of brain undoubtedly involves millions of individual neuronal units, many, perhaps hundreds or even thousands, of neural networks, and the numerous interconnecting links between networks both within and between levels of analysis and synthesis. Given the vast numbers of neural structures involved, it seems to us that, at some point in theory development, we must move beyond anatomical and neurophysiological findings and generalizations and utilize formalisms to account for the organization and coherency that exists in perception (cf. Eimas and Galaburda 1989). In sum, the success of this approach remains to be determined. But, if it is at all informative, perhaps we will find a means to make progress on the study of mechanisms underlying speech perception and their origins in early infancy.

Acknowledgments

Preparation of this chapter was completed in October 1991 and was supported by NIDCD Grant DC-00130 (JLM), NIH BRSG RR-07143 (JLM), and NICHHD Grant HD-05331 (PDE).

Notes

1. As the transitions became longer, their slope became more shallow, and the onset of the syllable's amplitude became more gradual. Thus, the change in transition duration actually entailed a change in a set of properties contributing to the abruptness of the syllable's onset. For ease of explication, the change in the set of properties is referred to as a change in transition duration.

2. Shinn, Blumstein, and Jongman (1985) have reported that this syllable-duration effect can be reduced or even eliminated if /ba/-/wa/ series are created that more closely approach natural speech than did the rather stylized stimuli used by Miller and Liberman (1979).
However, recent evidence suggests that, under certain conditions, the Shinn, Blumstein, and Jongman stimuli, like the Miller and Liberman stimuli, are processed with respect to syllable duration. Specifically, we have found that these stimuli yield a syllable-duration effect when presented in a background of babble noise (Miller and Wayland, in press). Thus, the original finding that listeners treat transition duration in a context-dependent manner is not limited to the use of stylized stimuli.

3. Note that this is a very different use of nonhuman animals in theory construction from that discussed earlier, where animals are tested on their perception of human speech. In the present case, biologically significant perceptual systems of
animals are studied in their own right in order to gain insight into how perceptual systems (general or specific) might solve problems of context dependency.

References

Diehl, R. L. and Walsh, M. A. (1989). An auditory basis for the stimulus-length effect in the perception of stops and glides. Journal of the Acoustical Society of America, 85, 2154-2164.

Eimas, P. D. and Galaburda, A. M. (1989). Some agenda items for a neurobiology of cognition: An introduction. In P. D. Eimas and A. M. Galaburda (eds.), Special Issue: Neurobiology of Cognition, Cognition, 33, 1-23.

Eimas, P. D. and Miller, J. L. (1980). Contextual effects in infant speech perception. Science, 209, 1140-1141.

Eimas, P. D., Siqueland, E. R., Jusczyk, P., and Vigorito, J. (1971). Speech perception in infants. Science, 171, 303-306.

Fowler, C. A. (1990). Sound-producing sources as objects of perception: Rate normalization and nonspeech perception. Journal of the Acoustical Society of America, 88, 1236-1249.

Jusczyk, P. (1986). Speech perception. In K. R. Boff, L. Kaufman, and J. P. Thomas (eds.), Handbook of perception and human performance (pp. 27-1-27-57). New York: Wiley.

Jusczyk, P. W., Pisoni, D. B., Reed, M. A., Fernald, A., and Myers, M. (1983). Infants' discrimination of the duration of a rapid spectrum change in nonspeech signals. Science, 222, 175-177.

Konishi, M., Takahashi, T. T., Wagner, H., Sullivan, W. E., and Carr, C. E. (1988). Neurophysiological and anatomical substrates of sound localization in the owl. In G. M. Edelman, W. E. Gall, and W. M. Cowan (eds.), Auditory function: Neurobiological bases of hearing (pp. 721-745). New York: Wiley.

Kuhl, P. K. (1986). Theoretical contributions of tests on animals to the special-mechanisms debate in speech. Experimental Biology, 45, 233-265.

Liberman, A. M. (1957). Some results of research on speech perception. Journal of the Acoustical Society of America, 29, 117-123.

Liberman, A. M., Cooper, F. S., Shankweiler, D. P., and Studdert-Kennedy, M. (1967). Perception of the speech code. Psychological Review, 74, 431-461.

Liberman, A. M., Delattre, P. C., Gerstman, L. J., and Cooper, F. S. (1956). Tempo of frequency change as a cue for distinguishing classes of speech sounds. Journal of Experimental Psychology, 52, 127-137.

Liberman, A. M. and Mattingly, I. G. (1985). The motor theory of speech perception revised. Cognition, 21, 1-36.

Lisker, L. and Abramson, A. S. (1964). A cross-language study of voicing in initial stops: Acoustical measurements. Word, 20, 384-422.

Lisker, L. and Abramson, A. S. (1970). The voicing dimension: Some experiments in comparative phonetics. In Proceedings of the sixth international congress of phonetic sciences, Prague, 1967 (pp. 563-567). Prague: Academia.

Miller, J. D., Engebretson, A. M., and Vemula, N. R. (1980). Vowel normalization: Differences between vowels spoken by children, women, and men. Journal of the Acoustical Society of America, 68, S33.

Miller, J. L. (1980). Contextual effects in the discrimination of stop consonant and semivowel. Perception and Psychophysics, 28, 93-95.

Miller, J. L. (1981). Effects of speaking rate on segmental distinctions. In P. D. Eimas and J. L. Miller (eds.), Perspectives on the study of speech (pp. 39-74). Hillsdale, N.J.: Erlbaum.
Miller, J. L. and Baer, T. (1983). Some effects of speaking rate on the production of /b/ and /w/. Journal of the Acoustical Society of America, 73, 1751-1755.

Miller, J. L. and Eimas, P. D. (1983). Studies on the categorization of speech by infants. Cognition, 13, 135-165.

Miller, J. L. and Jusczyk, P. W. (1989). Seeking the neurobiological bases of speech perception. In P. D. Eimas and A. M. Galaburda (eds.), Special Issue: Neurobiology of Cognition, Cognition, 33, 111-137.

Miller, J. L. and Liberman, A. M. (1979). Some effects of later-occurring information on the perception of stop consonant and semivowel. Perception and Psychophysics, 25, 457-465.

Miller, J. L. and Volaitis, L. E. (1989). Effect of speaking rate on the perceptual structure of a phonetic category. Perception and Psychophysics, 46, 505-512.

Miller, J. L. and Wayland, S. C. (in press). Limits on the limitations of context-conditioned effects in the perception of [b] and [w]. Perception and Psychophysics.

Pisoni, D. B., Carrell, T. D., and Gans, S. J. (1983). Perception of the duration of rapid spectrum changes in speech and nonspeech signals. Perception and Psychophysics, 34, 314-322.

Rose, S. P. R. (1980). Can the neurosciences explain the mind? Trends in Neurosciences, 3(5), i-iv.

Shinn, P. C., Blumstein, S. E., and Jongman, A. (1985). Limitations of context-conditioned effects in the perception of [b] and [w]. Perception and Psychophysics, 38, 397-407.

Stevens, E. B., Kuhl, P. K., and Padden, D. M. (1988). Macaques show context effects in speech perception. Journal of the Acoustical Society of America, 84, S77.

Stevens, K. N. and Blumstein, S. E. (1981). The search for invariant acoustic correlates of phonetic features. In P. D. Eimas and J. L. Miller (eds.), Perspectives on the study of speech (pp. 1-38). Hillsdale, N.J.: Erlbaum.

Suga, N. (1984). The extent to which biosonar information is represented in the bat auditory cortex. In G. M. Edelman, W. E. Gall, and W. M. Cowan (eds.), Dynamic aspects of neocortical function (pp. 315-373). New York: Wiley.

Suga, N. (1988). Auditory neuroethology and speech processing: Complex-sound processing by combination-sensitive neurons. In G. M. Edelman, W. E. Gall, and W. M. Cowan (eds.), Auditory function: Neurobiological bases of hearing (pp. 679-720). New York: Wiley.

Sussman, H. M. (1986). A neuronal model of vowel normalization and representation. Brain and Language, 28, 12-23.

Sussman, H. M. (1988). The neurogenesis of phonology. In H. A. Whitaker (ed.), Phonological processes and brain mechanisms (pp. 1-23). New York: Springer-Verlag.

Sussman, H. M. (1989). Neural coding of relational invariance in speech: Human language analogs to the barn owl. Psychological Review, 96, 631-642.

Uttal, W. R. (1990). On some two-way barriers between models and mechanisms. Perception and Psychophysics, 48, 188-203.
Chapter 3 The Importance of Childhood to Language Acquisition: Evidence from American Sign Language
Rachel I. Mayberry

The idea that childhood is a special time for learning is both very old and very common, first appearing in the literature over a hundred years ago (Columbo 1982). The idea that language acquisition is especially tied to childhood has a more recent history. In the 1950s, the pioneering neurosurgeon Wilder Penfield declared that "the brain of the child is plastic. The brain of the adult, however effective it may be in other directions, is usually inferior to that of the child as far as language is concerned" (Penfield 1959, 240). So impressed was Penfield with the language learning skills of children that he believed "parents can bring up their children to be fluently bilingual or trilingual if they make the right effort at the right time" (Penfield 1963, 128). The idea that language must be learned at precisely the right moments in development is known as the critical period for language acquisition. Eric Lenneberg was the first investigator to hypothesize in detail that language acquisition is a developmentally time-locked phenomenon. He speculated that "the primary acquisition of language is predicated upon a certain developmental stage which is quickly outgrown at the age of puberty" (Lenneberg 1967, 142). Lenneberg argued that language was easily acquired in childhood because the two hemispheres of the brain were not yet specialized for cognitive and linguistic function. He thought that language was difficult to learn after puberty due to the onset of cerebral lateralization. Lenneberg based his theory on several sets of clinical observations that underscored the importance of chronological age to language acquisition. First, he noted that children appeared to recover from aphasia due to brain damage but that adults did not. Second, he noted that the language acquisition of mentally retarded children seemed to stop abruptly at puberty, even if it was incomplete at the time. His third observation concerned the speech of deaf children. He observed that the effects of deafness on speech intelligibility were highly age related: the older the child upon becoming deaf, the better the child's eventual speech skill. Subsequent research has failed to replicate all of Lenneberg's original claims (Columbo 1982). For example, cerebral lateralization is present at twenty-nine weeks gestation (Wada and Davis 1977) and functional at birth (Molfese, Freeman, and Palermo 1975). Thus, if childhood is critical to the language acquisition process, an absence of cerebral lateralization cannot be the cause. In addition, the recovery from aphasia shown by children, but not adults, is more apparent than real. Cooper and Flowers (1987), among others, have found that children who exhibit aphasia due to brain damage do not fully recover language skill. Rather, the language deficits caused by childhood brain damage continue into adulthood. Levine et al. (1987) explain the illusion: the younger the child at the onset of brain damage and aphasia, the greater the appearance of recovery. This is because the language ability of young children is not easily compared to that of adults. We do not expect young children to perform with the same degree of linguistic sophistication as adults and, therefore, mistakenly conclude that children, but not adults, recover normal language function subsequent to brain damage. Is the implication of these findings that Lenneberg was wrong and childhood is not critical to language acquisition?
Is childhood simply coincidental to language acquisition? Another way to test the hypothesis is to look for circumstances in which the age of onset of language acquisition varies naturally. The task is then to determine whether variation in the age of onset of language acquisition predicts eventual language skill in later adulthood. The most commonly studied circumstance is second-language learning. Another, and rarer, circumstance is social isolation in childhood. A third, and heretofore overlooked, circumstance is the acquisition of sign language by individuals who are born deaf. In this chapter, I focus on a series of studies designed to illuminate the possible relationships between the age at which deaf individuals first acquire sign language and their ability to process it in later adulthood. The results show the effects of age of acquisition on sign language processing to be (1) robust and complex and (2) more pronounced for first- as compared to second-language acquisition. The results of these sign language studies are best understood in the context of the two better-known circumstances of language acquisition begun at a late age, namely, second-language learning and childhood social isolation, which I discuss first. Together the various findings suggest that language acquisition in early
life has very specific effects on language processing that are readily apparent in later adulthood, as I explain last.

Spoken Language Development

Second-Language Acquisition

The most frequent situation in which the age of onset of language acquisition varies naturally is second-language learning. Research examining the relationship between the age of onset and the outcome of second-language learning is extensive and somewhat contradictory. Reviewing this complex literature, Krashen, Long, and Scarcella (1979) distill two generalizations to account for the data. First, given equal tutelage and exposure to a second language, adults are faster second-language learners than children over the short term. In the long run, however, children are more efficient language learners than adults in that they are more likely to achieve nativelike proficiency in second-language comprehension and production. Four studies are of special interest to us here because each documents a predictive relationship between age of onset of language acquisition and its long-range outcome. First, Oyama (1976, 1978) examined the English production and comprehension skills of native Italian speakers living in New York City. She found a negative correlation between the age at which the immigrants arrived in the United States and their proficiency in English comprehension and production. Second, Coppieters (1987) studied the ability of adult second-language learners of French to select grammatical structures and paraphrase sentences. The unique feature of this study was that the second-language speakers were highly proficient, or "near native." All were authors or professors who wrote regularly in their second language. Despite this high degree of second-language proficiency, however, the near natives performed two to three standard deviations below the mean of the natives. Third, Johnson and Newport (1989) found that the age at which Chinese and Korean speakers first learned to speak English predicted their ability to judge the grammaticality of English sentences, such that the earlier learners showed better performance than the later ones. Finally, Flege (1987; Flege and Fletcher 1992) found that the ability to speak a language without an accent, or "like a native," declines precipitously after early childhood and thereafter declines linearly with age of acquisition until early adolescence. The above studies show that age of language acquisition predicts, to a substantial degree, the proficiency with which a second language can eventually be understood and spoken. However, second-language learning may not be the best test of the critical period hypothesis of language acquisition. The age-of-acquisition effects associated with second-language learning may reflect the amount, or degree, of a first, or native, language an individual has acquired prior to learning the second one rather than age-of-acquisition effects per se. For example, the acquisition of a first language may "set" or "tune" language-processing skills such that learning to perceive and produce a subsequent and second language becomes more difficult the more first language that has been acquired. This interpretation is suggested by research examining the speech perception and production skills of infants and adults. For example, Werker (1989) finds that infants quickly lose the ability to discriminate speech sounds from foreign languages.
DeBoysson-Bardies and Vihman (1991) find that the babbling of infants favors sounds from their mother tongue. Cutler et al. (1989; Cutler, this volume) further find that even speakers who are considered to be equally proficient in two languages show perceptual biases favoring one language over the other. The possible confound of knowing two languages is eliminated in the rare instance of social isolation in childhood.

Childhood Social Isolation

Cases of social isolation during childhood are germane to the critical period hypothesis because social isolation entails linguistic isolation too. However, the utility of such case studies is often limited for several reasons. First, the language skills of these unfortunate children are often not well described (Singh and Zingg 1966). In addition, social isolation often includes many other deprivations, such as malnutrition, physical restraint, and/or sensory deprivation, which greatly
confound the effects of acquiring language at a late age (Skuse 1988). Nevertheless, some cases of childhood social isolation have been well documented and provide some insights. For example, Koluchova (1972) studied twins whose stepmother locked them in a basement from the ages of 2 to 7 years. During the four years after their release, yearly IQ testing showed their verbal performance to improve from subnormal to normal levels. This case suggests that, if acquisition of a first language is delayed, the outcome can be positive if the acquisition begins in childhood. It is important to note that the twins were not completely bereft of language input or human contact during their isolation. Because they were 2 years old at the start of their isolation, they probably had begun to acquire language. And since they were in each other's company throughout their isolation, they may have communicated with each other. These factors may have facilitated their later success at language acquisition. The most dramatic case of childhood social isolation is the well-known case of Genie. Genie suffered such extreme abuse that she did not begin to acquire language until the age of 13. As Curtiss (1977) describes, Genie managed to acquire a good deal of English, but her acquisition deviated from that of normal children in several ways. Her acquisition was quite slow, her vocabulary learning outstripped her syntactic learning, her comprehension outstripped her production, and she had pervasive problems with the English auxiliary system. Genie's case shows that, if acquisition of a native language is postponed until after childhood, its course is atypical. Her case further shows that any age limitations on first language acquisition are not absolute (Fromkin et al. 1974). Rather, a significant amount of first language can be acquired after childhood. Finally, Mason (1942) and Davis (1947) reported a case of social isolation that is especially relevant to the sign language studies I discuss below. Isabelle, who had normal hearing, and her mother, who was deaf, were shut away together for the first six years of Isabelle's life. When they escaped, Isabelle was observed to gesture but not to speak. Her spoken language was described as being completely normal only two years later. At first glance, Isabelle's case appears to demonstrate the same principle as the Koluchova (1972) twins, namely, that the outcome of language acquisition begun after early childhood can be normal if the acquisition begins during childhood and if there is human contact in early childhood. There is a potentially important difference between the two cases, however. Isabelle and her mother probably communicated with one another in either sign language or an elaborated gesture system that approximated sign language.1 If so, this case suggests yet another principle. Isabelle's late, but speedy, acquisition of spoken language may have been due to her prior acquisition of a gestured language. In other words, any deleterious effects of acquiring language at a late age may be mitigated by having previously acquired a first language. Indeed, the results of the studies that I describe below show that this is an important principle of language acquisition.

Sign Language Development

The third circumstance in which the onset of language acquisition is often delayed until after early childhood is the acquisition of sign language by individuals who are born deaf.
This situation may have been overlooked by researchers due to confusion over the nature of sign language. Only recently have linguists viewed sign languages as natural languages (Klima and Bellugi 1979; Liddell 1980; Perlmutter 1991; Wilbur 1987). Historically, sign languages were characterized as being either elaborate pantomime or, alternatively, as gestured ciphers of spoken language, a form of writing speech in the air. Sign languages are neither. Sign languages are natural languages characterized by multiple levels of linguistic organization, as are spoken languages; that is, sign languages have lexicons, morphological systems, phonological systems, syntax, and semantics.
Historically, a majority of American deaf signers first acquired sign language in circumstances that were not analogous to the acquisition of a spoken language by children with normal hearing. Many deaf signers first learned to sign at older ages in school dormitories and on playgrounds from deaf friends instead of in the nursery on the laps of their parents. This unusual situation is the product of two factors. First, 90% or more of congenitally deaf signers are born into normally hearing families in which no one knows or uses sign language (Rawlings and Jensema 1977; Schein and Delk 1974). Second, the majority of schools for deaf children actively banned gesture and sign language until recently. Oral educational policy was widespread in the United States until the mid-1960s. One aim of early oralism was to isolate deaf children from sign language (Lane 1984). Deaf children, their parents, and teachers were admonished never to gesture or sign out of fear that, if the children gestured or signed, they would never learn to speak. In fact, many of the subjects in our studies report vivid childhood memories of sundry punishments for communicating with their hands in grade school, such as having their hands smacked with rulers, having buckets of water thrown in their faces, having paper bags put over their heads, or being sent to the coat closet.2 Because sign language was driven underground, that is, banned from any place in which hearing adults were present, deaf children could only learn it from each other (Padden and Humphries 1988). One consequence of this long-standing educational campaign against sign language is a significant heterogeneity in the age at which the population of older deaf signers was first able to acquire it. The primary goal of the studies I discuss here was to determine if there are any systematic relationships between the age at which sign language is first acquired by deaf individuals and their ability to process sign language in later adulthood.3 Although the question is simple, the phenomenon is not, as will shortly become clear.

Studies of Sign Language Processing

Effects of Experience

Our first study was designed simply to determine whether prior experience with sign language affects sign language processing and, if so, which processing tasks are affected (Mayberry and Fischer 1989). Fifty-five deaf college students between the ages of 18 and 35 participated. All the subjects were born with severe to profound hearing impairments, and the age at which they first acquired some form of sign ranged from birth (native signers who first learned to sign from deaf parents beginning in infancy) to 18 years (nonnative signers who first learned to sign from deaf friends). Note that age of acquisition was confounded with length of sign experience in this study. Thus, the subjects who learned to sign at the oldest ages (e.g., 18 years) had also been using it for the least amount of time (about two years) and vice versa.

Processing Tasks

The subjects performed two processing tasks, immediate recall and shadowing. For the recall task, the subjects watched short videotaped sentences given in American Sign Language (ASL) and repeated the sentences immediately after viewing them. For the shadowing task, they watched short videotaped ASL sentences and repeated them while simultaneously watching the stimuli. The two tasks differ in several respects.
The most notable difference is that the shadowing task divides attention so that the subject must both comprehend and produce the linguistic stimulus at the same time. The recall task does not divide attention, but it places greater demands on short-term memory because the subject cannot respond until the entire stimulus is finished and thus must remember the entire sentence, not just a word or phrase.

Performance Accuracy

Performance on both tasks declined linearly with decreasing experience, as figure 3.1 shows. The recall task was more difficult than the shadowing task for all subjects regardless of how much experience they had had, as figure 3.1 also shows. Thus, when subjects were required to remember the stimulus sentences in full, rather than parts of sentences, their performance declined. This finding suggests that the processing effects associated with linguistic experience (and age of acquisition, as we shall later see) are probably due to difficulties associated with language decoding and memory rather than with difficulties in language encoding.
Figure 3.1 Experiment 1: the performance accuracy of subjects grouped by age of acquisition (and years of experience) for the shadowing and recall tasks.

Processing Errors

Figure 3.2 shows that linguistic experience with sign language also had significant effects on the kinds of lexical mistakes the subjects made. The proportion of lexical substitutions that were related to the meaning of the stimulus sentences, for both recall and shadowing, increased with increasing experience. Conversely, the proportion of lexical mistakes that were related to the surface phonological structure of the stimulus signs, independent of meaning, decreased as experience increased. In other words, the longer the subject had used sign, the more likely he or she was to make a lexical substitution related to sentence meaning and, at the same time, the less likely he or she was to make a lexical substitution related solely to sign form, and vice versa. Note that these error proclivities are proportions of total lexical substitutions and not sums. As figure 3.1 shows, the overall error rates (i.e., deletions and substitutions) increased with declining experience. Thus, the native learners made significantly fewer errors than less experienced learners, and the linguistic character of their few lexical substitutions was radically different from that of the numerous lexical substitutions made by the less experienced signers.

Figure 3.2 Experiment 1: mean proportion of lexical substitutions related to phonological shape or lexical meaning (semantic) produced by subjects while recalling sentences as a function of age of sign language acquisition (and years of experience). (Reprinted with permission from R. Mayberry and S. Fischer, Looking through phonological shape to lexical meaning: The bottleneck of non-native sign language processing, Memory and Cognition, 17, 1989, 740-754, © Psychonomic Society Inc.)

What do these contrastive types of lexical errors look like in sign language? Semantic lexical substitutions were shadowing or recall mistakes that were clearly related to the meaning of the target lexical item and stimulus sentence but not to its phonological form, as figure 3.3a shows. For example, one stimulus sentence, as translated into English, was "I looked everywhere for my younger brother." A few subjects changed the target sign meaning younger to one translated as older, producing a response which translates as "I looked everywhere for my older brother." The salient characteristic of the semantic lexical change is that it preserves the domain of lexical meaning and does not alter the syntactic structure of the stimulus sentence, although it does alter the intended meaning of the target sentence. The way in which these substitutions maintain the general meaning of the stimulus lexical item and sentence is clearly illustrated by the following example. One stimulus sentence translated into English was "I thought I heard something." Note that the statement is an unusual one for a deaf person to make. A few native-learning subjects changed the stimulus sign translated as heard to the sign translated as saw, producing the response translated as "I thought I saw something" (see figure 3.3b). The lexical substitution demonstrates that the subject understood the stimulus sentence perfectly, even though his or her response was not a verbatim rendition of the stimulus.

Figure 3.3 Examples of semantic-lexical substitutions. The left panel shows the stimulus lexical item, and the right panel shows the lexical substitution produced by the subject. Note that the stimulus and substitution are unrelated in articulatory formation. (Illustration by Betty Raskin, © R. Mayberry)

Phonological lexical substitutions showed no relationship to the stimulus lexical item and sentence at the semantic or syntactic levels. The salient characteristic of phonological lexical changes is that they show a clear phonological relationship to the target sign independent of lexical meaning and often independent of syntactic structure. For example, one stimulus sentence, translated into English, was "Rabbits waddle on short legs." One subject produced the sign translated as funny instead of the sign rabbit, producing a response which translates as "Funny waddle on short legs." At first glance, the lexical alteration seems either a comment on the stimulus sentence or a random and nonsensical error unrelated to the stimulus. Closer inspection reveals that the lexical substitution is indeed yoked to the linguistic structure of the stimulus, but at the level of surface phonological form rather than word and sentence meaning, as figure 3.4a shows.
In the above example, the subject's lexical substitution is nearly identical to the stimulus lexical item. The two signs differ in one articulatory parameter, place of articulation. The mistake is thus a visual rhyme for the target, which is a property of phonological lexical substitutions. This rhyming property is clearly illustrated by the following example. One stimulus sentence was translated into English as "I ate too much turkey and potato at Thanksgiving dinner." One subject changed the target translated as and to the sign meaning sleep, producing the response "I ate too much turkey sleep potato." The lexical substitution bears no semantic relationship to either the stimulus sentence or target lexical item. However, there is a striking phonological similarity between the two signs. The lexical substitution varies from the target in only one articulatory parameter, place of articulation (see figure 3.4b).

Figure 3.4 Examples of phonological-lexical substitutions. The left panel shows the stimulus lexical item, and the right panel shows the lexical substitution produced by the subject. Note that the stimulus and error are similar in articulatory formation. (Illustration by Betty Raskin, © R. Mayberry)
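The notion of a sign differing from its target in exactly one articulatory parameter, a visual rhyme, can be made concrete with a toy representation. The parameter tuples below are simplified stand-ins invented for illustration, not a real ASL phonological transcription:

```python
# Toy illustration of the "visual rhyme" idea: describe each sign by a
# few major articulatory parameters and count mismatches. The signs and
# parameter values are hypothetical stand-ins, not real ASL phonology.

# (handshape, place_of_articulation, movement)
SIGNS = {
    "RABBIT": ("bent-2", "head", "wiggle"),
    "FUNNY":  ("bent-2", "nose", "wiggle"),
    "OLDER":  ("fist",   "chin", "twist"),
}

def parameter_distance(sign_a, sign_b):
    """Number of articulatory parameters on which two signs differ."""
    return sum(a != b for a, b in zip(SIGNS[sign_a], SIGNS[sign_b]))

# In this toy lexicon, FUNNY differs from RABBIT only in place of
# articulation: a one-parameter neighbor, the signed analog of a rhyme.
print(parameter_distance("RABBIT", "FUNNY"))  # 1
print(parameter_distance("RABBIT", "OLDER"))  # 3
```

On such a representation, the phonological substitutions in figure 3.4 are nearest neighbors of their targets in form, whereas semantic substitutions like those in figure 3.3 can be arbitrarily distant in form.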
These phonological lexical substitutions, which occur during on- and offline sign language processing, are neither neologistic gibberish nor random hand waving. Of course, random errors do occur in sign-processing tasks under some circumstances. For example, in an unpublished study, we asked subjects who had just completed a ten-week sign language course to perform a shadowing task in sign language. They were unsuccessful and produced gibberish; their
performance was best characterized as random hand waving. By contrast, the phonological lexical substitutions produced by more experienced nonnative signers are clearly rooted in, and in consonance with, ASL linguistic structure at the phonological level. Equally important, these substitutions are real words from the ASL lexicon, which is also a property of spontaneous speech errors (Dell 1986). Indeed, the phonological lexical substitution reflects considerable linguistic sophistication. What the substitution does not reflect, however, is sensitivity to the meaning of the target lexical item and the semantic context of the stimulus sentence. The lexical substitution patterns associated with sign language experience (see figure 3.2) suggest that comprehension increases as subjects accrue linguistic experience. Our next study was designed to determine if this is the case.

Comprehension and Lexical Substitutions

An alternative explanation for the effects uncovered in the previous study is that the nonnative subjects were simply unfamiliar with ASL and were instead familiar with some kind of Englishlike signing. Although ASL is the natural language of the Deaf Community, its use in classrooms is rare. Rather, some version of Pidgin Sign English (PSE) is widely used in educational settings for deaf children.4 We thus asked sixteen congenitally deaf signers, eight native learners and eight nonnative learners (whose age of acquisition ranged from 6 to 16 years), to shadow narratives given in PSE and ASL. We also asked the subjects comprehension questions after they completed the narrative-shadowing task (Mayberry and Fischer 1989).

Performance Accuracy

The native learners outperformed the nonnative learners on the narrative-shadowing tasks in both PSE and ASL, as figure 3.5 shows. Note that both groups shadowed the PSE narratives with greater accuracy than the ASL narratives. This finding may be due to the linguistic simplicity of PSE. PSE, unlike ASL, has sparse derivational and inflectional morphology and, thus, may be easier to shadow because there is less linguistic structure to shadow. More importantly, the finding demonstrates that the processing effects uncovered in the previous study were not an artifact produced by some subjects being more familiar with PSE than ASL; that is, the processing effects associated with varying degrees of sign language experience are not an artifact of sign dialect but rather characterize sign language processing generally.

Figure 3.5 Experiment 2: mean proportion of sentences (in ASL and PSE narratives) shadowed without error for subjects grouped by age of acquisition (and years of experience).

Lexical Processing Substitutions

The same lexical substitution phenomena arose in the narrative-shadowing tasks of the
second study as in the sentence-shadowing and sentence-recall tasks of the first study. Native learners made predominantly semantic lexical changes, but nonnative learners made predominantly phonological lexical substitutions, as figure 3.6 shows. Moreover, the linguistic nature of the signers' lexical substitutions predicted narrative comprehension. Semantic lexical substitutions were highly and positively correlated with comprehension-question performance. Phonological lexical substitutions were highly and negatively correlated with comprehension-question performance (Mayberry and Fischer 1989).

Figure 3.6 Experiment 2: mean proportion of lexical substitutions related to phonological shape or lexical and sentential meaning (semantic) produced by subjects while shadowing narratives, as a function of age of acquisition (and years of experience). (Reprinted with permission from R. Mayberry and S. Fischer, Looking through phonological shape to lexical meaning: The bottleneck of non-native sign language processing, Memory and Cognition, 17, 1989, 740-754, © Psychonomic Society Inc.)

These findings supported our initial interpretation of the lexical substitution phenomenon. The effects of linguistic experience on sign language processing are not primarily due to difficulties with the encoding of sign language but instead reflect difficulties in the decoding processes of language, processes that undergird comprehension and memory. Is this linguistic-substitution phenomenon unique to sign language processing?

Linguistic Types of Substitutions in Spoken Language Processing

Word Recognition and Association

In several studies of spoken and written language processing, young children and less experienced learners have been observed to show a lexical response bias that favors phonological relationships. Older children and more experienced learners have been observed to show lexical response biases that favor semantic relationships. In word recognition studies, for example, when young children mistakenly recognize what they have read or heard, they tend to choose foils that are phonologically related to the stimulus words. By contrast, older children tend
to choose foils that are semantically related to the stimuli. These studies have been conducted in spoken and written English and written Japanese (Bach and Underwood 1970; Felzen and Anisfeld 1970; Toyota 1983). Similar response proclivities characterize children's performance on word-association tasks: younger children typically choose phonologically related words, whereas older ones tend to choose semantically related ones (Niccols 1987).

Reading Aloud

A linguistic-error shift entailing the same developmental relationship has also been reported for the acquisition of reading skill in both first and second languages. Children who are beginning to read their native language tend to make oral-reading errors that are phonologically tied to the words they have mistakenly identified rather than to word or sentence meaning. More experienced readers tend to make errors in lexical identification that are semantically tied to word and sentence context (Biemiller 1970). The phenomenon appears to reflect an initial stage in the development of language decoding skill and is not specifically age related in the following sense. When adolescents begin to read their second language, they also initially produce, when reading aloud, mistaken lexical identifications that show phonological, but not semantic, relationships to the written words they are learning to decode in print (Cziko 1980).

Spontaneous Speech

Vihman (1981) observed that a significant proportion of the spontaneous speech errors of young children who were acquiring Estonian, English, French, German, and Spanish were related to the phonological shapes of words and not the meanings. She noted that these errors showed an insensitivity to semantic or syntactic context, such as calling a zucchini a bikini (Stemberger 1989 also reports these kinds of substitutions). I too have observed numerous examples of this type of lexical error in the spontaneous speech of preschool English speakers, such as producing duck for Doug, babysitter for baby sister, car for cart, bottles for bubbles, and minerals for millions. The spontaneous speech errors of young children, like the sign-processing errors of nonnative signers (see figure 3.4), capture our attention because they are incongruous. They are real words that sound like or look very much like the intended word. Although their phonological shapes are highly similar to the intended words, their meanings differ radically.5 This kind of childhood spontaneous-speech error has even captured the attention of the cartoonist Bil Keane (see figure 3.7).

Source of Phonological Lexical Errors

The contrastive types of lexical substitutions produced by the native and nonnative signers in our studies are thus not unique. Rather, there is cross-linguistic, cross-task, and developmental evidence that a shift in the linguistic nature of lexical substitutions corresponds to language development through at least two stages: novice language processing and expert language processing. How might language processing differ at these two stages?
Figure 3.7
Bil Keane cartoon illustrating the phonologically based lexical substitutions produced by young children during spontaneous speech. (Reprinted with special permission of King Features Syndicate.)

Source of Phonological Lexical Errors
The contrastive types of lexical substitutions produced by the native and nonnative signers in our studies are thus not unique. Rather, there is cross-linguistic, cross-task, and developmental evidence that a shift in the linguistic nature of lexical substitutions corresponds to language development through at least two stages: novice language processing and expert language processing. How might language processing differ at these two stages?
Perhaps in the beginning stage of language processing, the learner primarily focuses his or her attention on the surface phonological properties of language. Attentional focus on the phonological structure of language may be a prerequisite for language acquisition. As language is acquired, and as the phonological structure of the language, as well as its morphological, syntactic, and semantic mappings, becomes increasingly familiar and well organized, the language learner's attentional focus may shift from the surface phonological properties of language to deeper levels of structure, semantics in particular. This shift in attentional focus from the surface phonological properties of language to its underlying and deeper meaning may be due to automatization of basic decoding processes, such as phonemic perception and lexical identification.

Thus, the nonnative signer may produce phonological lexical substitutions in the course of language processing because he or she cannot easily (i.e., without cognitive effort) identify signs and retrieve lexical meaning. This would mean that the nonnative signer must pay more attention to the surface phonological structure of signs than the native signer does. Focusing attention on the surface phonological structure of sign language would leave less attention available for other language processing tasks, such as remembering the meaning of signs already identified and integrating meaning across morphological and syntactic structures. With less meaning stored in working memory, the nonnative signer may find it harder to guess what he or she missed, that is, to fill the lexical gaps. The presence of phonological lexical substitutions during language processing suggests that the nonnative learner has relatively more difficulty with the bottom-up portions of language processing than does the native learner. The same may be true for the young child acquiring language, in contrast to the older child or adult.

This interpretation fits the observations and introspections of nonnative speakers and signers. For example, students pursuing postgraduate degrees in a second language often report that they can understand the lecture if their task is simply to listen, but not if they must divide their already burdened attention by listening and simultaneously taking notes. A deaf subject in one of our studies framed the problem thus: "I feel like I get it when I see it, but it disappears quickly and I don't seem able to remember it." Similar sentiments are often expressed by second-language speakers, who feel that they can "almost, but not quite, get it." By it, these nonnative signers and speakers are referring to the meaning of the utterances they have just seen or heard.

Our studies demonstrate that linguistic experience with sign language, or a lack of it, has pronounced effects on sign language-processing skill, effects that are not unique but that characterize language processing generally. The question remains whether the effects associated with linguistic experience disappear over time: will the individual who first began to learn sign language after early childhood eventually develop processing skills similar to those of the native learner, given sufficient time and practice?

Effects of Age of Acquisition
The purpose of our third study was to determine if age of acquisition has long-lasting effects on sign language processing when the amount of experience is both lengthy and controlled (Mayberry and Eichen 1991).
The study was further designed to determine if the effects were due to preexisting individual differences in short-term memory capacity that were independent of age of acquisition. Forty-nine congenitally deaf signers participated. They began to acquire sign at ages ranging from infancy (native signers) to childhood (5 to 8 years) and early adolescence (9 to 13 years). The subjects performed two tasks: recall of long and complex ASL sentences (ten to twelve words in length) and recall of lists of signed digits (digit span). The subjects' previous experience with sign language ranged from 21 to 60 years, with a mean of 42 years.

Linguistic Characteristics of Recall Responses
Age of acquisition showed significant effects on several aspects of the subjects' sign language performance. First, the tendency to give grammatically acceptable responses in ASL declined as
age of acquisition increased, as figure 3.8 shows. Second, the subjects' tendency to paraphrase the intended meaning of the stimulus sentences followed the same pattern: as age of acquisition increased, the similarity in sentence meaning between the subjects' responses and the stimuli decreased. These findings demonstrate that age of acquisition is a predictive factor in the outcome of sign language acquisition, just as it is for spoken language. Our findings have been corroborated by the work of Newport (1984, 1990), who found age of acquisition to predict the accuracy with which deaf signers can comprehend and produce ASL morphology. In our work, we have further found that deaf signers who first acquired ASL in early childhood tend to alter bound morphology when recalling complex ASL sentences, but late learners tend to strip it (Mayberry and Eichen 1991).
Figure 3.8
Experiment 3: mean proportion of sentence-recall responses grammatically acceptable for subjects grouped by age of acquisition (with years of experience controlled).

Lexical Substitutions during Language Processing
As our previous work predicted, performance on the sentence-recall task was associated with differential lexical substitution patterns, as seen in figure 3.9. The older the age of acquisition, the greater the proportion of lexical substitutions that were related solely to the surface phonological structure of the stimulus signs. Conversely, the younger the age of acquisition, the greater the proportion of lexical errors that were related to the meaning of the stimulus signs and sentences independent of phonological form. This finding suggests that the efficiency with which language can be processed is established at a young age and is difficult to achieve when language is acquired after early childhood.

Short-Term Memory Capacity, Sign Production, and Cognitive Skills
Age of acquisition did not affect all aspects of the subjects' recall performance. First, it showed no effects on short-term recall of signed digits: the sign digit span of the groups was similar. This means that the effects of age of acquisition on language processing are not due to underlying and preexisting individual differences in working memory capacity.
Figure 3.9
Experiment 3: mean proportion of lexical substitutions made in sentence recall related solely to phonological shape or to lexical and sentential meaning (semantic) as a function of age of acquisition (with years of experience controlled). (Reprinted with permission from R. Mayberry and E. Eichen, The long-lasting advantage of learning sign language in childhood: Another look at the critical period for language acquisition, Journal of Memory and Language, 30, 1991, 486-512, © Academic Press.)

Second, age of acquisition showed no effects on the length of the subjects' responses as measured by sign (word) length. The late learners were just as verbose as the early learners. In fact, all the subjects tended to give responses that were similar in length to the stimulus sentences. Thus, the effects of age of acquisition are not due to parsing problems. Third, age of acquisition showed no effects on the rate at which the subjects articulated signs. The late learners produced signs just as quickly or slowly as the early learners. This finding means that the effects of age of acquisition on sign language processing are not due to underlying problems in the fine-motor production and coordination required for fluent signing. Finally, these age of acquisition effects are probably not due to basic differences in cognitive skill among the groups. On a variety of nonverbal intelligence tests, the performance of the congenitally deaf population shows a normal range and distribution in comparison to the normally hearing population (Mayberry 1992).

The Magnitude of the Effects
An intriguing feature of the age of acquisition effects was that their magnitude appeared to be greater than that reported for spoken language. For example, Oyama (1978) reported a correlation of -.40 between age of acquisition and Italian immigrants' recall of English sentences. In our studies, which used an identical task, age of acquisition correlated -.62 with recall of ASL sentences. Coppieters's (1987) near-native French speakers performed three standard deviations below the native mean. In our studies, the childhood learners of ASL performed four standard deviations below the native
mean, and the adolescent learners performed five to six standard deviations below the native mean on the sentence paraphrase measure. This difference in magnitude suggests either that sign language is especially difficult to acquire (perhaps because it employs the visual and manual sensory-motor channel rather than the auditory and oral one) or, alternatively, that the unusual circumstances in which sign language is acquired impede the full acquisition of language in early childhood. Our fourth study was designed to test the latter explanation.

Late Acquisition of Sign: A First or Second Language?
Our observations of the subjects suggested that, among the individuals who first learned to sign after early childhood, those with significant residual hearing and speech skill performed the experimental tasks more easily than those without such skills. The reason might relate to whether or not the deaf signers were able to acquire a spoken language in early childhood prior to learning sign language. Possibly, deaf signers who acquired sign language after early childhood comprise two subgroups: (1) those for whom ASL is a second language learned after childhood (a late second language; that is, bilinguals) and (2) those for whom ASL is a first language learned after childhood (a late first language; that is, monolinguals). The hypothesis is reasonable given the facts of deafness. The effects of deafness on auditory sensitivity are not uniform. Children with hearing impairments are heterogeneous with respect to their ability to perceive speech. Small amounts of hearing in early childhood can lead to the spontaneous acquisition of spoken language, ranging from full language development to little or no development. For some deaf signers, therefore, ASL may more accurately be characterized as a second language. For other deaf signers, however, the scenario may have a more unfortunate outcome. Some signers with profound hearing losses may not have acquired any spoken language in early childhood. For these individuals, the acquisition of ASL after early childhood may best be characterized as a first language. The two types of post-childhood learners of ASL may process sign language differently. The late-first-language learners, unlike the late-second-language learners, may not have acquired some basic set of processing skills that are necessary to understand any language easily. Before speculating about the nature of these skills, it is necessary first to determine whether late-first- and late-second-language learners of ASL process it in similar or different ways.

Auditory and Speech Perception Abilities
As a preliminary test of the hypothesis, we measured the auditory sensitivity and speech discrimination skills of sixteen subjects who participated in the previous study. Their age of sign language acquisition ranged from infancy to 13 years. We reasoned that, if the subject had a modicum of auditory sensitivity, then he or she may have been able to use it in the service of spoken language acquisition during early childhood. We measured the subjects' pure-tone thresholds for the frequencies of 250, 500, 1,000, and 2,000 Hertz and speech discrimination skill for audition alone and audition plus speechreading. For the latter task, we used the NU-CHIPS test (Elliott and Katz 1980). The stimuli for the speech discrimination tasks were fifty monosyllabic English words commonly found in the vocabulary of young children.
For all the tasks, the stimulus words were presented over headphones and amplified to a perceptible and comfortable level. For the auditory task, the experimenter indicated to the subject when a stimulus word had been spoken, and the subject then selected a picture depicting the stimulus word from among four alternatives. For the combined auditory and speechreading task, the subject watched and listened to the stimulus word and then selected the appropriate picture. The results confirmed the subjects' self-reports of profound deafness. Mean pure-tone threshold (for the better ear) was 100 dB, with a median of 110 dB (the intensity of conversational speech is around 60 dB). On the auditory speech discrimination task, most subjects performed below chance, with a mean of 5 words and a median of 0 words (chance being 12 words), as table 3.1
shows. When visual information was added to the task (speechreading), mean speech discrimination increased to 21 words, with a median of 15 words (see table 3.1). Although the subjects were profoundly deaf, they nevertheless showed a range of speech discrimination skill, as expected.

Table 3.1 Hearing sensitivity and speech discrimination (a)

Measure                                  Mean      Median    Range
Pure-tone average (PTA)                  100 dB    110 dB    72-110 dB
Auditory speech discrimination (b)       20%       0%        0-88%
Auditory-visual speech discrimination    88%       88%       36-100%

(a) Experiment 4; N = 16
(b) Chance = 25%

Speech discrimination skill was correlated with some measures of sign language processing.6 Pure-tone threshold (hearing sensitivity) was negatively correlated with the grammaticality measure of the sign-processing task (r = -.41, p < .05). The finding means that the more hearing sensitivity the subject had, the more likely he or she was to give grammatical responses on the ASL recall task. Combined auditory and visual speech discrimination skill was positively correlated with the production of semantic lexical substitutions on the sign-processing task (r = +.50, p < .05). The finding means that the more English words the subject was able to identify by listening and watching, the more likely he or she was to make lexical substitutions of a semantic nature on the ASL recall task. These results suggest that language acquisition in early childhood may facilitate the acquisition of a second language after childhood, even when the two languages have completely different sensory and motor forms: visual and gestured in the case of ASL, and auditory and spoken in the case of English. Perhaps deaf signers who were able to acquire English in childhood because they could perceive it use it in some way when they process their second language, ASL. If this explanation is accurate, then it explains why age of acquisition has greater effects on sign language than on spoken language: many deaf signers acquire ASL after early childhood, and it is their first and only language, not their second. Perhaps language acquisition that occurs in early childhood affects some aspects of language-processing skill that underlie any subsequent language processing, regardless of the linguistic details. We explored this hypothesis in greater depth in the fifth study.
Late-First-Language Acquisition Is Not Like Late-Second-Language Acquisition
To test the hypothesis, we sought deaf signers with unique linguistic histories (Mayberry, in press). We recruited subjects who, instead of having been born deaf, lost their hearing in late childhood or early adolescence and subsequently assimilated into the Deaf Community because they were educated in the company of congenitally deaf children who signed. Such individuals are unquestionable cases of second-language acquisition of ASL: because they had normal hearing throughout early childhood, they acquired spoken language on schedule. Each postlingually deafened subject (second-language learner) was matched by chronological age (±5 years) to one subject in each of the following three groups of congenitally deaf subjects, presumably first-language learners of ASL: native learners (0 to 3 years), childhood learners (5 to 8 years), and late learners (9 to 13 years). As in the third study, the subjects recalled complex and long ASL sentences.

First-Language Timing Hypothesis
If age of acquisition has greater effects on the long-range outcome of first- as compared to second-language acquisition, which we call the first-language timing hypothesis (Mayberry, in press), then
deaf signers who acquired ASL after early childhood as a first language, that is, with little or no previously acquired language, should perform more poorly than deaf signers who acquired ASL at the same late age but as a second language, that is, after having completely acquired a first language in early life. Alternatively, if age of acquisition alone, independent of linguistic experience in early childhood, determines the eventual outcome of language acquisition, then deaf signers who acquired ASL at the same older ages should perform similarly on sign-processing tasks regardless of the linguistic details of their early childhoods.

Linguistic Characteristics of Recall Responses
For the three groups of first-language learners (congenitally deaf subjects), the tendency to give grammatically acceptable responses was associated with the age at which they first acquired ASL, as figure 3.10 shows. However, the same was not true for the second-language learners (postlingually deafened subjects), as figure 3.10 also shows. The second-language learners performed less well
than the native learners, but they also outperformed the adolescent learners. The tendency to paraphrase the intended meaning of the stimulus sentences followed a similar pattern. These findings show that linguistic experience in childhood is a crucial factor determining the skill with which sign language, and presumably any language, can be processed in later adulthood (Mayberry, in press).

Figure 3.10
Experiment 5: mean proportion of sentence-recall responses grammatically acceptable for subjects grouped by age of ASL acquisition and previous language acquisition (with years of experience controlled).

Lexical Processing Substitutions
Lexical processing substitutions again coincided with childhood linguistic experience, as figure 3.11 shows. The proportion of lexical substitutions that were related to the meaning of the stimulus sentences decreased with increasing age of ASL acquisition across the three groups of first-language learners (congenitally deaf subjects) but not the second-language learners (postlingually deafened subjects). The reverse trend characterized the subjects' production of phonological-lexical substitutions.
Figure 3.11
Experiment 5: mean proportion of lexical substitutions made in sentence recall related solely to phonological shape or to lexical and sentential meaning (semantic) as a function of age of acquisition and previous language acquisition (with years of experience controlled).

Even though the late-first- and late-second-language learners acquired ASL at the same older ages (i.e., 8 to 15 years), their patterns of lexical processing substitutions were distinct (see figure 3.11). The late-second-language learners showed lexical substitution patterns akin to, but not identical with, those of the native learners. The late-second-language learners made mostly semantic lexical substitutions and some phonological ones. By contrast, the lexical substitution patterns of the late-first-language learners were unique: they made as many phonological-lexical errors as semantic ones, despite years and years of sign language experience. The finding suggests that language acquisition in early childhood determines the efficiency with which any language can be processed in later adulthood. What is the nature and locus of these effects in language processing? The various error and performance patterns of the five studies that I have summarized here suggest a possible explanation.

The Advantages of Acquiring Language in Early Childhood
Several explanations have been proposed as to what it is that the critical period for language acquisition affects. The proposals vary in terms of which aspect of linguistic knowledge and performance is posited to be affected by childhood language acquisition. For example, Newport (1984) proposed that young children are superior to older children and adults in the ability to acquire morphology. Coppieters (1987) proposed that children are better able than adults to acquire complex word knowledge. Bley-Vroman (1989), among others, proposed that children are better able to acquire syntax than adults. Finally, Scovel (1989) proposed that children are better able than adults to acquire the phonetic skills of speech. Some support for each of these proposed explanations can be found in the sign language data that I have reported here. For example, childhood learners of ASL tend to alter or change bound morphology, while late learners tend to strip it. Childhood learners give mostly grammatically acceptable sentences, whereas late learners give as many grammatically unacceptable sentences as acceptable ones. Childhood learners make lexical substitutions grounded in sentence meaning, while late learners tend to make lexical substitutions derived from surface phonological form. Thus, the sign language data show effects of age of acquisition at all levels of linguistic structure. What can account for all these effects?
The apparently multiple effects of age of acquisition on sign language comprehension can be accounted for by an explanation rooted in a spreading-activation model of language processing (Dell 1986). Thus, one single problem could cascade into a multitude of interrelated difficulties (Mayberry and Eichen 1991). Perhaps language acquisition that occurs after early childhood produces controlled phonological processing, that is, nonautomatic and effortful phonological processing. Lexical identification is inefficient and effortful as a consequence. Slow and effortful lexical identification requires that surface phonological form be stored in working memory both until, and in lieu of, access to lexical meaning. Failure to identify lexical stems leads to uncertainty about morphological inflections and derivations. Uncertainty about lexical stems and grammatical morphology contributes to syntactic ambiguity, which, in turn, makes it difficult to integrate the lexical meaning that has already been successfully decoded. Together, all these processing difficulties yield a fragmented semantic context, which, in turn, is of limited help in rectifying the portions of language missed in the first place. Thus, early childhood may be essential for the development of facile and automatic phonological processing.

How does this processing explanation account for the differential performance of the late-second-language learners who acquired ASL after early childhood and the late-first-language learners who acquired ASL at the same late age? Perhaps the late-first- and late-second-language learners experience the same basic problem, namely, controlled and effortful phonological processing with all its attendant and subsequent difficulties. The late-second-language learner, but not the late-first-language learner, may implement some kind of phonological recoding strategy as a means of circumventing the deleterious effects associated with controlled phonological processing. Phonologically recoding words from the second language that have already been identified into the phonology of the first language capitalizes on a well-developed phonological system acquired in early childhood. This strategy would serve to boost working memory capacity for lexical meaning already accessed and thereby provide more semantic context to combat effortful lexical decoding. With more semantic context, the late-second-language learner, but not the late-first-language learner, is better able to fill the lexical, morphological, and grammatical gaps that occur during second-language processing for two additional and related reasons. First, the late-second-language learner may be more aware that something is missing, due to expectations about linguistic structure and meaning derived from fully developed linguistic knowledge. In addition, the late-second-language learner, but not the late-first-language learner, may be able to make reasonable guesses at a syntactic level as to what was missed. Simply stated, the late-second-language learner may have difficulties in pattern and word recognition which he or she can circumvent with good guessing ability. The late-first-language learner may have identical difficulties but poor guessing ability. Although this is not the only explanation of the phenomena that I have reported here, it is a possible one.
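The cascade just described can be caricatured in a few lines of code. The sketch below is a toy model, not Dell's (1986) spreading-activation network and not a fitted model of these data: the working-memory budget, per-word costs, and probabilities are invented solely to show how a single bottleneck at phonological decoding could propagate upward.

```python
import random

# Toy caricature of the processing cascade described above: effortful
# ("controlled") phonological decoding consumes a working-memory budget,
# leaving less capacity for storing word meanings and integrating them into
# sentence context. All numbers are invented for illustration.

CAPACITY = 10.0  # arbitrary working-memory budget per sentence

def comprehend(sentence_len: int, phon_cost: float, seed: int = 0) -> float:
    """Return the proportion of words whose meaning survives to integration."""
    rng = random.Random(seed)
    budget = CAPACITY
    understood = 0
    for _ in range(sentence_len):
        budget -= phon_cost            # pay for phonological decoding
        if budget <= 0:                # capacity exhausted: meaning is lost
            continue
        # Remaining capacity sets the odds that this word's meaning is
        # retained in working memory and integrated with sentence context.
        if rng.random() < min(1.0, budget / CAPACITY + 0.3):
            understood += 1
    return understood / sentence_len

# Automatic (native-like) vs. controlled (late-learner-like) decoding:
print(comprehend(sentence_len=10, phon_cost=0.2))  # cheap decoding: near 1.0
print(comprehend(sentence_len=10, phon_cost=1.5))  # budget runs out mid-sentence
```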
To summarize, the study of American Sign Language reveals a great deal about the critical period for language acquisition. The results show, first, that the effects of the critical period for language acquisition are modality free: sign language acquisition is constrained by a critical period to the same degree as is spoken language acquisition; the effects are robust and permanent. Second, the effects of age of acquisition on sign language processing are apparent at every level of linguistic structure of ASL and, moreover, are apparent across sign dialects and a variety of processing tasks. Third, the effects of age of acquisition are not due to underlying deficits in fine-motor production, memory capacity, or general cognitive skills. Finally, age of acquisition is especially critical for first-language acquisition.

Lenneberg clearly believed that deafness provided some of the best evidence that childhood is essential to the outcome of language acquisition. He summarized the effects of postlingual deafness on language acquisition by saying that "it seems as if even a short exposure to language, a brief moment during which the curtain has been lifted and oral communication established, is sufficient to give a child some foundation on which much language may be based" (Lenneberg 1967, 155). The five studies that I have summarized here show that Lenneberg was right. Deafness offers unique insights into the
relationship between childhood and language acquisition. Indeed, the insights are much deeper than even Lenneberg suspected: the sensory and motor fabric of early language experience is irrelevant to the unfolding of the language acquisition process.

Acknowledgments
The research reported here was primarily supported by the National Institute on Deafness and Other Communication Disorders. I am grateful to NIH for research support and to the many individuals whose help with this work was invaluable. First and foremost, I thank the subjects who graciously and patiently participated in these studies. I also thank Ellen B. Eichen for lending her sharp linguistic eye to volumes of data, Susan Goldin-Meadow for allowing me to join her remarkable lab for several years, Howard Nusbaum for thought-provoking discussions about the data, Drucilla Ronchen and Susan Tuchman Gerber for tireless and enthusiastic subject recruitment and testing, and Judy Goodman for carefully reading and commenting on an earlier version of this chapter. Finally, I thank Carl, Isaac, and Nathan for their marvelous insights into first- and second-language acquisition.

Notes
1. See Goldin-Meadow and Mylander (1990) for a detailed description of the linguistic properties of the improvised gestures of deaf children who know no sign language.
2. These comments come from interviews conducted in the course of the Mayberry and Eichen (1991) study. The use of corporal punishment in education is clearly not unique to deaf children. However, these reports show that the teachers of deaf children believed that spontaneous gesture was a form of serious misbehavior in need of correction.
3. For the experimental details of these studies, see the published reports.
4. Wilbur (1987) gives a detailed description of the structural differences between ASL and PSE.
5. An important difference between the child's error and the adult's is that the adult presumably knows the concepts represented by both the intended target word and the mistake, whereas the child may not.
6. This study is reported in detail here because it is unpublished elsewhere.

References
Bach, M. J. and Underwood, B. J. (1970). Developmental changes in memory attributes. Journal of Educational Psychology, 61, 292-296.
Biemiller, A. (1970). The development of the use of graphic and contextual information as children learn to read. Reading Research Quarterly, 6, 75-96.
Bley-Vroman, R. (1989). What is the logical problem of foreign language learning? In S. M. Gass and J. Schachter (eds.), Linguistic perspectives on second language acquisition (pp. 41-88). Cambridge: Cambridge University Press.
Colombo, J. (1982). The critical period concept: Research, methodology and theoretical issues. Psychological Bulletin, 91, 260-275.
Cooper, J. and Flowers, C. (1987). Children with a history of acquired aphasia: Residual language and academic impairments. Journal of Speech and Hearing Disorders, 52, 251-262.
Coppieters, R. (1987). Competence differences between native and non-native speakers. Language, 63, 544-573.
Curtiss, S. (1977). Genie: A psycholinguistic study of a modern-day "wild child". New York: Academic Press.
Cutler, A., Mehler, J., Norris, D., and Segui, J. (1989). Limits on bilingualism. Nature, 340, 229-230.
Cziko, G. (1980). Language competence and reading strategies: A comparison of first- and second-language oral reading errors. Language Learning, 30, 101-114.
Davis, K. (1947). Final note on a case of extreme isolation. American Journal of Sociology, 52, 432-437.
de Boysson-Bardies, B. and Vihman, M. (1991). Adaptation to language: Evidence from babbling and first words. Language, 67, 297-319.
Dell, G. S. (1986). A spreading-activation theory of retrieval in sentence production. Psychological Review, 93, 283-321.
Elliott, L. and Katz, D. (1980). The Northwestern University children's perception of speech test (NU-CHIPS). St. Louis: Auditec.
Felzen, E. and Anisfeld, M. (1970). Semantic and phonetic relations in the false recognition of words by third- and sixth-grade children. Developmental Psychology, 3, 163-168.
Flege, J. E. (1987). A critical period for learning to pronounce foreign languages? Applied Linguistics, 8, 162-177.
Flege, J. E. and Fletcher, K. L. (1992). At what age of learning (AOL) do foreign accents first become perceptible? Journal of the Acoustical Society of America, 91, 370-389.
Fromkin, V., Krashen, S., Curtiss, S., Rigler, D., and Rigler, M. (1974). The development of language in Genie: A case of language acquisition beyond the "critical period." Brain and Language, 1, 81-107.
Goldin-Meadow, S. and Mylander, C. (1990). Beyond the input given: The child's role in the acquisition of language. Language, 66, 323-335.
Johnson, J. S. and Newport, E. L. (1989). Critical period effects in second-language learning: The influence of maturational state on the acquisition of English as a second language. Cognitive Psychology, 21, 60-99.
Klima, E. and Bellugi, U. (1979). The signs of language. Cambridge, Mass.: Harvard University Press.
Koluchova, J. (1972). Severe deprivation in twins: A case study. Journal of Child Psychology and Psychiatry, 13, 107-114.
Krashen, S., Long, M., and Scarcella, R. (1979). Accounting for child-adult differences in second language rate and attainment. TESOL Quarterly, 13, 107-114.
Lane, H. (1984). When the mind hears: A history of the deaf. New York: Random House.
Lenneberg, E. H. (1967). Biological foundations of language. New York: Wiley.
Levine, S. L., Huttenlocher, P., Banich, M. T., and Duda, E. (1987). Factors affecting cognitive functioning in hemiplegic children. Developmental Medicine and Child Neurology, 27, 27-35.
Liddell, S. K. (1980). American Sign Language syntax. The Hague: Mouton.
Mason, M. K. (1942). Learning to speak after six and one half years of silence. Journal of Speech and Hearing Disorders, 7, 295-304.
Mayberry, R. I. (1992). The cognitive development of deaf children: Recent insights. In S. Segalowitz and I. Rapin (eds.), Child neuropsychology, vol. 7: Handbook of neuropsychology (F. Boller and J. Grafman, series eds.) (pp. 51-68). Amsterdam: Elsevier.
Mayberry, R. I. (in press). First-language acquisition differs from second-language acquisition: The case of American Sign Language. Journal of Speech and Hearing Research.
Mayberry, R. I. and Eichen, E. B. (1991). The long-lasting advantage of learning sign language in childhood: Another look at the critical period for language acquisition. Journal of Memory and Language, 30, 486-512.
Mayberry, R. I. and Fischer, S. D. (1989). Looking through phonological shape to lexical meaning: The bottleneck of non-native sign language processing. Memory and Cognition, 17, 740-754.
Molfese, D., Freeman, R., and Palermo, D. (1975). The ontogeny of brain lateralization for speech and nonspeech stimuli. Brain and Language, 2, 356-368.
Newport, E. (1984). Constraints on learning: Studies in the acquisition of American Sign Language. Papers and Reports on Child Language Development, 23, 1-22.
Newport, E. L. (1990). Maturational constraints on language learning. Cognitive Science, 14, 147-172.
Niccols, A. (1987). The development of metalinguistic ability and its relation to reading. Paper presented at the meeting of the Society for Research in Child Development, Baltimore, Md., April 1987.
Oyama, S. (1976). A sensitive period for the acquisition of a nonnative phonological system. Journal of Psycholinguistic Research, 5, 261-285.
Oyama, S. (1978). The sensitive period and comprehension of speech. Working Papers on Bilingualism, 16, 1-17.
Padden, C. and Humphries, T. (1988). Deaf in America: Voices from a culture. Cambridge, Mass.: Harvard University Press.
Penfield, W. G. (1959). Speech and brain mechanisms. Princeton, N.J.: Princeton University Press.
Penfield, W. G. (1963). The second career, with other essays and addresses. Boston: Little, Brown.
Perlmutter, D. M. (1991). The language of the deaf. The New York Review of Books, 28 March, 65-72.
Rawlings, B. and Jensema, C. (1977). Two studies of the families of hearing impaired children (Research Bulletin, Series R, No. 5). Washington, D.C.: Gallaudet University, Office of Demographic Studies.
Schein, J. and Delk, M. (1974). The deaf population of the United States. Silver Spring, Md.: National Association of the Deaf.
Scovel, T. (1989). A time to speak: A psycholinguistic inquiry into the critical period for human speech. Cambridge, Mass.: Newbury House Publications.
Singh, J. A. and Zingg, R. (1966). Wolf-children and feral man. New York: Archon Books/Harper and Row.
Skuse, D. H. (1988). Extreme deprivation in early childhood. In D. Bishop and K. Mogford (eds.), Language development in exceptional circumstances (pp. 29-46). Edinburgh: Churchill Livingstone.
Stemberger, J. P. (1989). Speech errors in early child language production. Journal of Memory and Language, 28, 164-188.
Toyota, H. (1983). Effects of sentence context on memory attributes in children. Psychological Reports, 52, 243-246.
Vihman, M. M. (1981). Phonology and the development of the lexicon: Evidence from children's errors. Journal of Child Language, 8, 239-264.
Wada, J. and Davis, A. (1977). Fundamental nature of human infant's brain asymmetry. Canadian Journal of Neurological Science, 4, 203-207.
Werker, J. (1989). Becoming a native listener. American Scientist, 77, 54-59.
Wilbur, R. B. (1987). American Sign Language: Linguistic and applied dimensions. Boston: College-Hill Press/Little, Brown.
PART II
Perceptual Learning of Phonological Systems
Chapter 4
Cross-Language Speech Perception: Developmental Change Does Not Involve Loss
Janet F. Werker

There are profound changes found across the age range in cross-language speech perception performance, but an adequate understanding of how and why these developmental changes take place is still lacking. In this chapter, I review our research in cross-language speech perception, with emphasis on how it fits into the short history of research in this area. Three overlapping periods of this research will be differentiated. The first period includes the early seminal work indicating that adults sometimes have difficulty perceiving and producing nonnative phonetic contrasts but that young infants can apparently discriminate nonnative contrasts with ease. The research findings from this period culminated in the mistaken hypothesis that age-related differences in cross-language speech perception result from an absolute loss of perceptual discriminability due to lack of listening experience. The second period was characterized by a rising skepticism concerning the adequacy of an absolute loss explanation. Research conducted during this period indicated that adults can be trained to discriminate nonnative contrasts, that there are substantial differences in the ease with which nonnative contrasts can be discriminated, even without any training, and that there are significant differences in research findings depending upon the testing procedure employed. The third period, which represents most of the current research, is characterized by a firm understanding that loss is not an adequate explanation for age and experiential influences on cross-language speech perception and that more complex explanations need to be found. This search for more adequate explanations has led to a rich proliferation of theory-motivated research.

Developmental Changes in Sensitivity to Nonnative Contrasts

When we speak to one another, language is processed with rapidity and ease. However, research in speech processing has shown that this ability is actually quite remarkable. The physical (acoustic) form of any particular utterance, word, or phoneme varies tremendously, depending on the individual speaker, the rate of speaking, and the context in which it is spoken. In order to process speech, the listener must be able to recover the underlying identity over and above this considerable variation. It is of considerable interest to identify the mechanisms that make possible such rapid and efficient processing of speech. Cross-language speech perception research allows a unique perspective on this question by identifying the perceptual abilities of the young infant prior to experience with any specific language and by charting
age-related changes in performance as a function of experience with a particular language. When we began investigating cross-language speech perception, the existing empirical research had indicated that young infants could discriminate both native and nonnative phonetic contrasts (Aslin et al. 1981; Lasky, Syrdal-Lasky, and Klein 1975; Streeter 1976; Trehub 1976), but that adults and children often have difficulty discriminating nonnative distinctions (Goto 1971; Lisker and Abramson 1970; MacKain, Best, and Strange 1981; Miyawaki et al. 1975; Sheldon and Strange 1982; Singh and Black 1966; Snow and Hoefnagel-Hohle 1978; Trehub 1976). On the basis of this fairly consistent pattern of findings, it was hypothesized by several researchers that infants are born with the ability to discriminate the universal set of phonetic distinctions and that this universal ability declines, or is lost, as a function of lack of specific listening experience (Eimas 1975; Strange and Jenkins 1978). We began our research endeavor in an effort to test the validity of this original hypothesis and, assuming it was true, to establish the point in development at which loss was first apparent. Basic replication research was a necessary first step for two reasons: (1) all of the existing research comparing infants and adults had involved the use of different types of procedures for the two age groups, raising the possibility that the differences in performance between infancy and adulthood stemmed from differences in procedural demands rather than perceptual capabilities, and (2) there were some inconsistencies in the infant data, leading to the possibility that infant phonetic perception might not be as universal as had been claimed.

The most serious concern stemming from the use of different procedures was that the procedures that were used with infants were typically more sensitive than those that had been used with adults. For example, the cross-language infant research involved testing infants in either a high-amplitude sucking (Streeter 1976; Trehub 1976), heart-rate deceleration (Lasky, Syrdal-Lasky, and Klein 1975), or conditioned head-turn discrimination task (Aslin et al. 1981; Eilers, Wilson, and Moore 1979). However, many of the adult experiments involved testing subjects in identification tasks (e.g., Lisker and Abramson 1970) or either oddity- or AXB-discrimination tasks (e.g., MacKain, Best, and Strange 1981; Miyawaki et al. 1975). Identification-, oddity-, and AXB-discrimination tasks all have potentially greater memory demands than the more straightforward discrimination tasks used with infants (for a discussion of the differing demands in adult discrimination tasks, see Carney, Widin, and Viemeister 1977; for a discussion of differential demands in infant tasks, see Eilers and Oller 1988; Jusczyk 1985; Kuhl 1985). It was, therefore, possible that the apparent advantage of young infants was simply an artifact of the more sensitive testing procedures. Thus, the first step was to compare cross-language sensitivity in infants and adults by testing both age groups with the same stimuli and using a similar procedure. To this end, we adopted a method of testing that can be implemented in very similar forms with infants (5 1/2 months or older), children, and adults. The procedure used with infants is called the head turn procedure (for a description of this procedure, see Kuhl 1980).
Basically, this is a category change task in which the subject has to monitor a continuous background of syllables from one phonetic category (e.g., /ba/) and signal when the stimuli change to a contrasting phonetic category (e.g., /da/). Adults and children signal detection of this change by pressing a button. Correct button presses are reinforced with the presentation of a flashing light for older children and adults or an electronically activated animal for younger children. Incorrect button presses are not reinforced, and misses are not signalled. The procedure differs only slightly for infants. Infants are conditioned to turn their head toward the sound source when they detect a change in the speech sound. Correct head turns are reinforced with electronically activated animals that become illuminated inside a smoked plexiglass box. As is the case with children and adults, incorrect head turns are not reinforced (for details of this procedure, see Kuhl 1980; for our early implementation, see Werker et al. 1981).
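The logic of this category-change procedure can be summarized in a short simulation. The sketch below is schematic and hypothetical: it is not the software used in these experiments, and the change/control mix, trial count, and 8-of-10 criterion are placeholder values rather than the actual laboratory parameters.

```python
import random

# Schematic simulation of the conditioned head-turn (category-change) task
# described above: the listener monitors a background category (e.g., /ba/)
# and is reinforced for responding when tokens switch to the contrast
# category (e.g., /da/). Parameters are placeholders, not the real ones.

def run_session(p_hit: float, p_false_alarm: float,
                n_trials: int = 25, seed: int = 0) -> bool:
    """Simulate one listener; return True if the criterion is reached."""
    rng = random.Random(seed)
    correct = []
    for _ in range(n_trials):
        if rng.random() < 0.5:                       # change trial
            responded = rng.random() < p_hit         # head turn / button press
            correct.append(responded)                # hit; misses unreinforced
        else:                                        # control (no-change) trial
            responded = rng.random() < p_false_alarm
            correct.append(not responded)            # correct rejection
        # placeholder criterion: 8 of the last 10 trials correct
        if len(correct) >= 10 and sum(correct[-10:]) >= 8:
            return True
    return False

# A listener who truly discriminates the contrast passes quickly;
# a listener who is guessing rarely reaches criterion.
print(run_session(p_hit=0.9, p_false_alarm=0.1))  # typically True
print(run_session(p_hit=0.5, p_false_alarm=0.5))  # typically False
```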
The second potential problem, inconsistencies in the infant data, revolved primarily around contradictory claims with regard to whether young, English-learning infants can discriminate the non-English lead boundary in voice onset time (VOT). In the original infant speech perception experiment conducted by Eimas et al. (1971), it appeared that young infants had the same VOT boundaries as English-speaking adults and that there was no discrimination peak at the prevoicing VOT boundary. Similar results were obtained by Butterfield and Cairns (1974) and by Eilers, Wilson, and Moore (1979). In contrast, Aslin et al. (1981) revealed that English-learning infants do have sensitivity even to the lead boundary in VOT if tested in a sensitive, staircase procedure. This latter study made it clear that young, English-learning infants can discriminate both native and nonnative VOT contrasts. It also raised the possibility that some contrasts might be perceptually more difficult for young infants than other contrasts, regardless of their status as native or nonnative.

To address the potential interpretative problems arising from variations in ease of discriminability for different contrasts, we decided to measure age-related changes in nonnative speech discrimination using nonnative contrasts that could be expected to vary in their ease of discriminability on linguistic and acoustic criteria. In an early experiment, we compared English-learning infants, aged 6-8 months; English-speaking adults; and Hindi-speaking adults on their ability to discriminate the English and Hindi voiced bilabial versus alveolar contrast /ba/-/da/ plus two non-English Hindi contrasts, selected to vary in their potential difficulty (for details, see Werker et al. 1981). The Hindi place-of-articulation distinction between retroflex and dental voiceless stop consonants /Ta/-/ta/ was selected as a potentially difficult non-English contrast, as it is rare across the world's languages and has a restricted distribution in those languages in which it does occur (Stevens and Blumstein 1978). The Hindi voicing distinction between breathy voiced and voiceless unaspirated dental stops /dha/-/tha/ was selected as a potentially easier contrast, as it is more common both across and within languages. Also, there was reason to believe that the acoustic cues differentiating the two phones in the retroflex/dental contrast are acoustically less salient than those in the voicing contrast (Ladefoged 1982; Ladefoged and Bhaskararao 1983; Stevens and Blumstein 1978; see Werker et al. 1981 for an analysis of the stimuli).

All stimuli were produced by a native Hindi speaker. Several exemplars of each phoneme were recorded, and eight from each category were selected. Final stimulus selections were based on similarity in intensity, duration, fundamental frequency, and intonation contour. The results from this early study were as predicted. Virtually all subjects in all groups could discriminate the English /ba/-/da/ contrast. Also, the 6-8-month-old English-learning infants and the Hindi-speaking adults could easily discriminate both Hindi contrasts. However, significantly fewer English-speaking adults could discriminate the Hindi contrasts than either Hindi adults or English-learning infants. English adults had particular trouble with the difficult retroflex/dental place-of-articulation distinction (Werker et al. 1981). Compared to the 100% of Hindi adults who could discriminate both Hindi contrasts, only 40% of the English-speaking adults could discriminate the (potentially easy) Hindi voicing contrast, and only 10% could discriminate the retroflex/dental distinction. Following a short training procedure (only twenty-five trials), 70% of the English-speaking adults could discriminate the Hindi voicing contrast, but this training did not improve performance on the retroflex/dental distinction.
The results from this early experiment clarified several points. First, there is an effect of experience on speech perception in that Hindi adult listeners did significantly better than English adult listeners at discriminating both non-English Hindi speech contrasts. The finding that the effect of experience was more pronounced for the retroflex/dental than for the voicing contrast made it clear that some nonnative contrasts are perceptually easier than others. Finally, it was evident that, when using the same procedure, there are significant differences between infants and adults on their ability to discriminate nonnative speech contrasts. Thus these results confirmed and extended the existing data pattern with respect to cross-language speech perception. Infants of 6-8 months of age and Hindi-speaking adults could discriminate both Hindi contrasts with ease, while adult English listeners had difficulty, particularly with the retroflex/dental contrast. In an attempt to ascertain when in development the change in nonnative sensitivities might first be apparent, we subsequently tested English-speaking children, ages 12, 8, and 4, using the button-press version of the head turn task. Because we viewed the developmental change in cross-language speech perception as mediated by some sort of loss in sensory sensitivity, we expected the decline to be evident around puberty, the age at which Lenneberg (1967) had claimed the critical period closed for acquisition of an accent-free second language (see also Snow and Hoefnagel-Hohle 1977).
To our surprise, the results indicated that children ages 12, 8, and even 4 performed as poorly as English-speaking adults on the Hindi non-English contrasts (Werker and Tees 1983). In fact, the 4-year-old children performed more poorly than the older children and English adults on the easier non-English contrast, the Hindi voicing contrast. This effect was evident even though the 4-year-olds could easily discriminate the English contrast in this procedure and even though Hindi-learning children of this age can also discriminate both of these contrasts when tested in this procedure. These results thus indicated quite clearly that a developmental change is evident before 4 years of age. It was the poor performance of the 4-year-olds on the Hindi voicing contrast that led us to suspect that the developmental change might be caused by some sort of attentional or perceptual reorganization rather than a sensory loss. This suspicion arose from the fact that 4-year-olds have been shown to be very rigid rule followers in other domains when they have just recently figured out the rules (Carter and Patterson 1982) and may similarly be very rigid rule followers when they first figure out the phonological rules of their native language. Thus, the first seeds of doubt were sown in our minds with respect to the explanation that we and others had given for developmental changes in cross-language speech perception. Nevertheless, before turning to that question, we continued our exploration of when in development there is first evidence of performance decrements in nonnative speech perception tasks.

The next series of studies was begun with two goals in mind. The first goal was to chart developmental changes in cross-language speech perception between infancy and 4 years of age. The second goal was to make sure that the findings with respect to the retroflex/dental distinction would generalize to other nonnative contrasts. In this endeavor, we sought to find another nonnative contrast that would be perceptually difficult for English listeners. It is important to note here that the search was not easy. Many nonnative contrasts are almost immediately discriminable to English adult listeners. However, since we were not interested in pursuing the developmental course of perception of easy non-English contrasts, we continued to search for a more difficult contrast. We selected a Native American language from the Pacific Northwest because many of these languages have very different phonologies from English. In particular, the phoneme inventory involves an extended series of consonants produced in the back part of the vocal tract, further back than the velar place of articulation used in English. We selected an Interior Salish language called Thompson, or more accurately, Nthlakampx. This language is spoken by approximately 200 speakers who live around Merritt, British Columbia. Two elders of the community who were known to be excellent speakers and who had some linguistics training (through a program at the University of Victoria designed to help Native Canadian people record and preserve their languages) served as informants. With the help of these speakers, we came up with a list of Nthlakampx words involving minimal pair contrasts. We then recorded the native speakers pronouncing several of these words, followed by a pronunciation of the first consonant and vowel.
This method of recording was necessary since there is not an accepted orthography for the language, making it impossible to write the CV syllable and nonintuitive to ask a native speaker to pronounce the CV syllable in isolation. The next step was to listen carefully to the several recordings. We then selected the syllables that we1 found to be most difficult to discriminate. The final contrast selected involved a glottalized velar versus a glottalized uvular stop, phoneticized by the native informants as /k`i/-/q`i/. As noted in our original paper, the vowels vary somewhat freely in this language; thus, several steps were taken to ensure that this consonant contrast could not be detected on the basis of different vowel color (see Werker and Tees 1984a). English-speaking adults were then compared to both Nthlakampx-speaking adults and English-learning infants on their ability to discriminate the glottalized velar/uvular contrast. As predicted, Nthlakampx-speaking adults and English-learning infants aged 6-8 months could discriminate this contrast, but English-speaking adults showed difficulty (only about 30% could discriminate this contrast). Following this replication, we began a series of studies attempting to ascertain when in development the decline occurred. Pilot tests involving children between the ages of 8 months and 4 years indicated that changes were taking place around 1 year of age. The pilot work was then followed with a series of cross-sectional and longitudinal studies
with infants. In the first study, English-learning infants aged 6-8, 8-10, and 10-12 months were compared on their ability to discriminate the Hindi retroflex/dental and the Nthlakampx glottalized velar/uvular contrasts (Werker and Tees 1984a). Before being tested on the non-English contrasts, the infants were required to first show they could perform in the head turn procedure on the English /ba/-/da/ distinction. All infants were then given twenty-five trials in which to reach discrimination criterion on the non-English contrast. Before concluding that any infant who failed to reach criterion in that number of trials really could not discriminate the contrast and had not just lost interest in the procedure, all such infants were subsequently retested on the English /ba/-/da/ distinction. Data were retained as meaningful only from infants who subsequently passed the /ba/-/da/ test. The results indicated that almost all of the infants aged 6-8 months could discriminate both non-English contrasts, but among the infants aged 10-12 months, only 2 out of 10 could discriminate the retroflex/dental contrast and only 1 out of 10, the velar/uvular (Werker and Tees 1984a). The infants aged 8-10 months showed an intermediate pattern of performance. This pattern was replicated in a longitudinal study in which a small group of six infants was tested at two-month intervals (Werker and Tees 1984a). Using the same procedure, a few infants who were learning Hindi and/or Nthlakampx as a native language were shown to be able to discriminate the contrast from their language-learning environment when they reached 11 months of age (Werker and Tees 1984a). More recently, the results with respect to the English-learning infants were replicated in the same procedure using synthetically produced voiced retroflex/dental stimuli (Werker and Lalonde 1988). English-learning infants aged 6-8 months were shown to be able to discriminate multiple, synthetically produced tokens according to the adult Hindi retroflex/dental boundary but not according to an arbitrary boundary location that does not correspond to any adult phonetic category. This result with the younger infants showed that infant sensitivity to nonnative phonetic categories is categorical-like and is related to the phonetic relevance of the contrast in question. When given the same number of testing trials, English-learning infants aged 11-13 months were not able to discriminate the stimuli according to either the arbitrary boundary location or the Hindi retroflex/dental boundary but were, of course, able to discriminate them according to the English /ba/-/da/ boundary. Also, Best and McRoberts (1989; Best, this volume) have replicated the developmental change between 6 and 12 months of age for English-learning infants tested on the Nthlakampx glottalized velar/uvular contrast, /k`i/-/q`i/, using a habituation/dishabituation testing procedure rather than the head turn procedure used in our previous work. Taken together, these replications with new contrasts and different testing procedures provide strong confirmation that the developmental change in nonnative speech perception evident within the first year of life is related to listening experience.
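The distinction between discriminating according to a phonetic boundary and according to an arbitrary one (Werker and Lalonde 1988) can be sketched schematically. In the toy code below, the continuum positions and boundary placements are invented placeholders; the real stimuli were synthesized consonant-vowel syllables, and the real boundaries were established from adult identification data.

```python
# Schematic of the phonetic-boundary vs. arbitrary-boundary design logic:
# the same synthetic continuum is split either at the phonetically relevant
# boundary or at an arbitrary point, and discrimination is tested across
# each split. Step indices and boundary locations are invented placeholders.

continuum = list(range(8))         # tokens 0..7 along a synthetic continuum

def split(boundary: int):
    """Partition continuum tokens into two 'categories' at a boundary."""
    return ([t for t in continuum if t < boundary],
            [t for t in continuum if t >= boundary])

phonetic_split = split(4)          # placeholder for the Hindi boundary
arbitrary_split = split(2)         # placeholder for an arbitrary boundary

def test_pairs(categories):
    """Cross-category token pairs a listener must treat as 'different'."""
    a, b = categories
    return [(x, y) for x in a for y in b]

# Infants aged 6-8 months discriminated pairs crossing the phonetic boundary
# but not pairs crossing the arbitrary one; by 11-13 months, neither.
print(len(test_pairs(phonetic_split)))   # 16 cross-boundary pairs
print(len(test_pairs(arbitrary_split)))  # 12 cross-boundary pairs
```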
Sensitivity to Nonnative Contrasts

During the second period in cross-language speech-perception work, researchers became increasingly skeptical about the appropriateness of the strong claim that experiential effects are equally apparent for all nonnative contrasts and that experiential effects are permanent. Research made it clear that experience does not affect all nonnative contrasts equally, as adults can discriminate many nonnative contrasts with little difficulty (see, for example, Eilers, Wilson, and Moore 1979). Similarly, our research showed that the Hindi voicing contrast is easier than the place-of-articulation contrast for English listeners (Werker et al. 1981). The finding that minimal training can improve performance on at least some nonnative contrasts strengthened the possibility that experiential influences might not be permanent. Questions still remained as to whether it is possible to train adults to discriminate all nonnative distinctions, whether some training methods are more effective than others, and whether some nonnative contrasts are untrainable. A careful consideration of the historical context in which the work was done helps trace how the understanding of these
issues has changed over the years. One of the most influential early training studies was conducted by Winifred Strange (1972). In this study, discrimination training was shown to have limited effectiveness on English adults' ability to discriminate non-English VOT contrasts. Because this finding was consistent with the expectations generated by the seminal work of Lisker and Abramson (1967; Abramson and Lisker 1970), it was largely unchallenged by the research community for a number of years. Subsequently, however, a number of investigators have reported findings indicating that English speakers can be fairly easily trained to discriminate the non-English lead boundary in VOT. For example, Carney, Widin, and Viemeister (1977) were able to train English adults to discriminate synthetic stimuli differing in VOT at a number of arbitrary points along a continuum using a same/different (AX) procedure with a short interstimulus interval (ISI) and feedback. Moreover, Pisoni et al. (1982) reported that English adults' ability to discriminate the lead boundary in VOT could be improved by simple labeling of prototypical stimuli from three VOT categories. Such minimal training even generalized to VOT distinctions at a novel place of articulation (McClaskey, Pisoni, and Carrell 1983). This kind of research allowed the field to move away from the question of whether or not training works to the more complicated question of why some training procedures are more effective than others. The training procedure employed by Strange in the original 1972 study involved primarily discrimination training, whereas that employed by Pisoni and his colleagues (1982) required subjects to label the stimuli. Perhaps, as suggested both by Jamieson and Morosan (1989) and by Pisoni, Lively, and Logan (this volume), training procedures involving labeling are more effective at facilitating linguistically relevant perception. Consistent with this notion, Strange and Dittman (1984) reported little success at training Japanese adults to discriminate /r/ from /l/, and MacKain, Best, and Strange (1981) found that only extensive, naturalistic second-language learning was effective at improving Japanese adults' ability to discriminate the English /ra/-/la/ distinction. However, Logan, Lively, and Pisoni (1991) report that, in some training procedures, Japanese adults can learn to discriminate the English /r/-/l/ distinction. In this study, Logan and colleagues trained Japanese listeners on the English /r/-/l/ distinction over a period of three weeks, using a two-choice labeling procedure. They ensured that the subjects were trained in a variety of contexts (different positions in the syllable) and were exposed to multiple speakers. Thus, training conditions were set up to facilitate generalization. It is useful to note that, even when training is quite successful, adults may fail to achieve nativelike levels of performance (see Polka 1991). For example, in their recent study, Logan, Lively, and Pisoni (1991) note that, although the overall change in performance between the pre- and posttest sessions was highly significant, the overall amount of improvement was less than 10%, even though training continued for three weeks. Also, the relative success of training varied according to position in the syllable (it was not at all successful for clusters in initial position), and training generalized more to similar than to dissimilar testing contexts.
Finally, even after this amount of training, it is questionable whether the Japanese adults were performing as well as their English-speaking counterparts. There is some recent research showing that, under certain testing conditions, nonnative listeners have more difficulty perceiving even relatively easy phones than do native listeners. In recent work, Takata and Nabelek (1990) compared native English speakers to native Japanese speakers who are fluent in English on their performance in the Modified Rhyme Test. Results indicated that, although the two groups performed similarly under quiet testing conditions, the native Japanese speakers performed significantly more poorly than the native English speakers in conditions of noise and/or reverberation. Not surprisingly, one of the more common errors for native Japanese listeners was an r/l confusion. Thus, there is no doubt that linguistic experience has a profound effect on speech perception, but there is also no doubt that the effects of experience can be overcome, at least in part, in certain training and testing situations. Perhaps the most useful way to present the results of cross-language training studies is to report not only the amount of improvement between the pre- and posttest sessions but also the results of analyses comparing posttraining performance levels to those of native speakers (see Polka 1991).
It is also useful to assess the long-term effectiveness of training. For example, MacKain, Best, and Strange (1981) evaluated the long-term consequences of second-language experience for discrimination. In related work, our early research showed that training immediately facilitated performance on the Hindi voicing contrast, /dha/-/tha/ (Werker et al. 1981), but was not successful at facilitating performance on the retroflex/dental contrast, /ta/-/Ta/. In a subsequent study, we found that more extensive discrimination training (500 trials) did significantly facilitate performance of at least some English speakers on the retroflex/dental contrast. However, the effect of training had disappeared when subjects returned to the lab a few weeks later (Tees and Werker 1984). In this experiment, training was clearly not as sophisticated as that used in other work (Logan, Lively, and Pisoni 1991; Strange and Dittman 1984). The training trials simply involved feedback at regular intervals during testing in the button-press category change procedure. It is quite possible that an alternative training procedure would have been more effective at improving long-term performance.2 The focus on loss as an explanation for the attenuation of discrimination performance led researchers to ignore an important fact, namely, that poor discrimination performance was rarely all-or-none, even without any training. Some residual auditory discrimination skill remains. Thus, it seemed possible that listeners have multiple means to the same ends. In previous work, it had been shown that adults may use both auditory and phonemic processing in their attempts to discriminate sounds (see also Repp 1984). To examine whether adults would show a similar sensitivity to nonnative perception even without training, we tested English adults on the Hindi retroflex/dental and the Nthlakampx glottalized velar/uvular contrasts in a more sensitive procedure. Because we were interested in assessing sensitivity, we tested adult English speakers in a same/different (AX) discrimination task (Carney, Widin, and Viemeister 1977). Using the AX procedure, we found that adult subjects can discriminate both contrasts at a 500 but not a 1500 msec ISI (Werker and Tees 1984b). In a subsequent study using just the Hindi retroflex/dental stimuli, we tested subjects for five blocks of trials in one of three ISI conditions: 1500, 500, and 250 msec (Werker and Logan 1985). Again, the results revealed sensitivity to the nonnative phonetic contrasts in the shorter ISI conditions. In fact, there was even evidence that subjects can discriminate nonphonetic acoustic cues within either the retroflex or the dental category at the 500 msec ISI (Werker and Logan 1985). In an attempt to make sense of this pattern of findings, we proposed that subjects can use one of three different processing strategies (phonemic, phonetic, or acoustic), depending on the interstimulus interval. When tested with an ISI over 500 msec, subjects appeared to use a phonemic processing strategy and were unable to discriminate the nonnative contrast. Thus, when the ISI is long, subjects seem unable to discriminate two stimuli unless they can assign them distinct linguistic labels. At shorter ISIs, subjects showed evidence of using both a phonetic and an acoustic strategy.
Evidence for a phonetic strategy was provided by subjects who could discriminate retroflex from dental exemplars but could not discriminate among the several exemplars within either phonetic category. Evidence of acoustic processing was provided by subjects who could discriminate between the several retroflex or the several dental exemplars. These findings indicate that adult listeners can discriminate between tokens on the basis of phonetic and acoustic information if the task requires it but that the most readily available strategy is to perceive speech stimuli in terms of native-language phonemic categories. This pattern of results has been replicated using a new contextual manipulation and using synthetic rather than naturally produced retroflex and dental tokens (Morosan and Werker, in preparation). The synthetic tokens were constructed by varying the starting frequency of the second and third formants in equal steps, thus ensuring that the acoustic variability within categories was equivalent to that between categories (for stimulus descriptions, see Werker and Lalonde 1988). The contextual manipulation in this study involved varying the kinds of pairings used in the stimulus set rather than manipulating ISI or the number of trials as we had done in our previous work. There were three contextual conditions in this study, with ten subjects in each condition. In the phonemic contextual condition, there were four kinds of pairings: phonemically different (bilabial/dental), phonetically different (retroflex/dental), acoustically different (two different bilabial, dental, or retroflex stimuli), and physically identical (the same token paired with itself). There were equal numbers of one- and two-step pairings among the phonemically different, phonetically different, and acoustically different trials.
In the phonetic contextual condition (phonemic in other languages but not in the listener's language), the pairings that are phonemic to English listeners were eliminated. In the acoustic contextual condition, only the within-phonetic-category (acoustically different) and physically identical pairings were presented. All subjects received two blocks of ninety-six trials each. Responses to phonemically, phonetically, and acoustically different pairing types were converted to A' scores using performance on the physically identical pairings to estimate the false-alarm rate. As expected, subjects performed nearly perfectly on the phonemically different pairings in the first contextual condition. Of more theoretical interest is the relative performance on phonetically different and acoustically different pairings across the contextual manipulations. If the contextual manipulation of pairing type affects speech perception, it could have an effect in at least three different ways. One possibility would be that subjects attend primarily to the largest acoustic difference present within a contrast type and ignore other, smaller acoustic differences: subjects would attend to the two-step pairings in each condition and ignore the one-step and physically identical pairings. According to this prediction, the proportion of "different" responses to phonetically different and acoustically different pairings would not change across the three testing conditions; it would simply always reflect a bias in favor of larger acoustic differences irrespective of phonetic status. There was no consistent support for this prediction. A second possibility is that subjects attend to the most easily accessible linguistic difference. According to this prediction, subjects would show above-chance discrimination of only phonemically different pairings in the first contextual condition, above-chance discrimination of only phonetically different pairings in the second contextual condition, and above-chance performance on acoustically different pairings only in the third contextual condition, when neither phonemically nor phonetically different pairings were present. The data pattern did not fit this prediction. The third possibility is that the presence of phonemically different pairings would enlist the linguistic mode. In this case, the perception of phonetically different pairings would be facilitated in the first contextual condition relative to the other two conditions due to the presence of pairings that have functional phonological status in English. In the subsequent two contextual conditions, with no pairings present that have functional phonological status in English, the English listeners would process both the phonetically different and acoustically different pairings according to a nonlinguistic, acoustic strategy. As shown in figure 4.1, the results were consistent with this prediction. These results support the hypothesis that the presence of phonemically different pairings engages a linguistic mode of processing, thus facilitating sensitivity to pairs of stimuli that straddle linguistically relevant phonetic boundaries.
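(The exact A' computation is not given in the text; A' here is presumably the standard nonparametric sensitivity index often attributed to Grier (1971). On that formulation, with H the hit rate, that is, the proportion of "different" responses to genuinely different pairings, and F the false-alarm rate, estimated from "different" responses to the physically identical pairings, and assuming H ≥ F,

A' = 1/2 + [(H - F)(1 + H - F)] / [4H(1 - F)].

Chance performance, H = F, yields A' = .5, and perfect performance yields A' = 1. For a purely hypothetical illustration, H = .90 and F = .10 give A' = .5 + (.80 × 1.80)/(4 × .90 × .90), or approximately .94.)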
Figure 4.1 Average A' scores to the acoustically different and phonetically different pairings as a function of the number and kind of pairings in the contextual condition.

These results replicate the previous finding that adults can use at least three different processing strategies (phonemic, phonetic, and acoustic), depending upon testing conditions (Werker and Logan 1985; Mann 1986). They also clarify, however, that the phonemic mode has the most privileged status. In summary, I have characterized the second period of research in cross-language speech perception as reflecting a growing realization that developmental changes do not result in permanent loss and that careful attention to the testing situation is necessary to understand disparate results.

Theory-Guided Research in Cross-Language Speech Perception

More recently, much of the research in cross-language speech perception has moved beyond simple demonstrations of whether subjects can or cannot discriminate particular nonnative contrasts to an attempt to test, within a theoretical framework, specific predictions, such as ease of discriminability and ease of training.
For example, several researchers have attempted to make theoretically based predictions specifying which nonnative contrasts will be easy or difficult to discriminate (Best, McRoberts, and Sithole 1988; Burnham 1986; Strange 1986). Burnham (1986) suggested that there might be both fragile and robust nonnative distinctions. Fragile refers to phonetic contrasts that are rare across the world's languages and, of particular importance, are acoustically quite similar. Burnham hypothesized these types of contrasts to be most vulnerable to early loss, around 10-12 months of age, in nonnative listeners. Robust refers to contrasts that are widely distributed across the world's languages and are acoustically less similar. Burnham hypothesized that nonnative listeners would not show a measurable decline in their performance on these contrasts until around 4-5 years of age. Because Burnham viewed acoustic salience as the most important dimension of fragile contrasts, he reasoned that early loss of fragile contrasts would result from sensory tuning (e.g., Aslin and Pisoni 1980; Gottlieb 1981). He posited a different mechanism for later loss of robust contrasts. Specifically, he argued that, with the advent of metaphonological abilities, sensitivity to even robust contrasts would decline because the distinctions have no functional value in the individual's working phonological system. Although the experimental data that Burnham presented in support of late loss for robust contrasts were less than convincing, the idea that some types of contrasts may simply be less vulnerable to loss at any age than others was immediately seized upon by the research community as an important one.3 Also of interest is his suggestion that more than one explanation for developmental changes in cross-language speech perception might be required. Catherine Best and her colleagues have taken the strong stand that phonological status alone should predict whether or not a contrast is discriminable to a nonnative listener. Because this research is covered in depth in the chapter by Best (this volume), it will only be touched on here. Basically, Best, McRoberts, and Sithole have proposed that there are at least four kinds of nonnative contrasts in terms of phonological status: (1) assimilable, (2) nonassimilable, (3) category goodness, and (4) two category. Assimilable nonnative contrasts are those in which each member of the contrast can be assimilated to an intermediate phone in the native language. These kinds of nonnative contrasts should be the most difficult
by basic auditory processes throughout the life, and as such, these nonnative contrasts are predicted to be the most easily discriminable. Category goodness refers to a nonnative contrast whose members can each be assimilated to an intermediate phoneme in the native language, as in assimilable, but one which will stand out as clearly a better instance of that category than the other. Two category refers to a nonnative contrast that consists of two nonnative phones, each of which is assimilable to a contrasting phonemic category in the native language. The third type is predicted to be intermediate in difficulty to purely assimilable and two-category nonnative phones; the last type is predicted to be easiest. Research to date is consistent with these predictions. In a series of studies, Best and colleagues have shown that Zulu click contrasts that are not at all assimilable to English are easily discriminated by English-speaking subjects of all ages, including 12- to 14-month-old English-learning infants (Best, McRoberts, and Sithole 1988). In subsequent work, Best and colleagues have tested both adults and infants of different ages as to their ability to discriminate other non-English contrasts. In the case of adults, the relative difficulty of discrimination can be fairly well predicted by the phonological status of the contrasts (Best 1989). However, the results with respect to infants are a bit more confusing (Best, this volume; Best and McRoberts 1989; Best et al. 1990). Infants of 6-8 months of age clearly perform better than infants of 10-12 months of age on all but the nonassimilable contrasts, but beyond that, the pattern of performance does not follow that predicted by Best's phonological assimilation model, suggesting that, in the older infants, perception is not yet organized by the same phonological constraints as it is in adults. In recent work, Polka (1991, 1992) has highlighted at least three independent factors that need to be considered when making predictions concerning the discriminability of nonnative contrasts among adults. These are functional phonetic status (phonemic contrast), substantive phonetic status (phonetic variation), and acoustic differences (the absolute amount of measurable acoustic differences between members of a nonnative contrast irrespective of phonetic status) (see also Best, this volume). She has argued that all three of these factors need to be considered in assessing the discriminability of a nonnative contrast for subjects of any age. page_108 Page 109 In one study, English- and Farsi-speaking adults were compared as to their ability to discriminate the Nthlakampx glottalized velar/uvular contrast used in our previous work (Polka 1992). Glottalized stops do not have substantive phonetic status (they do not occur) in either English or Farsi. However, the Farsi language does include a velar/uvular functional phonetic (phonemic) contrast between velar and uvular for nonglottalized stops. If perception of nonnative contrasts is predicted by their match to the system of phonemic possibilities in the language irrespective of their phonetic substantiation, then Farsi speakers should find this contrast easier than English speakers. On the other hand, if the phonemic contrast has to be supported in the identical phonetic environment in a language, then Farsi speakers will be no better than English speakers. Subjects were tested in an AX procedure with a long (1500 msec) ISI. 
Although there were no significant differences between the English and Farsi speakers in overall performance, there were substantial individual differences. The Farsi speakers who could hear the Nthlakampx stimuli as peculiar Farsi sounds did better on the glottalized velar/uvular contrast than the Farsi speakers who did not recognize the stimuli as similar to Farsi. Polka and colleagues have extended this line of investigation by testing English adults on their ability to discriminate the Hindi retroflex/dental contrast as instantiated in four categories of voicing. This contrast does not have functional phonetic status in English. In this particular case, both substantive phonetic characteristics and a metric of acoustic discriminability were found to be important, providing at least some support for the notion that each contributes to discriminability among adults (Polka 1991, 1992). Flege's work on speech perception and production among second-language learners constitutes another example of theoretically motivated research in this area. His goal has been to identify the extent to which nonnative contrasts share acoustic and/or articulatory cues with native distinctions and whether such overlap predicts ease of acquisition of productive and perceptual abilities (for a review, see Flege 1992). In this endeavor, Flege tests specific theoretical predictions about the kinds of errors second-language learners will make in producing and perceiving L2 (second-language) phones. He has shown that, when first acquiring a new language, subjects more readily attempt to pronounce phones that are similar to those used in their native language, although they make mistakes and pronounce the new phone as if it were identical to the similar phone in the native language. Adult
second-language learners
will avoid even attempting unfamiliar-sounding phones in the early stages of second-language acquisition (Flege 1987; see also Wode 1977, 1992). However, at a subsequent point in the acquisition process, they continue to mispronounce the similar phones, presumably because they continue to assimilate them to native categories, but they become better at correctly pronouncing the dissimilar phones. The data are still somewhat ambiguous as to whether the ultimate ability to pronounce dissimilar phones accurately stems from the establishment of a new underlying phonetic category or from the use of the same underlying phonetic representation with the application of different language-specific realization rules (Flege 1992). Of interest, Flege has clear data suggesting that young L2 learners can set up new underlying representations for either similar or dissimilar L2 phones. His research is now directed at resolving this ambiguity with respect to adults and at extending the model systematically to perception studies. His work is characterized by a systematicity that should ultimately allow us to determine the extent to which notions of similarity and assimilability can be most usefully understood with reference to acoustic, articulatory, or phonological properties. We recently outlined several different approaches that can be used to explain experiential influences on cross-language speech perception (Werker 1991; Werker and Pegg 1992). These included perceptual tuning, cognitive mediation, phonological processing, modular recalibration, self-organizing systems, and articulatory mediation.4 Note that most of these postulated mechanisms involve processes other than sensory loss. Our current research is designed to evaluate the relative utility of these various kinds of explanations and ultimately to determine whether a single process or a combination of several different processes is required to understand age- and experience-related influences on cross-language speech perception (Lalonde and Werker 1990, under review; Werker and Pegg 1992). To date, we have been concentrating our efforts on testing the viability of the cognitive mediation and phonological processing alternatives. We have recent data suggesting that the ability to restructure perceptual categories for visual information on the basis of correlated attributes (Younger and Cohen 1983) emerges in tandem with the developmental reorganization in cross-language speech perception (Lalonde and Werker 1990, under review), raising the possibility that both rely on the same underlying cognitive prerequisites. This kind of data may be consistent with the attentional explanations for developmental changes in speech perception postulated by Jusczyk (1992, this volume). However, we are not entirely convinced that cognitive mediation necessarily implies that speech perception is accomplished by general-purpose cognitive machinery. Thus, we have also continued our investigation of the relationship between developmental changes in cross-language speech perception and the development of other specific linguistic abilities. Our research testing the viability of the phonological processing alternative was originally motivated by the three-factor model of adult speech perception we proposed (Werker and Logan 1985).
We reasoned that if adults can use phonemic, phonetic, or acoustic processes in perceiving speech, then it should be possible to trace the developmental emergence of these three factors, and that the emergence of the phonemic factor should signal the onset of phonological processing. Toward this end, we conducted two sets of experiments, which are reviewed below. In the first set of experiments, Lalonde and I replicated and extended the Werker and Tees (1984a) finding of a developmental reorganization between 6 and 12 months of age, but we substituted a synthetically produced /ba/-/da/-/Da/ continuum for the natural stimuli used in our previous work. The synthetic stimuli were necessary to control for the confound between physical similarity and phonetic status inherent in natural stimuli. English-learning infants aged 6-8 months were compared to English-learning infants aged 10-12 months on their ability to discriminate six stimuli from either side of three locations along this continuum. The first location was labeled Common because it contrasted three bilabial /ba/ with three dental /da/ stimuli, a phonetic contrast common to both English- and Hindi-speaking adults. The second location was labeled Hindi-only because it contrasted three dental /da/ with three retroflex /Da/ stimuli as judged by Hindi-speaking adults (see experiment 1, Werker and Lalonde 1988). The third location was labeled Neither, as it required
subjects to treat a dental and two retroflex stimuli as equivalent to one another and as different from three additional retroflex stimuli. Thus, infants were asked to discriminate the stimuli according to a location that has no phonetic relevance. The results are shown in figure 4.2.

Figure 4.2 Percentage of correct responses by group for each kind of pairing.

As can be seen, both groups of infants were able to discriminate the Common contrast, only the 6- to 8-month-old infants could discriminate the Hindi-only contrast, and infants in both age groups were unable to discriminate the Neither contrast. In terms of the three factors (phonemic, phonetic, and acoustic) proposed in our previous work, we interpreted these results as revealing a phonetic, or at least phonetically relevant, factor in the performance of the infants
aged 6-8 months, since they were able to discriminate the non-English Hindi-only contrast, and a phonemic factor as evident in the sensitivity to only the Common contrast among the infants aged 11-13 months. There was no evidence for an acoustic factor in this study. On the basis of previous work by Aslin and colleagues (Aslin et al. 1981), we suspect such evidence would be apparent if infants were tested in a more sensitive, staircase procedure. In other words, it seems logical to expect that infants will also have general acoustic-processing sensitivities that might be evident under certain experimental testing conditions. In fact, it is possible that infants aged 11-13 months might even show continued sensitivity to the non-English Hindi-only contrast if tested under adequately sensitive conditions. Further research is needed to test both of the above predictions. Nevertheless, on the basis of the results from the Werker and Lalonde (1988) study, there is solid evidence for performance in accord with both phonetic and phonemic factors at different points in infancy. The phonetic factor is evident in the performance of the infants aged 6-8 months, and the phonemic factor, in the performance of the infants aged 10-12 months.
Because we were attempting to understand the infant data in terms of the three-factor model outlined with adults and had consequently labeled the performance of 10- to 12-month-old infants as phonemic, we postulated that infants of 10-12 months of age should have the beginnings of phonemic oppositions in the representation of lexical items in their receptive vocabularies. We thus began a series of experiments designed to test this possibility (for details, see Werker and Baldwin 1991; Werker and Pegg 1992). However, two years of experimentation failed to reveal any convincing evidence that infants represent words in such fine detail at this young age. In fact, the first age at which we have replicable evidence of sufficient phonetic detail to enable minimal-pair contrasts in the receptive lexicon is 19 months (Werker and Baldwin 1991). Thus, it is probably inaccurate to refer to the reorganization at 10-12 months of age as involving phonemic processing, since "phonemic" implies the ability to use phonetic detail to contrast meaning. Therefore, we now feel it is probably more accurate to characterize the performance of the infants aged 10-12 months as involving language-specific phonetic sensitivities. On the basis of this theory-motivated research, we have now modified the previously proposed three-factor model of speech perception (Werker and Logan 1985) and replaced it with a four-factor model: acoustic, broad-based phonetic, language-specific phonetic, and phonemic (for a more complete description, see Werker and Pegg 1992). As elaborated in Werker and Pegg, we are convinced that there is evidence of at least two factors in infancy. One is the language-specific phonetic perception seen by 10-12 months of age, and the other is the broad-based phonetic sensitivity of the younger infant. To date, there is no clear evidence showing whether there is an independent acoustic factor in infant perception, but as mentioned above, there is reason to believe such evidence might be found. We feel the previous work by Werker and Logan (1985) and by Morosan and Werker (in preparation) provides clear evidence for at least three factors in adult speech perception: acoustic, broad-based phonetic, and phonemic. To date, however, there are no data that might differentiate language-specific phonetic processing from phonemic processing among adults. Finally, it is interesting to note that the study of cross-language speech perception has now extended far beyond the study of phonetic and phonological perception to investigations of the perception of global prosody (Mehler et al. 1988), phrasal and clausal structure (Jusczyk 1989), and phonotactic rules (Jusczyk, Friederici, and Wessels, in press). Much of this work is motivated by a realization that the understanding of phonetic perception is intimately tied up with an understanding of how words and segments are extracted from ongoing speech (e.g., Jusczyk, this volume; Mehler, Dupoux, and Segui 1990; Pisoni and Luce 1987).

Conclusion

In this chapter, I have attempted to review our work in cross-language speech perception within a historical perspective. I hope it is apparent from this selective review that cross-language speech perception represents a dynamic research area in which the ideas have grown and changed over the years. In an attempt to illustrate just how much our thinking has progressed in this research area, three periods in cross-language speech perception research have been identified.
The first period was characterized by the view that language experience is necessary to maintain the ability to discriminate nonnative contrasts and that, without such experience, the ability to discriminate phonetic contrasts will be lost. This was, of course, an overly simplistic view: even in his original statements, Gottlieb (1976) pointed out that maintenance at the behavioral level may not be the same thing as maintenance at the neuronal level (see Walley, Pisoni, and Aslin 1981; see also Aslin, Pisoni, and Jusczyk 1983 for an elaboration of how attunement theory can allow for either sensory or attentional mechanisms). In the second period, it became apparent that there are substantial differences between nonnative contrasts in their discriminability and that subjects can discriminate even difficult contrasts if given adequate training or if tested in a sensitive enough procedure. This led to a realization that sensory loss was not an adequate explanation and that, at the very least, developmental changes in cross-language speech perception should be characterized as involving a reorganization in perceptual biases rather than a loss in absolute discriminatory abilities (Werker and Tees 1984a). Current work in cross-language speech perception is increasingly theory motivated (see MacKain 1988). As noted above, researchers are now designing experiments to test specific predictions generated from different
theoretical perspectives. This development reflects an increasing sophistication in the field and will, we can hope, lead to a more adequate understanding of developmental changes in cross-language speech perception.

Notes

1. Judgments of difficulty were arrived at in several steps. First, I listened to the syllables and selected several that I found difficult to discriminate. I then played those to a group of faculty members in linguistics at UBC and selected the contrasts that everyone but the trained phoneticians was unable to discriminate. I then digitized and analyzed those several syllables at Haskins Laboratories and played them to colleagues and students at Haskins. The final set of syllables selected was judged to be the most difficult by those listeners.

2. It is of interest to note that English-speaking adults who had early exposure to Hindi during the first couple of years of their life but no subsequent systematic exposure did significantly better than the native English speakers on this contrast. Also, all effects of training were permanent with this group.

3. Notice that this whole area of research moves beyond that investigated in our early work. We did not attempt to identify what kinds of contrasts might be difficult or easy; we simply tried to find difficult nonnative contrasts in order to test our hypotheses.

4. For recent research consistent with an articulatory mediation hypothesis, see de Boysson-Bardies et al. 1992.

References

Abramson, A. S. and Lisker, L. (1970). Discriminability along the voicing continuum: Cross-language tests. In Proceedings of the Sixth International Congress of Phonetic Sciences, Prague, 1967 (pp. 569-573). Prague: Academia.

Aslin, R. N. and Pisoni, D. B. (1980). Some developmental processes in speech perception. In G. H. Yeni-Komshian, J. F. Kavanagh, and C. A. Ferguson (eds.), Child phonology, vol. 2: Perception. New York: Academic Press.

Aslin, R. N., Pisoni, D. B., Hennessy, B. L., and Perey, A. J. (1981). Discrimination of voice onset time by human infants: New findings and implications for the effect of early experience. Child Development, 52, 1135-1145.

Aslin, R. N., Pisoni, D. B., and Jusczyk, P. W. (1983). Auditory development and speech perception in infancy. In M. M. Haith and J. J. Campos (eds.), Handbook of child psychology, vol. 2: Infancy and developmental psychobiology (pp. 573-687). New York: Wiley.

Best, C. T. (1989). Phonologic and phonetic factors in the influence of the language environment on speech perception. Paper read at the International Conference on Event Perception and Action, Miami University, Miami, Ohio, July 1989.

Best, C. T. and McRoberts, G. W. (1989). Phonological influences in infants' discrimination of two nonnative speech contrasts. Paper read at the Society for Research in Child Development, Kansas City, Mo., April 1989.

Best, C. T., McRoberts, G. W., and Sithole, N. N. (1988). The phonological basis of perceptual loss for nonnative contrasts: Maintenance of discrimination among Zulu clicks by English-speaking adults and infants. Journal of Experimental Psychology: Human Perception and Performance, 14, 345-360.

Best, C. T., McRoberts, G. W., Goodell, E., Womder, J. S., Insabella, G., Klatt, L., Luke, S., and Silver, J. (1990). Infant and adult perception of nonnative speech contrasts differing in relation to the listener's native phonology. Paper presented at the International Conference on Infant Studies, Montreal, Canada, April 19-22.

Burnham, D. K. (1986). Developmental loss of speech perception: Exposure to and experience with a first language. Applied Linguistics, 7, 201-240.
Butterfield, E. and Cairns, G. (1974). Whether infants perceive linguistically is uncertain, and if they did, its practical importance would be equivocal. In R. Schiefelbusch and L. Lloyd (eds.), Language perspectives: Acquisition, retardation, and intervention. Baltimore: University Park Press.
Carney, A. E., Widin, G. P., and Viemeister, N. F. (1977). Noncategorical perception of stop consonants differing in VOT. Journal of the Acoustical Society of America, 62, 961-970.

Carter, D. B. and Patterson, C. J. (1982). Sex-roles as social conventions: The development of children's conception of sex-role stereotypes. Developmental Psychology, 18, 812-824.

de Boysson-Bardies, B., Vihman, M. M., Roug-Hellichius, L., Durand, C., Landberg, I., and Arao, F. (1992). Material evidence of infant selection from the target language: A cross-linguistic phonetic study. In C. A. Ferguson, L. Menn, and C. Stoel-Gammon (eds.), Phonological development: Models, research, and implications (pp. 369-392). Parkton, Md.: York Press.

Eilers, R. E. and Oller, D. K. (1988). Precursors to speech: What is innate and what is acquired? In R. Vasta (ed.), Annals of child development, vol. 5 (pp. 1-32). London: JAI Press.

Eilers, R. E., Wilson, W. R., and Moore, J. M. (1979). Speech perception in the language innocent and the language wise: The perception of VOT. Journal of Child Language, 6, 1-18.

Eimas, P. D. (1975). Developmental studies in speech perception. In L. B. Cohen and P. Salapatek (eds.), Infant perception: From sensation to perception, vol. 2. New York: Academic Press.

Eimas, P. D., Siqueland, E. R., Jusczyk, P., and Vigorito, J. (1971). Speech perception in infants. Science, 171, 303-306.

Flege, J. E. (1987). The production of "new" and "similar" phones in a foreign language: Evidence for the effect of equivalence classification. Journal of Phonetics, 15, 47-65.

Flege, J. E. (1992). Speech learning in a second language. In C. A. Ferguson, L. Menn, and C. Stoel-Gammon (eds.), Phonological development: Models, research, and implications. Parkton, Md.: York Press.

Goto, H. (1971). Auditory perception by normal Japanese adults of the sounds "L" and "R". Neuropsychologia, 9, 317-323.

Gottlieb, G. (1976). The roles of experience in the development of behavior and the nervous system. In G. Gottlieb (ed.), Studies on the development of behavior and the nervous system, vol. 3. New York: Academic Press.

Gottlieb, G. (1981). Roles of early experience in species-specific perceptual development. In R. N. Aslin, J. R. Alberts, and M. R. Petersen (eds.), Development of perception, vol. 1. New York: Academic Press.

Jamieson, D. G. and Morosan, D. E. (1989). Training new nonnative speech contrasts: A comparison of the prototype and perceptual fading techniques. Canadian Journal of Psychology, 43, 88-96.

Jusczyk, P. W. (1985). The high amplitude sucking technique as a methodological tool in speech perception research. In G. Gottlieb and N. A. Krasnegor (eds.), Measurement of audition and vision in the first year of postnatal life: A methodological overview (pp. 195-221). Norwood, N.J.: Ablex.

Jusczyk, P. W. (1989). Perception of cues to clausal units in native and nonnative languages. Paper presented at the Society for Research in Child Development, Kansas City, Mo., April 1989.

Jusczyk, P. W., Friederici, A. D., and Wessels, J. (in press). Infants' sensitivity to the sound patterns of native language words. Journal of Memory and Language.

Kuhl, P. K. (1980). Perceptual constancy for speech sound categories in early infancy. In G. Yeni-Komshian, J. F. Kavanagh, and C. A. Ferguson (eds.), Child phonology, vol. 2: Perception (pp. 41-66). New York: Academic Press.
Kuhl, P. K. (1985). Methods in the study of infant speech perception. In G. Gottlieb and N. Krasnegor (eds.), Measurement of audition and vision in the first year of postnatal life: A methodological overview (pp. 223-251). Norwood, N.J.: Ablex.
Ladefoged, P. (1982). A course in phonetics. New York: Harcourt Brace Jovanovich.

Ladefoged, P. and Bhaskararao, P. (1983). Non-quantal aspects of consonant production: A study of retroflex consonants. Journal of Phonetics, 11, 291-302.

Lalonde, C. E. and Werker, J. F. (1990). Infants' performance on an A-not-B task predicts cross-language speech perception and visual categorization skills. Poster presented at the International Conference on Infant Studies, Montreal, Quebec, April 1990.

Lalonde, C. E. and Werker, J. F. (under review). Cognitive influences on cross-language speech perception in infancy.

Lasky, R. E., Syrdal-Lasky, A., and Klein, R. E. (1975). VOT discrimination by four to six and a half month old infants from Spanish environments. Journal of Experimental Child Psychology, 20, 215-225.

Lenneberg, E. H. (1967). Biological foundations of language. New York: Wiley.

Lisker, L. and Abramson, A. S. (1970). The voicing dimension: Some experiments in comparative phonetics. In Proceedings of the Sixth International Congress of Phonetic Sciences, Prague, 1967 (pp. 563-567). Prague: Academia.

Logan, J. S., Lively, S. E., and Pisoni, D. B. (1991). Training Japanese listeners to identify /r/ and /l/: A first report. Journal of the Acoustical Society of America, 89, 874-886.

MacKain, K. S. (1988). Filling the gap between speech and language. In M. D. Smith and J. Locke (eds.), The emergent lexicon: The child's development of a linguistic vocabulary. New York: Academic Press.

MacKain, K. S., Best, C. T., and Strange, W. (1981). Categorical perception of English /r/ and /l/ by Japanese bilinguals. Applied Psycholinguistics, 2, 368-390.

Mann, V. A. (1986). Distinguishing universal and language-dependent levels of speech perception: Evidence from Japanese listeners' perception of English "l" and "r." Cognition, 24, 169-196.

McClaskey, C. L., Pisoni, D. B., and Carrell, T. D. (1983). Transfer of training of a new linguistic contrast in voicing. Perception and Psychophysics, 34, 323-330.

Mehler, J., Dupoux, E., and Segui, J. (1990). Constraining models of lexical access: The onset of word recognition. In G. Altman (ed.), Cognitive models of speech processing. Cambridge, Mass.: MIT Press.

Mehler, J., Jusczyk, P. W., Lambertz, G., Halstead, N., Bertoncini, J., and Amiel-Tison, C. (1988). A precursor of language acquisition in young infants. Cognition, 29, 143-178.

Miyawaki, K., Strange, W., Verbrugge, R., Liberman, A. M., Jenkins, J. J., and Fujimura, O. (1975). An effect of linguistic experience: The discrimination of [r] and [l] by native speakers of Japanese and English. Perception and Psychophysics, 18, 331-340.

Morosan, D. and Werker, J. F. (in preparation). Phonemic, phonetic, and acoustic factors in adult speech perception: Contextual influences.

Pisoni, D. B. and Luce, P. (1987). Acoustic-phonetic representations in word recognition. Cognition, 25, 21-52.

Pisoni, D. B., Aslin, R. N., Perey, A. J., and Hennessy, B. L. (1982). Some effects of laboratory training on identification and discrimination of voicing contrasts in stop consonants. Journal of Experimental Psychology: Human Perception and Performance, 8, 297-314.

Polka, L. (1991). Cross-language speech perception in adults: Phonemic, phonetic, and acoustic contributions. Journal of the Acoustical Society of America, 89, 2961-2977.
Polka, L. (1992). Characterizing the influence of native language experience on adult speech perception. Perception and Psychophysics, 52, 37-52.

Repp, B. H. (1984). Categorical perception: Issues, methods, findings. In N. Lass (ed.), Speech and language: Advances in basic research and practice, vol. 10. New York: Academic Press.

Sheldon, A. and Strange, W. (1982). The acquisition of /r/ and /l/ by Japanese learners of English: Evidence that speech production can precede speech perception. Applied Psycholinguistics, 3, 243-261.

Singh, S. and Black, J. W. (1966). Study of twenty-six intervocalic consonants as spoken and recognized by four language groups. Journal of the Acoustical Society of America, 39, 371-387.

Snow, C. E. and Hoefnagel-Hohle, M. (1978). The critical period for language acquisition: Evidence from second language learning. Child Development, 49, 1114-1128.

Stevens, K. N. and Blumstein, S. E. (1978). Quantal aspects of consonant production and perception: A study of retroflex stop consonants. Journal of Phonetics, 3, 215-233.

Strange, W. (1972). The effects of training on the perception of synthetic speech sounds: Voice onset time. Unpublished doctoral dissertation, University of Minnesota.

Strange, W. (1986). Speech input and the development of speech perception. In J. F. Kavanagh (ed.), Otitis media and child development. Parkton, Md.: York Press.

Strange, W. and Dittman, S. (1984). Effects of discrimination training on the perception of /r-l/ by Japanese adults learning English. Perception and Psychophysics, 36, 131-145.

Strange, W. and Jenkins, J. (1978). Role of linguistic experience in the perception of speech. In R. D. Walk and H. L. Pick (eds.), Perception and experience. New York: Plenum Press.

Streeter, L. A. (1976). Language perception of two-month-old infants shows effects of both innate mechanisms and experience. Nature, 259, 39-41.

Takata, Y. and Nabelek, A. K. (1990). English consonant recognition in noise and in reverberation by Japanese and American listeners. Journal of the Acoustical Society of America, 88(2), 663-666.

Tees, R. C. and Werker, J. F. (1984). Perceptual flexibility: Maintenance or recovery of the ability to discriminate nonnative speech sounds. Canadian Journal of Psychology, 38, 579-590.

Trehub, S. E. (1976). The discrimination of foreign speech contrasts by infants and adults. Child Development, 47, 466-472.

Walley, A. C., Pisoni, D. B., and Aslin, R. N. (1981). The role of early experience in the development of speech perception. In R. N. Aslin, J. R. Alberts, and M. R. Petersen (eds.), Development of perception: Psychobiological perspectives, vol. 1. New York: Academic Press.

Werker, J. F. (1989). Becoming a native listener. American Scientist, 77, 54-59.

Werker, J. F. (1991). The ontogeny of speech perception. In I. G. Mattingly and M. Studdert-Kennedy (eds.), Speech perception and modularity. Hillsdale, N.J.: Erlbaum.
Werker, J. F. and Baldwin, D. (1991). Speech perception and lexical comprehension. Poster presented at the Society for Research in Child Development, Seattle, Wash., April 1991.

Werker, J. F. and Lalonde, C. E. (1988). Cross-language speech perception: Initial capabilities and developmental change. Developmental Psychology, 24(5), 672-683.

Werker, J. F. and Logan, J. S. (1985). Cross-language evidence for three factors in speech perception. Perception and Psychophysics, 37, 35-44.

Werker, J. F. and Pegg, J. E. (1992). Speech perception and phonological acquisition. In C. A. Ferguson, L. Menn, and C. Stoel-Gammon (eds.), Phonological development: Models, research, and implications. Parkton, Md.: York Press.

Werker, J. F. and Tees, R. C. (1983). Developmental changes across childhood in the perception of non-native speech sounds. Canadian Journal of Psychology, 37, 278-286.

Werker, J. F. and Tees, R. C. (1984a). Cross-language speech perception: Evidence for perceptual reorganization during the first year of life. Infant Behavior and Development, 7, 49-63.

Werker, J. F. and Tees, R. C. (1984b). Phonemic and phonetic factors in adult cross-language speech perception. Journal of the Acoustical Society of America, 75, 1866-1878.

Werker, J. F., Gilbert, J. H. V., Humphrey, K., and Tees, R. C. (1981). Developmental aspects of cross-language speech perception. Child Development, 52, 349-353.

Wode, H. (1977). The L2 acquisition of /r/. Phonetica, 34, 200-217.

Wode, H. (1992). Categorical perception and segmental coding in the ontogeny of sound systems: A universal approach. In C. A. Ferguson, L. Menn, and C. Stoel-Gammon (eds.), Phonological development: Models, research, and implications. Parkton, Md.: York Press.

Younger, B. A. and Cohen, L. (1983). Infant perception of correlations among attributes. Child Development, 54, 858-867.
Chapter 5
Perceptual Learning of Nonnative Speech Contrasts: Implications for Theories of Speech Perception

David B. Pisoni, Scott E. Lively, and John S. Logan

For many years, there has been a consensus among investigators working in the field of speech perception that the linguistic environment exerts a very profound and often quite permanent effect on an individual's ability to identify and discriminate speech sounds. The first report of categorical perception by Liberman and his colleagues at Haskins Laboratories (Liberman et al. 1957) and the subsequent cross-language studies of voicing by Lisker and Abramson (1964, 1970) provided very convincing evidence for the importance of perceptual learning in speech perception. By perceptual learning we mean the process by which a listener comes to identify and discriminate stimuli differently through practice or experience (Aslin and Pisoni 1980). These initial studies, and many others since, have demonstrated that the effects of perceptual learning are long lasting and often produce seemingly irreversible changes in the speech perception abilities of adults. Indeed, most attempts to selectively modify speech perception abilities using short-term laboratory training techniques have been generally unsuccessful (Strange and Dittman 1984; Strange and Jenkins 1978). The failure of these earlier training studies to produce robust changes in speech perception was interpreted by some researchers through the 1970s and early 1980s as support for the proposal that, during development, the underlying neural mechanisms used in speech perception become very finely tuned to only the distinctive sound contrasts used in the native linguistic environment. Furthermore, these mechanisms cannot be selectively modified, or retuned, very easily in mature adults (Eimas 1975, 1978; Strange
and Jenkins 1978). This chapter is concerned with several general issues surrounding perceptual learning in speech perception. While our major interest centers on the learning of nonnative speech contrasts by mature adults, much of what we have to say is also relevant to other issues dealing with current theoretical accounts of speech perception and perceptual development. Central to our discussion is a concern for the nature of the perceptual changes that take place when the sound system of a language is acquired during development. In particular, we are interested in what happens to a listener's perceptual abilities when he or she acquires a native language. What happens to a listener's ability to identify and discriminate speech contrasts that are not present in the language-learning environment? Are the listener's perceptual abilities permanently lost because the neural mechanisms have atrophied due to lack of stimulation during development, or are they simply realigned and only temporarily modified due to changes in selective attention? Despite the existence of several recent studies in the published literature demonstrating that, under certain experimental conditions, listeners can be trained to perceive and discriminate very fine phonetic details, many researchers continue to maintain the view that the effects of linguistic experience on speech perception are difficult, if not impossible, to modify in a short period of time. The statements below should give the reader a sense of the pervasiveness of these views:

Thus, for adults learning a foreign language, modification of phonetic perception appears to be slow and effortful, and is characterized by considerable variability among individuals. (Strange and Dittman, 1984, 132)

These difficulties with nonnative speech contrasts may indicate that certain distinctions are extremely difficult for adults to learn, or even that adults cannot learn to make certain distinctions in a linguistically meaningful manner. (Jamieson and Morosan, 1986, 206)

The issue of loss versus relearning is important in theorizing about speech perception for a number of reasons. First, the question frames the study of speech perception within a developmental perspective. Rather than studying the mature linguistic system of the adult listener, the loss versus relearning issue stresses the importance of focusing on the beginning language user and the time course of learning. Second, studying speech perception within a developmental framework has the additional advantage of allowing us to ask questions that address issues of neural flexibility or inflexibility versus attentional constraints. Finally, studying the issue of loss versus relearning is important because it constrains current models of speech perception. Understanding the permanence of loss or the difficulty of reacquisition restricts theorizing about the ultimate flexibility of the human perceptual system.

Role of Early Experience in Perceptual Development

Most current theories of speech perception are vague, making it difficult to generate specific testable experimental hypotheses (see Klatt 1988; Pisoni 1978; Pisoni and Luce 1987). A detailed examination of these theories reveals that none currently incorporates mechanisms to deal with developmental change or the effects of the linguistic environment on speech perception.
Almost all contemporary theories of speech perception are concerned with the mature adult listener, who is presumably in the end-stage of development. Jusczyk (1985, 1986) and Studdert-Kennedy (1986, 1987) provide notable exceptions to this general rule. The typical focus on adult listeners is an unfortunate state of affairs. Theories of speech perception should not only characterize the perceptual abilities of the mature listener, but should also provide some principled account of how these abilities developed and how they are selectively modified by the linguistic environment. The results described below make it very clear that current theories of speech perception will have to be modified to incorporate principles of developmental and attentional change to account for how mature listeners can acquire new nonnative linguistic contrasts. To place our work in a developmental framework, first we consider some possible interactions between genetic and experiential factors in perceptual development. These ideas were initially formulated by Aslin and Pisoni (1980) in an
attempt to deal with the ontogeny of infant speech perception. An examination of the literature on infant speech perception revealed a complex set of interactions among genetic and experiential factors during development. A simple dichotomy between nativist and empiricist views of development was inadequate to account for these interactions, a point that has also been made by researchers working on the development of the visual and auditory systems (Gottlieb 1981). To deal with these interactions, Aslin and Pisoni (1980) proposed a number of possible roles that early experience could play in the development of speech perception. These alternatives are shown in figure 5.1. First, according to a general universal theory, a perceptual ability may be present at birth but may require certain specific types of early experience to maintain the integrity of that ability. The absence or degradation of the requisite early experience can result in either a partial or complete loss of the perceptual ability, a loss that may be irreversible despite subsequent experience at a later point in development. Hubel and Wiesel's work (1965) on the development of the cat's visual system is one example of this point of view. Eimas and Miller's research on feature detectors for speech cues provides another example of the universal theory position (Eimas and Miller 1978).

Figure 5.1 Illustration of the major roles that early experience can play in modifying the perception of speech-sound contrasts. Three general classes of perceptual theories are shown here: universal theory, attunement theory, and perceptual learning theory. From Aslin and Pisoni 1980.

Second, according to an attunement theory position, a perceptual ability may be only partially developed at birth and may require specific types of early experience to facilitate its further development. The absence of early experience with these critical stimuli could result either in the absence of any further development or a loss of that ability as compared to its level at birth. Gottlieb's work with ducklings (1981) and their preferred calls provides some evidence for this position. Third, according to perceptual learning theory, a perceptual ability may be absent at birth, and its development may depend on a process of induction based on specific early experiences the organism has in the environment. The presence of a particular ability would depend, to a large extent, on the presence of a particular type of early experience. Thus, specific kinds of early experience are necessary for the subsequent development and maintenance of a particular preference or tendency. Finally, early experience may exert no role at all on the development of a perceptual ability. The ability may be either present or absent at birth,
and it may remain, decline, or improve in the absence of any specific type of early experience. The absence of experiential effects is difficult to identify and often leads to unwarranted conclusions, especially ones that assume that an induction process might be operative. For example, it has been common for researchers to argue that, if an ability is absent at birth, the ability must have been learned (see Eilers, Gavin, and Wilson 1979). In terms of the conceptual framework outlined by Aslin and Pisoni, this could be an instance of induction. However, it is also possible that the ability simply unfolded developmentally, according to a genetically specified maturational schedule: a schedule that required no particular type of early experience in the environment. This unfolding of an ability may be thought of as adhering to the general class of maturational theories of development. The complexity of these numerous alternatives (maintenance, facilitation, induction, and maturation) and their possible interactions suggests caution in drawing any strong conclusions about the developmental course of specific perceptual abilities. In the context of speech perception, we believe that several conclusions in the literature about the role of early experience may have been premature, especially given the recent findings described below (see Pisoni et al. 1982). In order to clarify the roles of early experience in the development of speech perception and to put these ideas into a broader theoretical context, we briefly consider four general classes of theories concerning perceptual development: (1) universal theory, (2) attunement theory, (3) perceptual learning theory, and (4) maturational theory. Universal theory assumes that newborn infants are capable of discriminating all the possible phonetic contrasts that may be used in any natural language. The theory claims that infants come equipped with broadly tuned perceptual and attentional mechanisms. Early experience serves to maintain the ability to discriminate phonetically relevant distinctions, that is, those distinctions actually used in the language-learning environment of the infant. Furthermore, the absence of exposure to contrasts that are not phonetically distinctive results in a selective loss of the ability to discriminate those contrasts. The mechanisms responsible for this loss of sensitivity may be neural, attentional, or both. Universal theory makes several predictions concerning the reacquisition of lost discriminative abilities in mature adults. For example, if a child is exposed to phonetic contrasts that are not phonologically distinctive in his or her native language due to allophonic variation, universal theory predicts the quick recovery of these contrasts due to the continuous operation of underlying perceptual and attentional mechanisms throughout development. A lack of exposure during development for some contrasts may produce a loss or attenuation in discriminative ability that cannot be overcome by later training or exposure. In either case, it becomes important to determine if the loss has a sensory-perceptual basis or if it is due primarily to changes in selective attention. Attunement theory assumes that, at birth, all infants are capable of discriminating at least some of the possible phonetic contrasts present in the world's languages but that the infant's discriminative capacities are incompletely developed, quite broadly tuned, or both.
According to this view, early experience functions to align and/or sharpen these partially developed discriminative abilities. Phonetically relevant contrasts in the language-learning environment become more finely tuned with experience, while phonetically irrelevant contrasts either remain broadly tuned or become attenuated in the absence of specific environmental stimulation. In contrast with the other two views, perceptual learning theory assumes that the ability to discriminate any particular phonetic contrast is highly dependent on specific early experience with that sound contrast in the language-learning environment. The rate of development could be very fast or very slow depending on the psychophysical discriminability of the distinguishing acoustic attributes relative to other phonetic contrasts, the relative importance of the contrast during early life, and the attentional state of the infant. According to this view, phonetically irrelevant contrasts would never be initially discriminated better than the phonetically relevant ones. Finally, maturational theory assumes that the ability to discriminate a particular phonetic contrast is independent of any specific early experience and that the ability simply unfolds according to a predetermined developmental schedule. According to this view, all possible phonetic contrasts would be discriminated equally well irrespective of the language-learning environment. However, the age at which specific phonetic contrasts are discriminated is dependent on the developmental level of the underlying sensory mechanisms. For example, if young infants did not show sensitivity to high frequencies until later in development, one would not initially expect them to discriminate phonetic contrasts that
were differentiated on the basis of high-frequency information. These four theories make specific predictions about the developmental course of speech perception and the underlying perceptual abilities of mature listeners. It is important to point out that probably no single theory will uniquely account for the development of all phonetic contrasts. Rather, some hybrid of the theories will probably provide the best overall description of the perception of specific classes of speech sounds. This view of parallel developmental processes appears to be well supported by current data (Best, McRoberts, and Sithole 1988; Tees and Werker 1984; Werker 1989; Werker and Lalonde 1988; Werker and Tees 1984). Considering the potentially complex interactions between genetic and experiential factors in perceptual development described above, an important long-term goal becomes the systematic investigation of the development of as many phonetic contrasts as possible. This may lead to a greater understanding of the underlying perceptual mechanisms involved in speech perception and the way in which they are modified by early experience (Best, McRoberts, and Sithole 1988; Werker 1989). In the sections below, we consider two phonetic contrasts that have occupied the attention of speech researchers. The first contrast is the voicing distinction in syllable-initial stop consonants. The second contrast is the distinction between /r/ and /l/. Both phonetic contrasts have played an important role in recent theorizing about the effects of early experience on speech perception. Furthermore, both contrasts have been used in laboratory studies that were designed to selectively modify the perceptual analysis of mature adult listeners. Because these two contrasts have quite different acoustic correlates and phonological properties in different languages, they are ideal candidates to consider in studies of perceptual learning.

Perception of Voicing Contrasts in Stop Consonants

Numerous studies employing synthetically produced speech stimuli have investigated the perception of voice onset time (VOT) in adults, infants, chinchillas, and nonhuman primates (Kuhl and Miller 1975; Lisker and Abramson 1970). These developmental and cross-species comparisons have attempted to study potential interactions between genetic predispositions and experiential factors in the development of speech perception. The results of these diverse studies have shown the combined influence of both factors. First, linguistic experience has been shown to have a substantial effect on speech perception, particularly in human adults exposed to different language-learning environments (Lisker and Abramson 1964). The data show that subjects identify and discriminate speech sounds with reference to the linguistic categories of their native language. Second, basic sensory and psychophysical constraints on auditory system function seem to affect perception of speech and nonspeech signals in similar ways by restricting the inventory of acoustic correlates that can be used as distinctive features (Stevens 1972, 1980). This inventory is then modified and selectively reorganized by speakers and hearers in the language-learning environment. The results of the earliest cross-language experiments confirmed that the native linguistic environment exerts a profound influence on the adult's ability to produce and perceive differences in voicing of initial stop consonants.
Lisker and Abramson (1964) examined voicing and aspiration differences among stops produced by native speakers of eleven diverse languages. They identified three primary modes of voicing: (1) a lead mode, in which voicing onset precedes the release from stop closure; (2) a short-lag mode, in which voicing onset is roughly simultaneous with the release of the stop closure; and (3) a long-lag mode, in which voicing onset occurs substantially after release. In addition to measurements of VOT in the production of stop contrasts, Lisker and Abramson (1970) and Abramson and Lisker (1970) also carried out several perceptual experiments using synthetically produced speech stimuli varying in VOT. The results of these perceptual experiments demonstrated that subjects from different linguistic backgrounds identified and discriminated speech stimuli with respect to the distinctive phonological categories of their language. Figure 5.2 shows the identification functions for native speakers of English, Thai, and Spanish for three series of synthetic stimuli that differ in VOT. The functions display perceptual boundaries at either one or two locations along the
VOT continuum, corresponding to the presence of two or three voicing categories.

Figure 5.2 Adult identification functions for synthetic labial, apical, and velar stop consonants that differ in voice-onset-time (VOT). The data were obtained from native speakers of English (N = 12), Thai (N = 8), and Spanish (N = 5). Adapted from Lisker and Abramson 1970.

The discrimination functions, which are not shown here, reveal discontinuities along the stimulus continuum, with peaks located at the crossover points separating perceptual categories in identification. The correspondence of heightened discrimination at the category boundaries and the relatively poor discrimination within perceptual categories demonstrates that subjects could discriminate between stimuli only as well as they could absolutely identify them. These findings demonstrate that perceptual categories are determined, in large part, by the linguistic experience of the listener. While these findings do indicate that listeners perceive speech stimuli with reference to the phonetic categories of the native language, they do not allow us to differentiate between sensory losses and selective-attention shifts in development. The subjects in these early perceptual experiments, as well as those used in more recent studies, had great difficulty in identifying and discriminating between stimuli that were not distinctive in their native language. The failure of adults to perceive nonnative distinctions in voicing has been interpreted as support for the view that early linguistic experience exerts a profound and lasting effect on an individual's ability to discriminate speech stimuli. Indeed, based on his work with young infants, Eimas (1978) has suggested that the neural mechanisms mediating VOT perception might atrophy or degenerate if stimulation is not forthcoming during an early period of language development. Eimas states that "the course of development of phonetic competence is one characterized by a loss of abilities over time if specific experience is not forthcoming" (p. 346). Thus, if phonetic differences are not used distinctively in the language-learning environment of an infant, sensitivity to the relevant acoustic attributes differentiating these speech sounds may be attenuated, and the child
may fail to maintain the specific mechanisms needed to discriminate between them (see also Werker 1989). Eimas (1978) has argued further that the lack of experience with particular phonetic contrasts in the local environment during language acquisition may have the effect of modifying the appropriate phonetic feature detectors by reducing their sensitivity to specific acoustic cues in the speech signal. Detectors that were originally designed to process certain phonetic distinctions may be captured or subsumed by other detectors after exposure to particular acoustic signals in the language-learning environment. These detectors might, therefore, assume the specificity for only those attributes present in the stimuli to which they have been exposed rather than retaining their original specificity. As a consequence, the poor discrimination observed for some phonetic contrasts might be due to the modification of low-level sensory mechanisms employed in the discrimination of these acoustic attributes. If this strong view of development is correct, it implies that mature adults would never be able to reacquire a phonetic contrast that was not present in their language-learning environment (see, however, Best, McRoberts, and Sithole 1988; Werker 1989). These conclusions concerning the role of linguistic experience in the development of speech perception have become widely accepted in the literature, despite the existence of several studies demonstrating that subjects can discriminate small differences between speech sounds that were identified as belonging to the same phonetic category (see Pisoni 1973, 1977; Pisoni and Lazarus 1974; Pisoni and Tash 1974; Streeter 1976a, 1976b). When the experimental conditions are modified to reduce uncertainty or when the subjects' attention is explicitly directed to acoustic, rather than phonetic, differences between stimuli, subjects can accurately discriminate very small changes in VOT (see also Carney, Widin, and Viemeister 1977). These findings undermine the general conclusion that subjects cannot discriminate between speech sounds unless they are used distinctively in their native language and suggest that the traditional loss view must be modified. A serious consideration of the traditional loss view raises the implication that listeners cannot be trained to recognize phonetic distinctions that are not contrastive in their native language. In an extensive review of the effects of linguistic experience on speech perception, Strange and Jenkins (1978) concluded that the use of laboratory training techniques with adult subjects was generally ineffective in promoting enhanced discrimination of phonetic contrasts that were not employed phonemically in the subjects' native language. Strange and Jenkins's review, as well as the earlier training studies by Strange (1972), led us to reexamine the performance of adults in identifying and discriminating VOT contrasts that were not phonemically distinctive in their native language (Pisoni et al. 1982). In particular, we wanted to know why previous attempts to use laboratory training procedures appeared to be so uniformly unsuccessful in producing changes in the perception of VOT. Given the previous work demonstrating that 5- and 6-month-old infants from English-speaking environments could discriminate both lead and lag contrasts from a VOT continuum (Aslin et al.
1981), we fully expected that native English-speaking adults would also be able to discriminate these VOT contrasts, unless there was a real sensory loss in their underlying perceptual abilities as maintained by the traditional loss view. Furthermore, we were also interested in determining precisely how much training would be required for adult English listeners to reacquire a nondistinctive perceptual category along the voicing dimension. In our first experiment on the perception of stops differing in VOT, two groups of naive subjects were required to identify a set of synthetic stimuli varying in VOT in two different conditions. In the first condition, subjects used only two response categories, corresponding to the phonemes /b/ and /p/. In the second condition, they were given three response alternatives, corresponding to [b], [p], and [ph]. The conditions were counterbalanced across both groups over a two-day period. The results are shown in figure 5.3. As expected, subjects showed very reliable and consistent two-category identification functions for the English voicing categories.

Figure 5.3 Average identification functions for two- and three-category labeling of synthetic speech stimuli that differ in voice-onset-time. From Pisoni et al. 1982.

More interestingly, however, both groups were able to reliably identify stimuli into a third noncontrastive perceptual category, which was based on VOT values in the voicing-lead region of the continuum. Only two of the twenty subjects tested failed to use three responses. Although there was some variability in the labeling data for individual subjects, there was also a surprising amount of consistency among most of the subjects, as shown in these group data. Another experiment was also carried out with two additional groups of subjects using the same stimuli and procedures. However, in addition to the two- and three-category identification tasks, subjects also carried out a discrimination task with the same stimuli. The average identification and ABX-discrimination functions from this experiment are shown in figure 5.4.

Figure 5.4 Average identification and ABX-discrimination functions for two-category (upper panel) and three-category (lower panel) labeling conditions. From Pisoni et al. 1982.

The two- and three-category identification functions shown in the left-hand panel of each figure are quite similar to those obtained in the first experiment. Although the average two-category data are consistent and representative of individual subjects, the average three-category data are less consistent and show greater variability in the minus region of the VOT continuum. Examination of the average ABX-discrimination functions, shown in the right-hand panels of figure 5.4, reveals the presence of two distinct peaks in discrimination, regardless of prior labeling experience. The larger peak occurs in the voicing-lag region of the continuum at roughly +20 msec, whereas a smaller peak can be observed in the voicing-lead region at roughly -20 msec. It should be emphasized that the subjects in the two-category labeling condition showed evidence of discriminating stimuli in the voicing-lead region of the VOT continuum, despite the fact that these stimuli were all identified as belonging to the same perceptual category. Such a finding is not surprising, given previous demonstrations of within-category discrimination (Pisoni and Lazarus 1974). Furthermore, it is important to note that no special efforts were made to control or direct the subjects' attention to the differences between stimuli in this region of the VOT continuum or to modify the ABX-discrimination task to improve subjects' sensitivity. Despite variability among individual subjects, the results of these two experiments indicate that a large majority of naive English subjects can identify and discriminate an additional voicing category without any special training or feedback. The differences in the voicing-lead region are apparently discriminable, and subjects can reliably identify these sounds if given the opportunity to do this with an additional response category. Given the strong conclusions by Strange and Jenkins (1978) about the difficulty of discriminating these differences in VOT, we were surprised with the results obtained using such simple experimental manipulations. To reduce intersubject variability and increase response consistency in perceptual categorization in a relatively short period of time, we carried out a third experiment. We used a discrimination-training procedure with immediate feedback after exposure to representative exemplars of the three voicing categories. The purpose of this experiment was to determine if subjects who received a brief period of training would show more robust perceptual data, that is, steeper slopes in identification and heightened peaks in ABX discrimination at both voicing boundaries. The training sequences were presented in a predictable order using only three stimuli, one representative token from each of the three voicing types (-70 msec, 0 msec, +70 msec). After the training phase was completed, subjects who met a predetermined performance criterion in identification accuracy were selected for subsequent testing in which both identification and ABX discrimination data were collected.
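Before turning to the results, it is worth making concrete the relation between labeling and discrimination that runs through these experiments. The short Python sketch below assumes an idealized logistic identification function along the VOT continuum and applies the classical label-based (Haskins-style) prediction for a two-category ABX task, in which predicted accuracy is 0.5 + 0.5(p1 - p2)^2, where p1 and p2 are the probabilities of labeling the two stimuli as voiceless. The boundary location and slope are illustrative values, not parameters fitted to our data.

import math

def p_voiceless(vot_ms, boundary=25.0, slope=0.25):
    # Idealized logistic identification function: probability of
    # labeling a stimulus with this VOT as voiceless. The boundary
    # (msec) and slope are illustrative assumptions.
    return 1.0 / (1.0 + math.exp(-slope * (vot_ms - boundary)))

def predicted_abx(p1, p2):
    # Label-based prediction for two-category ABX discrimination:
    # chance (0.5) when both stimuli are labeled identically, rising
    # with the squared difference in labeling probabilities.
    return 0.5 + 0.5 * (p1 - p2) ** 2

for vot in range(-70, 61, 10):          # VOT values in msec
    p1 = p_voiceless(vot)
    p2 = p_voiceless(vot + 20)          # pairs separated by 20 msec
    print(f"{vot:+4d} vs {vot + 20:+4d} msec: "
          f"predicted ABX = {predicted_abx(p1, p2):.3f}")

Run as written, the prediction peaks near the labeling boundary and sits at chance elsewhere. Note what this label-only idealization cannot produce: the above-chance discrimination of voicing-lead stimuli that our two-category labelers actually showed, which is one reason such findings implicate auditory as well as phonetic coding of the stimuli.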
Of the original twelve subjects, six passed the 85% criterion in training and were invited back for the remaining sessions. Subjects who failed to meet this criterion all responded to the three training stimuli at levels well above chance, indicating that they were able to use the available response labels. These results were anticipated since the previous experiment demonstrated some variability among individual subjects in VOT identification. The average identification functions for the six subjects who reached criterion are shown in the left-hand panel of figure 5.5. These were the data collected on Day 2 of testing. As expected, these six subjects were highly consistent in labeling stimuli in the voicing-lead region of the continuum, despite receiving only a very modest number of training trials on the three VOT exemplars. Moreover, the very steep slopes in the group identification function indicate the presence of three discrete, well-defined perceptual categories. The slope in the minus VOT region is much steeper in this experiment than in the previous experiments in which no training procedures were used.
Figure 5.5 Average identification and ABX-discrimination functions for subjects meeting an 85% identification criterion. Data in the left panel show the average identification function on day 2; the data in the right panel show both average identification and ABX-discrimination functions combined over days 3 and 4. From Pisoni et al. 1982.

The average identification and ABX-discrimination data collected on Days 3 and 4 are shown in the right-hand panel of figure 5.5. As observed in the previous experiment, the ABX-discrimination functions obtained here also show peaks corresponding to the boundaries between the voicing categories and troughs corresponding to the centers of well-defined perceptual categories. The results of this training study demonstrate that, in a short period of time, native English-speaking adults can reacquire nonnative contrasts in voicing relatively easily using simple laboratory training techniques. In another training study, McClaskey, Pisoni, and Carrell (1983) showed that knowledge about VOT perception gained from discrimination training on one place of articulation (e.g., labial) can be transferred readily to another place of articulation (e.g., alveolar) without any additional training on the specific test stimuli. The results of the transfer experiment for one group of subjects are shown in figure 5.6.

Figure 5.6 Average identification functions for two- and three-category labeling of labial stop consonants and transfer labeling for alveolar stop consonants. From McClaskey, Pisoni, and Carrell 1983.

The data show that, independently of the specific stimuli used in the original training sessions, naive subjects can learn very detailed and specific information about the temporal and spectral properties of VOT. Taken together, the results of our training experiments on voicing perception demonstrate quite clearly that naive English listeners can reliably perceive differences in the prevoicing region of the VOT continuum. These findings differ markedly from the results reported in earlier investigations of VOT perception by Strange and Jenkins (1978), which indicated that prior linguistic experience substantially diminishes perceptual sensitivity to nonphonemic voicing contrasts in adults. Our results also contradict the major conclusions of Strange and Jenkins (1978), who argued that short-term laboratory training procedures are ineffective in modifying speech perception (see also Strange and Dittman 1984). Given appropriate experimental procedures, our results show that naive subjects can perceive an additional perceptual contrast in voicing quite easily after a very short training period. Moreover, subjects can transfer their knowledge of VOT to new stimuli with a different place of articulation. Our findings are robust and indicate that the underlying sensory-perceptual mechanisms have not been permanently modified or lost due to prior linguistic experience. These findings rule out strong versions of universal theory and attunement theory, which suggest that the ability to discriminate nonnative phonemic differences is permanently lost in the adult listener. Instead, the overall pattern of results suggests an account based on shifts in the listener's selective attention. To see this point more clearly, let us compare the findings described above with two unsuccessful efforts to modify the perception of adult listeners. In an experiment specifically designed to study the learning of a new contrast in voicing, Lisker (1970) attempted to train native speakers of Russian to distinguish between voiceless unaspirated and voiceless aspirated stops, a voicing contrast that is distinctive in English but not in Russian. Although the Russian subjects learned to identify the endpoint stimuli (i.e., +10 and +60 msec VOT) slightly better than chance, their performance was not the same for both stimuli. While the majority of Lisker's subjects could differentiate among the training stimuli and could use
two discrete labeling responses, their performance on this task was neither consistent nor reliable. Since immediate feedback for correct responses in identification was not provided after each training trial, the subjects probably had a great deal of difficulty in determining to which specific acoustic attributes of the stimuli they were to attend selectively. Another attempt to modify voicing perception in adults was carried out by Strange (1972). She tried to train a small number of college-age students to identify and discriminate differences in the lead region of the VOT continuum in which the Thai voiced/voiceless-unaspirated boundary occurs. In three experiments, Strange trained subjects using an oddity-discrimination paradigm with immediate feedback, an identification paradigm with delayed feedback, and a scaling procedure. In general, Strange found only small changes in perceptual categories, with performance improving in the target region of the VOT continuum when identification and scaling procedures were employed. However, across each of her experiments, subjects failed to generalize from one VOT series to another. Moreover, her results were marked by a high degree of variability. Based on the outcome of these training experiments, Strange and Jenkins offered the following conclusions about the effects of laboratory training on speech perception:

The results of these three studies show that, in general, changing the perception of VOT dimensions by adult English speakers is not easily accomplished by techniques that involved several hours of practice spread over several sessions. Although performance on each of the kinds of tests did change somewhat with experience, only the identification training task (which involved practice with general feedback only) produced categorical results approaching those found for native speakers of Thai. (Strange and Jenkins 1978, 154)

Why have previous researchers been unsuccessful in selectively modifying the perception of VOT in adults? Is there something peculiar about the specific speech stimuli used, or might the differences be a consequence of the particular experimental methods employed? To answer these questions, we turn first to an examination of the earliest cross-language speech perception experiments on VOT carried out by Lisker and Abramson (1970). They found that subjects could readily identify synthetic stimuli varying in VOT into the phonological categories of their native language. Subjects were required to name initial stop consonants by identifying them with words from their native language. Unfortunately, as far as we know, Lisker and Abramson never asked their subjects to identify the synthetic stimuli into additional perceptual categories in any of their experiments. Although subjects in the Lisker and Abramson cross-language experiments might have been able to use additional categories, the results of their oddity-discrimination tests indicated that subjects did not reliably discriminate within-category differences in VOT. When discrimination is measured in the oddity paradigm, subjects are strongly encouraged to adopt a context-coding mode of response (Durlach and Braida 1969). In order to solve the discrimination problem, the stimuli are immediately recoded into a more durable phonetic form for maintenance in short-term memory (see Pisoni 1973, 1975). A context-coding mode of perception is also favored by the high degree of uncertainty present in the oddity-discrimination task.
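The force of the context-coding argument can be illustrated with a small simulation. The Python sketch below implements a pure label-based strategy for the oddity task: each of the three intervals is covertly labeled, the interval with the odd label is chosen when one exists, and the listener otherwise guesses. This is a deliberately simplified illustration of the recoding idea, not an implementation of Durlach and Braida's (1969) theory, and the labeling probabilities are invented for the example.

import random

def covert_label(p_a):
    # Covertly label one presentation as category 'A' with
    # probability p_a, otherwise as category 'B'.
    return 'A' if random.random() < p_a else 'B'

def oddity_trial(p_standard, p_odd):
    # One oddity trial: two tokens of the standard stimulus and one
    # odd token in random order. Returns True if the odd one is chosen.
    items = [('standard', p_standard), ('standard', p_standard),
             ('odd', p_odd)]
    random.shuffle(items)
    labels = [covert_label(p) for _, p in items]
    for i in range(3):
        if labels.count(labels[i]) == 1:   # exactly one odd label
            choice = i
            break
    else:                                  # all labels agree: guess
        choice = random.randrange(3)
    return items[choice][0] == 'odd'

def proportion_correct(p_standard, p_odd, n=20000):
    return sum(oddity_trial(p_standard, p_odd) for _ in range(n)) / n

# Within-category pair: both stimuli almost always labeled alike.
print(proportion_correct(0.02, 0.05))   # close to chance (1/3)
# Cross-boundary pair: the stimuli are usually labeled differently.
print(proportion_correct(0.10, 0.90))   # well above chance

However large the raw acoustic difference between the within-category pair, a listener who retains only the labels cannot exceed chance by much, which is exactly the pattern these oddity studies reported.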
The use of a roving standard from trial to trial effectively mixes easy trials with hard trials. Finally, immediate feedback was not provided during identification or discrimination testing in the previous experiments. The absence of feedback in complex discrimination tasks, such as the oddity procedure, promotes the use of highly over-learned phoneme labels and discourages fine discrimination of phonologically nondistinctive information. Thus, the experimental situation may provide an indication of how people apply familiar labels but may not assess their underlying sensory capacity to differentiate among speech sounds. Under the testing conditions described above, native listeners apparently have great difficulty in determining precisely to which acoustic attributes of the speech signal they are supposed to attend. Thus, subjects may consistently fail to discriminate fine phonetic differences within a perceptual category if they adopt a very lax criterion for detecting small differences between speech sounds. These observations, taken together with the present results, indicate that the poor performance in discrimination of VOT contrasts in earlier studies is clearly not due to a capacity limitation of any kind in
processing the sensory input. We suspect that the particular combination of experimental tasks and their order of presentation may have been the major methodological factors responsible for the failure to observe discrimination of new phonetic contrasts. When the results of our recent experiments on VOT are considered in light of these previous findings, as well as the conclusions of Strange and Jenkins, it is apparent that numerous methodological factors contributed to the poor performance observed by other investigators. Nevertheless, it has generally been assumed that the failure to reacquire a nondistinctive voicing contrast was somehow related to a permanent change of the perceptual or sensory mechanisms of the listener, thus providing strong support for the universal and attunement theories. In short, the data indicate that mature English adults are quite capable of discriminating and categorizing acoustic information that is not phonologically distinctive in their native language. We conclude that the underlying sensory, perceptual, and cognitive mechanisms are not lost or realigned and that the attentional strategies used in speech perception are far from being as rigid as a number of investigators have assumed in the past. These conclusions are appropriate for voicing perception in stops. However, it remains to be seen if they apply to the perception of other speech contrasts as well. In the next section, we consider the case of /r/ and /l/ perception, which differs in several important respects from the voicing contrast.

Perception of /r/ and /l/

A great deal of research in speech perception has been concerned with the perception of stop consonants varying in VOT. This was due, in part, to the availability of high-quality synthetic stimuli that could be used to test interesting experimental hypotheses quite easily in new paradigms using a variety of subject populations (Eimas et al. 1971; Kuhl and Miller 1975; Lasky, Syrdal-Lasky, and Klein 1975; Streeter 1976a, 1976b). More recently, investigators have turned their attention to several other phonetic contrasts in order to study the effects of early experience on perceptual development (Best, McRoberts, and Sithole 1988; Werker 1989; Werker and Tees 1984). One speech contrast that has been investigated in some detail is the /r/-/l/ distinction in English (Goto 1971; Mochizuki 1981). In the first cross-language study of /r/ and /l/, Goto (1971) studied a group of native Japanese subjects who were fluent in English and found that they had great difficulty discriminating /r/ and /l/ produced by native speakers of English, even though these Japanese subjects could produce the contrast reliably in their own utterances. Miyawaki et al. (1975) tested both English and Japanese listeners with a set of synthetic speech stimuli and a set of nonspeech stimuli containing only the formant transitions appropriate for contrasting /r/ and /l/. Both groups of subjects were required to discriminate pairs of stimuli selected from each test series using an oddity-discrimination test. For the English listeners, discrimination of the speech stimuli was nearly categorical. Discrimination of pairs of stimuli that were perceived as different phonemes was very good, whereas discrimination of pairs of stimuli that were perceived as the same phoneme was very poor. In contrast, the Japanese listeners' discrimination of the speech stimuli was close to chance for all comparisons.
Discrimination of the nonspeech stimuli, on the other hand, was comparable for both the English and Japanese listeners and was significantly above chance. The results of this study were interpreted by Miyawaki and her colleagues as additional support for an effect of linguistic experience on speech perception. Familiarity with the /r/-/l/ distinction plays a major role in a listener's ability to correctly discriminate these stimuli. Furthermore, the differences in discrimination between the speech and nonspeech stimuli suggested that the effects of linguistic experience were restricted to the phonetic coding of the acoustic signals as speech and not to the auditory processing of the underlying acoustic cues to the /r/-/l/ contrast. More recent studies on the perception of /r/ and /l/ by Japanese listeners are consistent with these earlier findings. For example, MacKain, Best, and Strange (1981) found that, even after several years of living in an English-speaking
environment, adult Japanese listeners still differed from native speakers of English in their identification and discrimination of synthetic /r/ and /l/ stimuli. Sheldon and Strange (1982) showed that Japanese listeners have difficulty perceiving natural tokens of /r/ and /l/ produced by native speakers of English. Other studies using Japanese listeners have shown that the perception of /r/ and /l/ is highly dependent on phonetic context (Gillette 1980; Mochizuki 1981; Sheldon and Strange 1982). Performance is generally poorest for the perception of /r/ and /l/ in initial-singleton position or initial clusters and best for /r/ and /l/ in final position. Acoustic analyses of /r/ and /l/ in several different phonetic environments have revealed large systematic differences in the durations of the formant transitions as a function of phonetic environment (Dissosway-Huff, Port, and Pisoni 1982). Thus, acoustic differences, coupled with the phonotactic constraints of Japanese, which do not allow for consonant clusters, suggest two possible sources of the positional effect. In a more recent study, Strange and Dittman (1984) attempted to modify Japanese listeners' perception of /r/ and /l/. Although Strange and Dittman (1984) were primarily concerned with assessing the generalization of the training procedures to naturally produced English words, they did raise several important criticisms of the earlier training studies on VOT. Their criticisms of the VOT-training experiments and many of the previous studies of VOT perception are well motivated in our view and played an important role in the design of their study and our own, both of which are described below. Strange and Dittman argue that earlier training studies used highly controlled synthetic stimuli instead of tokens of natural speech. When synthetic speech stimuli are used in perceptual experiments, subjects are exposed to highly impoverished stimuli that contain only the minimal acoustic cues necessary to distinguish particular phonetic contrasts (Liberman et al. 1967). Natural speech, in contrast, is extremely redundant. Each phonetic contrast has multiple acoustic cues encoded in the speech signal to aid in maintaining intelligibility under very adverse conditions. When synthetic speech stimuli are used in training experiments, it is very likely that listeners focus their attention only on the cues that are present in the signal and will fail to generalize to other stimuli containing multiple redundant cues to the same phonetic contrast. It is interesting to note here that Mochizuki (1981) actually found very high levels of performance for naturally produced tokens of /r/ and /l/, although only the results from her synthetic speech conditions tend to be cited in the literature (MacKain, Best, and Strange 1981; Strange and Dittman 1984). Strange and Dittman also point out that all of the previous training studies used nonsense syllables rather than English words. The use of nonsense syllables as stimuli in training experiments is problematic for several reasons. First, nonsense syllables remove any lexical contributions to recognition and, consequently, focus the listener's attention on only the individual phonemes that distinguish the test syllables. Second, in most of the previous training studies using nonsense syllables as stimuli, the range of phonetic environments was very small. Thus, subjects received very little stimulus variability during learning.
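To see what the alternative looks like, consider the construction of a high-variability training set of the sort these criticisms call for. In the Python sketch below, minimal pairs in each phonetic environment are crossed with several talkers; the environments anticipate table 5.1 below, but the word pairs shown are only examples and the talker identifiers are placeholders.

import random

# Hypothetical minimal pairs, grouped by the phonetic environment of
# the /r/-/l/ contrast. The pairs are examples, not actual stimulus
# lists from any study.
minimal_pairs = {
    "initial cluster":   [("brush", "blush"), ("grass", "glass")],
    "initial singleton": [("rake", "lake"), ("rock", "lock")],
    "intervocalic":      [("pirate", "pilot")],
    "final cluster":     [("mart", "malt"), ("board", "bold")],
    "final singleton":   [("mare", "mail"), ("pear", "pail")],
}
talkers = ["talker1", "talker2", "talker3", "talker4", "talker5"]

def build_training_list(pairs_by_env, talkers):
    # Cross every word of every minimal pair with every talker, so
    # the contrast is heard in many voices and many environments.
    trials = [
        (talker, env, word)
        for env, pairs in pairs_by_env.items()
        for pair in pairs
        for word in pair
        for talker in talkers
    ]
    random.shuffle(trials)
    return trials

training = build_training_list(minimal_pairs, talkers)
print(len(training), "trials; first:", training[0])

The design intuition is that hearing the same contrast in many voices and contexts forces the learner to attend to the properties of the contrast that survive that variation.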
The lack of stimulus variability may prevent the development of robust perceptual categories that would be helpful in later tests of transfer and generalization with real words, in which there is typically a great deal of variability across different phonetic environments. Strange and Dittman further argue that there are important differences in the phonetic and phonological distributional properties of voicing in stop consonants as compared with the distributional properties of /r/ and /l/. In particular, the voicing contrasts that were used in the previous training studies on VOT perception were allophonic in English. Listeners are, in fact, exposed to these sounds in their environment, even though the contrasts are not used distinctively in English. As Strange and Dittman point out, this is not true for the phonemes /r/ and /l/, which are not allophonic in Japanese. Thus, native speakers of Japanese are not exposed to these contrasts during language acquisition (Werker 1989). Finally, Strange and Dittman (1984) note that the acoustic cues underlying the voicing distinction in stops are markedly different from the complex temporal and spectral changes used to distinguish /r/ and /l/ in various phonetic environments. Thus, voicing may somehow be psychophysically more distinctive or robust and, therefore, more discriminable to listeners than the acoustic cues that underlie other speech contrasts (see Burnham 1986). Because the acoustic correlates of phonetic contrasts differ widely and have quite different psychological spaces, it is often difficult to equate the underlying sensory scales (Lane 1965). However, if a phonetic contrast is discriminable on a psychophysical
basis (i.e., differences are above threshold), then the relative differences in perception between various speech contrasts must be considered within the domain of selective attention rather than viewed simply as a basic limitation on sensory processing of the stimulus input (see Nosofsky 1986, 1987). The distinction between a true sensory loss and a loss due to shifts in selective attention has not been widely recognized in the speech perception literature and is often treated as having the same underlying basis (see Burnham 1986). Given these criticisms of the earlier training studies on VOT, Strange and Dittman (1984) attempted to modify Japanese listeners' perception of /r/ and /l/ by measuring changes in identification accuracy using a set of naturally produced real English words. Subjects were trained in a pretest-posttest design. During training, subjects participated in an AX fixed-standard discrimination paradigm that employed a synthetic rock-lock continuum. In the pretest and the posttest, subjects were required to identify a member of a naturally produced minimal pair using a two-alternative, forced-choice identification test. The effectiveness of the training procedure was assessed by comparing the initial levels of performance with naturally produced words to the performance with the same materials after discrimination training with the synthetic speech. Strange and Dittman (1984) found that, although discrimination performance improved gradually for all subjects over the training sessions, the effects of discrimination training did not generalize to the naturally produced words. Comparisons of pretraining and posttraining categorical perception tests using the synthetic rock-lock training stimuli did show some changes in performance for seven of the eight subjects. Furthermore, five of the seven subjects also showed improvement and more categorical-like perception in identification and oddity-discrimination tests on an acoustically dissimilar rake-lake synthetic test series. Based on these results, Strange and Dittman concluded that "modification of perception of some phonetic contrasts in adulthood is slow and effortful" and "required intensive instruction and considerable time and effort at least for some types of phonetic contrasts" (Strange and Dittman 1984, p. 142). As in the studies of voicing perception described earlier, we believe that a number of factors may have been responsible for Strange and Dittman's failure to find improvement in the perception of /r/ and /l/ in naturally produced words. Some of these factors are primarily methodological in nature and are easy to modify, but others are more conceptual in scope and reflect important theoretical biases. A close examination of the design of Strange and Dittman's study reveals a number of interesting theoretical assumptions that were made about what listeners learn in laboratory training experiments of this kind. An examination of these assumptions provides some insight into why they failed to find evidence of generalization to naturally produced words. First, let us consider Strange and Dittman's AX discrimination-training procedure. Based on earlier successful work by Carney, Widin, and Viemeister (1977), this procedure was employed to improve listeners' perception of within-category acoustic differences by focusing attention on the subtle acoustic cues that differentiate synthetic tokens of rock and lock.
There is now an extensive literature demonstrating that low-level sensory information is extremely fragile and often quite difficult to maintain in sensory memory without additional recoding into more permanent representations in short-term memory (Shiffrin 1976). Except under special testing conditions, such as the ones used by Carney, Widin, and Viemeister (1977), listeners in speech-perception experiments typically have access to only the end product of this process, namely, the phonetic representations (see Pisoni 1973; Fodor 1983). Discrimination-training procedures that focus the listener's attention on low-level acoustic information in sensory memory are therefore unlikely to be very successful in promoting robust generalization to conditions in which naturally produced words are used as test stimuli (see Jamieson and Morosan 1986). Discrimination training may generalize from the specific training stimuli to other synthetic stimuli having the contrast in the same phonetic environment. However, it seems unlikely that training listeners to perceive small within-category differences among synthetic /r/'s and /l/'s will be of much help in identifying these contrasts in other phonetic environments using natural speech, in which there is typically a great deal of acoustic-phonetic variability.
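One way to see why such training might transfer poorly is to give the two-process idea a toy formalization. In the Python sketch below, sensitivity carried by the raw auditory trace decays with the interstimulus interval, while sensitivity carried by a categorical phonetic code does not; the exponential decay, the d' values, and the time constant are all our illustrative assumptions, not estimates from any study.

import math

def trace_dprime(d0, isi_ms, tau_ms=2000.0):
    # Toy assumption: discriminability supported by the raw auditory
    # trace decays exponentially with the interstimulus interval.
    return d0 * math.exp(-isi_ms / tau_ms)

def effective_dprime(d_auditory, d_phonetic, isi_ms):
    # The listener relies on whichever code is more informative.
    return max(trace_dprime(d_auditory, isi_ms), d_phonetic)

# A within-category pair: discriminable from the auditory trace alone
# (d' = 1.5) but labeled identically (phonetic d' = 0).
for isi in (100, 500, 2000, 5000):
    print(f"ISI {isi:5d} msec: d' = {effective_dprime(1.5, 0.0, isi):.2f}")

On this picture, tasks and delays that force recoding into the durable phonetic form will systematically underestimate the sensory capacity that a differently structured task could reveal.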
Therefore, the outcome of Strange and Dittman's study is not surprising. Jamieson and Morosan (1986) have made similar points with regard to designing training methods to modify speech perception and have suggested that the introduction of stimulus variability will help listeners to respond accurately to the complexity of natural speech. A second issue deals with the theoretical assumptions concerning the kind of knowledge listeners acquire in discrimination-training experiments. While not explicitly stated in their paper, Strange and Dittman implicitly assumed that, by training subjects on /r/ and /l/ in syllable-initial position, their subjects would be able to generalize what they learned to other phonetic environments. We believe this assumption implies that subjects are learning about fairly abstract, context-independent, perceptual units, such as phonemes. Alternatively, it may be the case that subjects acquire highly stimulus-specific information about the acoustic cues for /r/ and /l/ during discrimination training. The training and knowledge gained about one phonetic environment may not generalize easily to other environments without explicit presentation of exemplars from these environments. Again, the results reported by Strange and Dittman are consistent with this observation. They found some improvements in identification and discrimination of synthetic tokens that were phonetically similar to the stimuli used in training, but they failed to find evidence of generalization to the /r/-/l/ contrast in new phonetic environments using naturally produced English words. Thus, subjects were probably not learning about abstract, context-independent, perceptual units, such as phonemes, but instead, were encoding specific details of the stimulus into their developing representations. Finally, Strange and Dittman used highly controlled synthetic speech stimuli in training and in subsequent tests of identification and oddity-discrimination. However, they tested their subjects for generalization with naturally produced words, using a minimal-pair forced-choice identification test. In the AX-discrimination-training tests and the subsequent categorical-perception tests, subjects were required to focus their attention on the minimal acoustic cues used to distinguish phonemes. However, in the minimal-pair test, subjects were required to identify words contrasting in /r/ and /l/, not individual phonemes, and they were required to do this for /r/ and /l/ in a variety of new phonetic environments. The changes in tasks and stimuli across different phases of the experiment may have been partially responsible for the failure to obtain generalization.

New Data on the Perception of /r/ and /l/

Recently, Logan, Lively, and Pisoni (1991) carried out a training study to investigate the conditions under which a group of native Japanese speakers could learn to identify naturally produced words contrasting /r/ and /l/ in a variety of phonetic environments. The experiment was motivated, in part, by the results of the earlier study carried out by Strange and Dittman (1984) and, in part, by our previous training studies on the perception of stop consonants varying in VOT. In designing this study, we wanted to develop a set of training procedures that would not only produce changes in the perception of /r/ and /l/ in real words but would also prove useful in settings outside the laboratory.
We began by adopting the same pretest-posttest design that Strange and Dittman (1984) used. In fact, so that direct comparisons could be made between the two studies, we used the same sixteen minimal pairs of test words and the same two-alternative, forced-choice identification test. However, our training procedures differed in several important ways from the methods used by Strange and Dittman (1984). The first change in the training procedure involved replacing the AX-discrimination test with a two-alternative, forced-choice identification test. This was done so that the responses made in training would be directly compatible with the responses made in generalization testing. Maintaining response compatibility encouraged subjects to use the same type of acoustic information throughout the experiment. Moreover, we used an identification test rather than a discrimination test. Identification tests focus attention on similarities among stimuli, while discrimination tests require the resolution of subtle within-category differences.
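The trial structure of identification training with immediate feedback is simple enough to sketch directly. The Python fragment below is a minimal illustration of one two-alternative forced-choice block; the playback and response-collection functions are hypothetical stand-ins for the laboratory hardware, and the repeat-after-error rule anticipates the cue-light procedure described under Training below.

import random

def run_identification_block(trials, play, get_response):
    # One block of two-alternative forced-choice identification
    # training with immediate feedback. Each trial is a tuple of
    # (stimulus, alternatives, correct_word); play and get_response
    # are hypothetical stand-ins for playback and button presses.
    n_correct = 0
    random.shuffle(trials)
    for stimulus, alternatives, correct_word in trials:
        play(stimulus)
        response = get_response(alternatives)   # e.g., "rock" or "lock"
        if response == correct_word:
            n_correct += 1
        else:
            # Feedback after an error: indicate the correct word and
            # present the stimulus again.
            print("correct answer:", correct_word)
            play(stimulus)
    return n_correct / len(trials)

demo_trials = [("rock.wav", ("rock", "lock"), "rock"),
               ("lock.wav", ("rock", "lock"), "lock")]
accuracy = run_identification_block(
    demo_trials,
    play=lambda wav: None,                         # stub playback
    get_response=lambda alts: random.choice(alts)  # stub responder
)
print(f"proportion correct: {accuracy:.2f}")

Because training and generalization testing share this response format, whatever the listener learns in training can be expressed directly at test.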
The second change in training involved the use of naturally produced tokens of real words contrasting /r/ and /l/ in five different phonetic environments. Recall that Strange and Dittman trained subjects using synthetic speech stimuli that contrasted /r/ and /l/ only in initial position. However, in their pretest and posttests, they presented /r/-/l/ in four different phonetic environments using naturally produced words. Again, by training subjects to selectively attend to acoustic cues in words contrasting /r/ and /l/ in several different phonetic environments, we hoped that they would focus their attention on the relevant criterial features of these different contexts and would use this information when presented with novel words in subsequent generalization tests. Examples of the test words used in the five phonetic training environments are shown in table 5.1.

Table 5.1 Phonetic environments
Environment                                   Examples
Initial-consonant cluster (c r/l v . . .)     brush-blush, grass-glass
Initial singleton (r/l v c)                   rake-lake, rock-lock
Intervocalic (. . . v r/l v . . .)            pirate-pilot, oreo-oleo
Final-consonant cluster (c v r/l c)           mart-malt, board-bold
Final singleton (. . . v r/l)                 mare-mail, pear-pail
Source: Logan, Lively, and Pisoni 1991

Third, the naturally produced tokens used in the training phase were produced by five different talkers. This presented listeners with a wide range of stimulus variability during learning. In order to dissociate talker-specific and item-specific learning effects, none of the items used during the training phase appeared in the pretest or posttest conditions. Additionally, the pretest and posttest items were produced by a different talker than the ones used to produce the training items. The use of multiple test items produced by several different talkers was motivated, in part, by the desire to present the subjects with a great deal of stimulus variability during training. We hoped that this would encourage subjects to form robust phonetic representations for /r/ and /l/ (see Jamieson and Morosan 1986; Lively, Pisoni, and Logan 1991). Finally, in addition to assessing the effectiveness of the training procedures by measuring transfer from the pretest to the posttest, we also included two additional generalization tests using novel words contrasting /r/ and /l/ in a variety of phonetic environments. These words were not presented in either the pretest-posttest or training phases of the experiment. One generalization test used an unfamiliar talker, whereas a second generalization test used a familiar talker who had produced a set of the training items. We hoped these additional generalization tests would provide detailed information about what aspects of the stimuli subjects were encoding into long-term memory. For ease of exposition, the results of our experiment will be presented in three sections below, each corresponding to the pretest-posttest data, the training data, and the generalization data. In all cases, we will be examining performance in perceiving naturally produced words containing /r/ and /l/ using the minimal-pair identification test. In this procedure,
subjects were required to identify a word on each trial using one of two possible response alternatives. Our subjects were six native speakers of Japanese who were enrolled as students at Indiana University. They had lived in the U.S. for periods ranging from six months to three years at the time of testing. These subjects were comparable to those used by Strange and Dittman (1984). Pretest-Posttest In comparing subjects' performance on the pretest to their performance on the posttest, a significant increase in performance was observed (78.1%, pretest versus 85.9%, posttest). While the absolute difference of approximately 8% was not large, the effect was highly significant and was page_147 Page 148 observed for every one of the six subjects. The present findings are quite different from those reported by Strange and Dittman using the same pretest-posttest items. They found no significant differences in performance between pretest and posttest after training. Our results demonstrate that the major factor distinguishing the two studies must be the training procedures since the pretest-posttest items and procedures were identical. Before examining the training data, however, it is useful to look at the pretest-posttest results broken down by phonetic environment to gain some insight into the nature of the perceptual learning that occurred during the training phase. The percentage of correct responses for each of the four phonetic environments in the pretest-posttest is plotted in figure 5.7 which shows two important trends. First, overall performance on /r/ and /1/ differs substantially across the four phonetic environments. Performance is best for /r/ and /1/ as singletons in final position and worst for /r/ and /1/ in clusters in initial position. Second, the effects of training appear to have had the largest influence on /r/ and /1/ in initial clusters and in intervocalic position. Training produced almost no change in performance for /r/ and /1/
Figure 5.7 Average percent of correct identification of test words in pretest and posttest conditions as a function of phonetic environment. From Logan, Lively, and Pisoni 1991.
The latter is probably due to a ceiling effect: performance before training on these items was quite high. The absence of any change in performance for the singletons in initial position may be due to a variety of factors, including the inherent discriminability of /r/ and /l/ in this environment and the attentional focus of the listener. Salient cues to the identity of /r/ and /l/ segments may have been less available to
listeners when responding to initial-position targets than when targets were in word-final position. Taken together, however, the present results show not only that perceptual learning took place but also that the learning was apparently highly context dependent. The lack of uniformity across the four phonetic environments implies that subjects were attending to and encoding different acoustic cues in the different environments to which they were exposed during training. Rather than learning about an abstract, context-independent unit such as a phoneme, subjects were apparently learning very detailed, context-dependent information about /r/ and /l/ across these different environments.

Training

Subjects were trained using a two-alternative, forced-choice identification test for a total of fifteen days over a three-week period. Each training session was divided into two blocks of 136 trials. On each day of training, subjects heard two repetitions of each stimulus word. Only tokens produced by a single talker were presented on each day of training. Feedback was given after every trial. If a subject made a correct response, the next trial was presented. If a subject made an incorrect response, a cue light indicated the correct response, and the stimulus word was repeated. Performance improved over the three-week training period. The largest change occurred during the first two weeks. The results suggest that subjects rapidly learned to attend to the salient acoustic cues differentiating /r/ from /l/. Learning to attend to more subtle cues, which might have increased performance even more, appears to require more training than we provided. Figure 5.8 shows the effects of phonetic environment as a function of the week of training. As in the pretest-posttest data, performance varied across the different phonetic environments. The best performance was observed again for /r/ and /l/ as singletons in final position, and the worst performance was observed for /r/ and /l/ in initial clusters. The same overall pattern of results observed for the five different phonetic environments is replicated each week.
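The trial-level procedure described at the beginning of this section can be summarized in a minimal sketch. The code below is an illustration only: the function names, the simulated listener who responds correctly with a fixed probability, and the example word pair are assumptions for exposition, not details taken from Logan, Lively, and Pisoni (1991).

import random

# One block of two-alternative forced-choice training with feedback:
# 136 trials, and repetition of the stimulus after each error.
def run_training_block(stimuli, respond):
    """stimuli: list of (token, target_word, foil_word) triples.
    respond: callable taking (token, alternatives) and returning a choice."""
    n_correct = 0
    random.shuffle(stimuli)
    for token, target, foil in stimuli:
        choice = respond(token, (target, foil))
        if choice == target:
            n_correct += 1                  # correct response: next trial follows
        else:
            # incorrect response: a cue light marks the correct alternative,
            # and the stimulus word is presented again (not scored here)
            respond(token, (target, foil))
    return n_correct / len(stimuli)

# A simulated listener who identifies the target with probability 0.8.
def listener(token, alternatives):
    target, foil = alternatives
    return target if random.random() < 0.8 else foil

block = [(f"token_{i}", "rock", "lock") for i in range(136)]
print(f"block accuracy: {run_training_block(block, listener):.3f}")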
Figure 5.8 Average percent of correct identification of words used in training as a function of the week for each phonetic environment. From Logan, Lively, and Pisoni 1991.

Figure 5.8 suggests inherent differences in perception of /r/ and /l/ across different phonetic environments (Mochizuki 1981; Dissosway-Huff, Port, and Pisoni 1982). Because the training procedures were under the control of a laboratory computer, we were able to record subjects' response times during training. Figure 5.9 shows the average response times for correct responses to test items in the five phonetic environments as a function of the week of training. The overall pattern of the response times reflects
identification accuracy. Response times were fastest for /r/ and /l/ in final position and slowest for /r/ and /l/ in initial clusters and intervocalic position. For those phonetic environments in which identification was initially very high (i.e., final singletons and final clusters), the response times decreased consistently across each successive week of training. For the other three phonetic environments, in which performance was initially poor, the response times show a very different pattern over the three-week period, perhaps reflecting the subjects' initial difficulty in focusing their attention on the appropriate acoustic cues for /r/ and /l/ in these particular phonetic environments.
Figure 5.9 Responses to words used in training as a function of the week for each phonetic environment. From Logan, Lively, and Pisoni 1991.

The inverted U-shaped functions for these phonetic environments suggest that once subjects learned to which aspects of the stimulus to attend, their performance began to improve and their latencies decreased substantially. In addition to the variability in identification performance as a function of the phonetic environment, we also observed differences in performance as a function of the talkers who produced the stimulus items used in training. Table 5.2 shows the percentage of correct responses in training for each of the five talkers.

Table 5.2 Mean training accuracy by talker
Talker    Mean percent correct
1         79.8%
2         79.7%
3         81.9%
4         87.5%
5         85.0%

Words produced by training Talkers 4 and 5 were more intelligible than words produced by the other three talkers. In pretesting these items with native speakers of English, no reliable differences were observed among any of the talkers. The results of the training phase of the experiment show clearly that the perceptual learning of /r/ and /l/ is highly context dependent. Large differences in accuracy and response latency were observed in the identification of words containing /r/ and /l/ as a function of phonetic environment and of the talker. While there was a trend for performance to improve over the three weeks of training, the largest gains were made during the first two weeks.
The pattern of response times suggests that subjects actively tried to learn the specific criterial properties that distinguish /r/ and /l/ in different phonetic environments. Some environments were relatively easy to learn, while others appeared to be more difficult. Even after three weeks of training, during which moderate overall improvements in performance were observed, the identification of /r/ and /l/ as singletons in initial position did not change reliably from pretest to posttest. In contrast, perception of /r/ and /l/ in initial consonant clusters and intervocalic position showed substantial changes in performance after training.

Generalization with Novel Words

At the conclusion of the posttest, subjects were given two tests of generalization. Test TG1 consisted of novel words produced by a novel talker, while test TG2 consisted of novel words produced by Talker 4, who was used during the training phase. These results, which are based on only three subjects, show that identification of novel words from condition TG2, a familiar talker, tends to be better than identification of novel words from TG1, a novel talker (83.7% correct responses, TG2 versus 79.5% correct responses, TG1). Apparently, familiarity with a talker's voice improves identification performance on novel words. Similar findings have been reported recently by Mullennix, Pisoni, and Martin (1989) and Nygaard, Sommers, and Pisoni (in press). These results should be qualified by pointing out that generalization was tested with the best talker from the training set. Had generalization been tested with one of the poorer talkers, a different pattern of results might have been found. The present results suggest the intriguing possibility that listeners are encoding context-sensitive information about both the phonetic environment in which the contrasts occur and the voice of the talker producing the contrast. Acoustic information about a talker's voice may be encoded along with a phonetic representation of the input and stored in long-term memory for later use (see Goldinger 1992; Goldinger, Pisoni, and Logan 1991; Martin et al. 1989). The stimulus variability generated by exposure to speech produced by different talkers may play a central role in developing robust perceptual categories that are not defined exclusively by a small number of criterial acoustic features. The stimulus variability produced through these experimental manipulations may be similar to the kind of stimulus variability encountered by nonnative speakers who have had intensive conversational experience with English (MacKain, Best, and Strange 1981). Subjects involved in conversational instruction with native speakers apparently do show improved perceptual skills with nonnative speech contrasts (Gillette 1980; MacKain, Best, and Strange 1981; Mochizuki 1981). Conversational experience may, therefore, provide exposure to speech produced by diverse talkers producing phonetic contrasts in a wider variety of environments than would be encountered in a laboratory setting (see Goto 1971). Regardless of the precise explanation, the present findings demonstrate that stimulus variability appears to be useful in perceptual learning and may contribute to the development of robust perceptual categories (Posner and Keele 1968).
General Discussion

The results of the training experiments reported in this chapter demonstrate quite convincingly that listeners can learn to perceive speech contrasts that are not distinctive in their native language. These findings, which were obtained with two very different phonetic contrasts (the voicing distinction in stop consonants cued by VOT and the /r/ versus /l/ contrast), show that the developmental decline in discriminative capacities, and the associated perceptual loss, is not permanent: the affected discriminative abilities can be reacquired in a relatively short period of time using relatively simple laboratory-training procedures. Based on these findings, we believe that the previous conclusions about the effects of early linguistic
experience on speech perception are unjustified and have been greatly exaggerated in the literature on perceptual development. There can be little question that a major aspect of the development of speech perception in infants and young children involves some form of developmental change and perceptual reorganization as a function of specific experiences in the language-learning environment. The acquisition of language requires an intensive period of vocal learning, during which time the young child begins to acquire the local dialect and lexicon of his or her language-learning environment. We are just beginning to develop adequate theoretical accounts of how this process takes place and how the sensory prerequisites and phonetically relevant capabilities are shaped, modified, or tuned to the important phonetic distinctions in the language-learning environment (see Aslin 1985; Aslin and Pisoni 1980; Best, McRoberts, and Sithole 1988; Jusczyk 1985, 1986; Jusczyk et al. 1990; Studdert-Kennedy 1986, 1987; Werker 1989). When one considers the data from a wide variety of studies on infant speech perception, it becomes obvious that prelinguistic infants display evidence of a universal sensitivity to phonetically relevant sound contrasts in language (for reviews see Aslin, Pisoni, and Jusczyk 1983; Jusczyk 1985, 1986; Kuhl 1987; Locke 1993; Werker 1989). These findings have demonstrated consistently that infants have the sensory and perceptual prerequisites to eventually acquire the phonetics and phonology of any spoken language (Jusczyk 1993). Unfortunately, relatively few of these infant studies have addressed the broader issues of the nature of the developmental change that takes place when the child begins to acquire the first rudiments of spoken language and reliably starts to assign meanings and communicative intent to sound patterns in his or her environment. Even fewer studies have addressed the issue of the apparent developmental decline in the perceptual abilities of mature adults after they have acquired their native language (see Aslin and Pisoni 1980; Flege 1988; Walley, Pisoni, and Aslin 1981). Some attempts have been made recently by Werker (1989) to deal with developmental change, but these efforts have focused mainly on delineating the phenomena rather than providing theoretical explanations for them. Taking a more mechanistic approach, Jusczyk (1985, 1986) has recently suggested that young infants actively adjust perceptual weights to encode phonologically distinctive information in their environment. His proposal, which we will return to in the next section, places a great deal of emphasis on attentional mechanisms in perceptual development. It provides a unified way of dealing with both the infant data showing universal phonetic sensitivity and the adult data showing perceptual reorganization and developmental decline in phonetic discrimination. Both Ferguson (1986) and Studdert-Kennedy (1987) have remarked recently on the serious gap in our knowledge of the relationship between the infant speech-sound perception data and the child phonology literature. Not only have there been few efforts to relate findings from these two areas, but there has been little useful theoretical work on the nature of the perceptual reorganization that takes place when the child begins to use spoken language in a communicatively relevant way.
Previous accounts of developmental change, such as the one suggested recently by Werker (1989), have focused on a simple dichotomy between language-based and sensory-based processes. However, this is not sufficient to account for the present set of findings with mature adults, who are able to reacquire the perceptual abilities needed to discriminate and identify phonetic contrasts that were not distinctive in their language-learning environment. If mature adults have the underlying sensory abilities needed to discriminate phonetically relevant speech contrasts, as the present findings demonstrate, then why do they apparently have such great difficulty using these abilities when they are called upon to perform tasks that require linguistically relevant perceptual responses? Is it desirable to have two accounts of perceptual change, one for infants and young children acquiring their first language and another for adults reacquiring new phonetic contrasts? Or should we attempt to develop a common approach that is appropriate for both sets of findings?
Parsimony dictates the latter. A single explanation for both types of change would provide the most satisfying account. As we pointed out in the introduction, however, contemporary theories of speech perception were formulated to deal with the mature adult, and little, if any, attention has been devoted to issues of development, developmental change, and second-language acquisition. In the next section, we suggest an approach that can be applied to both the adult and infant perceptual findings. Our basic argument is that these seemingly diverse findings reflect the operation of attentional processes in speech perception that are primarily perceptual, rather than sensory-based, in nature.

The Role of Selective Attention in Speech Perception

Although cognitive psychologists have studied selective attention for many years (Garner 1973), it has only been recently that researchers working in speech perception have acknowledged the importance of attentional processes in identification, categorization, and discrimination of speech sounds (for a review see Nusbaum and Schwab 1986; see also Jusczyk, this volume). In a number of important studies on the identification and categorization of complex multidimensional visual stimuli, Nosofsky (1986, 1987) has recently shown how selective attention to specific stimulus dimensions can modify the underlying psychological space and change the perceived similarity relations among component dimensions in different tasks. According to Nosofsky, selective attention can be thought of in terms of a metaphor that involves the stretching of psychological distances along attended dimensions and the shrinking of distances along unattended dimensions. When subjects are required to attend selectively to one specific stimulus dimension, two events occur. First, attributes on the attended dimensions become more dissimilar to each other. Second, attributes of the unattended dimensions become more similar to each other. Based on several categorization studies, Nosofsky (1986, 1987) has shown that a selective attention strategy to one dimension serves to maximize within-category similarity among exemplars sharing the same dimensions and minimize between-category similarity. Using this strategy, he was able to account for a number of seemingly diverse findings in the categorization-classification literature. One consequence of this view of selective attention for speech perception is that it provides a way to account for the effects of linguistic experience quite easily in terms of modifications in the relative salience of different phonetically relevant dimensions depending on the specific language-learning environment. For example, Terbeek (1977) used a scaling technique to measure the magnitudes of differences between pairs of vowels presented to native speakers of five different languages. He found that prior language experience affected vowel perception by modifying the perceived psychological distances between the vowels. The perceptual distance between a pair of vowels was judged to be much larger if members of the pair contrasted phonologically in the subject's native language than if the pair was not phonologically distinctive in the language. Thus, linguistic experience apparently modifies the underlying psychological scales by altering the similarity relations for different perceptual dimensions. Jusczyk (this volume) has also made the same suggestions based on infant data.
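The stretching-and-shrinking metaphor has a standard formalization in Nosofsky's generalized context model; the equations below are a sketch reconstructed from that literature (Nosofsky 1986), not formulas reproduced from the studies discussed here. The psychological distance between stimuli i and j is an attention-weighted Minkowski metric,

\[
d_{ij} = c \left[ \sum_{k} w_{k}\,\lvert x_{ik} - x_{jk} \rvert^{r} \right]^{1/r},
\qquad w_{k} \ge 0, \quad \sum_{k} w_{k} = 1,
\]

and perceived similarity decays with distance, for example as \( s_{ij} = e^{-d_{ij}} \). Raising the attention weight \( w_{k} \) on a linguistically relevant dimension stretches distances along that dimension, so similarity falls, while a weight near zero shrinks an unattended dimension, so similarity rises; this is exactly the within-category versus between-category trade-off described above.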
Linguistic experience affects perception by modifying attentional processes, which, in turn, affect the underlying perceived psychological dimensions. Viewed in this context, the apparent developmental loss brought about by acquiring a language is not a true sensory-based loss but rather a change in selective attention. In principle, it should be possible to demonstrate that all nonnative speech contrasts can be discriminated reliably by adults in a short period of time using relatively simple laboratory training techniques. Because the underlying sensory abilities are still intact, discrimination training only serves to modify attentive processes that are assumed to be susceptible to realignment (see Aslin and Pisoni 1980). If the changes in perceptual reorganization and the associated developmental loss in speech perception are primarily attentional in nature, then what are the consequences of this approach for discrimination and categorization of nonnative speech contrasts? One consequence of this view is a systematic warping or restructuring of the psychological space,
favoring important distinctive contrasts in a particular language and attenuating cues for noncontrastive distinctions. However, in addition to modifying the perceived psychological spacing among dimensions, there are also changes in the memory representations for the psychologically more salient dimensions. Changes in memory could account for the large and reliable differences observed in perception for within- versus between-category comparisons in speech discrimination tasks (Pisoni 1973, 1975). Similarly, changes in memory representations could also explain the failure of discrimination-training tasks, such as the AX test, which emphasizes very fine within-category acoustic differences, to produce reliable differences in subsequent categorization and identification tasks using real words. Listeners have the underlying sensory mechanisms to make very fine phonetic discriminations, but they cannot develop stable representations in long-term memory that can be used in other tasks that require more abstract memory codes (Pisoni 1973). One additional point about Nosofsky's work on selective attention is relevant here in terms of tests involving transfer and generalization of knowledge gained in training. Following the earlier work of Tversky (1977), Nosofsky (1986, 1987) has shown that similarity relations for multidimensional stimuli are not invariant over tasks or contexts and that attentional processes may operate differently under different experimental procedures. According to this view, similarity is context dependent, and subjects may attend selectively to different dimensions when the cues are in different contexts. One of the most distinctive and pervasive characteristics of speech is its highly context-dependent nature. Given this view of attention and similarity, Strange and Dittmann's (1984) failure to find transfer of training from the pretest to the posttest is not at all surprising. Indeed, the contexts used in training (i.e., synthetic rock-lock) were so different from those used in testing (i.e., minimal pairs of natural speech) that only a very abstract, context-independent learning strategy employing a unit like a phoneme would have produced any positive transfer effects. The data from a large number of theoretical and experimental studies show that, while phonemes may be extremely useful linguistic descriptions, their status in the actual processing of spoken language is problematic.

Segmentation, Perceptual Units, and Language Development

The results of our /r/-/l/ study also raise several theoretical issues about what adult subjects are learning in training experiments. An analysis of both the training data and the pretest-posttest data demonstrates that subjects are learning about phonetic contrasts in a highly context-dependent manner. We found very little evidence that subjects were extracting or using context-independent perceptual attributes or units such as traditional phonemes. If anything, the data demonstrate that subjects were encoding very specific contextual information about the acoustic cues for /r/ and /l/ in the different phonetic environments. While these data are from adult listeners who have already acquired their native language, the results nevertheless raise several important questions about what infants and young children are encoding from the speech in their environment. For example, what type of mental representation does the child develop for phonemic contrasts in his or her native language?
Do these representations retain positional or contextual information? Or are they context-free, abstract units? Further, are attributes of the talker's voice retained? Or do children develop representations that are independent of any particular talker? The available evidence suggests that infants and young children are not breaking down the speech signal into phonemelike segments, but rather, they are responding to much more global characteristics of the similarity structure of the speech that they hear in their environment (Ferguson 1986; Studdert-Kennedy 1987). This suggests that much more information than an abstract code may be retained in memory. Since the initial report by Eimas et al. (1971), there has been a great deal of speculation about precisely what the infant speech perception findings mean and how they may be related to later stages of language development. Some investigators, such as Werker (Werker and Lalonde 1988) and Best, McRoberts, and Sithole (1988), believe that the discrimination data demonstrate that infants are perceiving speech signals in a phonetically relevant manner and that the underlying perceptual abilities will someday be useful for the eventual task of language learning. An example of this line of reasoning follows:

This mapping between biologically given sensitivities and phonetic categories allows the young infant to segment the incoming speech stream into discrete perceptual entities and enables the infant to divide the ongoing and overlapping stream of speech into the units that will be required in the important task of beginning to learn a
language. (Werker and Lalonde 1988, 681)
We believe there are some problems with these claims. Inherent in this line of speculation is the assumption that young infants somehow acquire the sound structure and lexicon of a language by a reductionist, bottom-up strategy that involves the segmentation of the acoustic signal into perceptual entities that correspond to adultlike units such as phones, allophones, phonemes, and words. While such an approach is appealing from an attunement or universal theorist's perspective, it lacks empirical support. These speculations are closely tied to a set of theoretical assumptions derived from an adult model of language development, in which segments and features play an important role. These constructs have been incorporated into accounts of infant speech perception and early lexical development. However, the data from studies of child phonology suggest quite a different picture of development. According to Studdert-Kennedy (1987), the child first uses relatively large, undifferentiated units, such as prosodic contours, to express meanings. When the child begins to use sound patterns in a semantically contrastive way, the units of production are more likely to approximate holistic, wordlike patterns than smaller units, such as segments. Many of the phonetic forms used in these first words are also highly context specific, suggesting that the child has little, if any, awareness or active control over the arrangement and sequencing of the component sounds in these patterns. Given what we currently know about the child's early attempts at speech production, it is possible that children are not actively segmenting the incoming speech signal into relatively small perceptual units for subsequent identification and categorization. Instead, the perceptual process may be just the opposite: the child may move progressively from relatively large, undifferentiated units to smaller, context-dependent units as the lexicon begins to increase in size (see Moskowitz 1973; Studdert-Kennedy 1986). Only when the size of the child's lexicon becomes too large for efficient retrieval will a segmentally based production strategy begin to emerge (Logan 1992). At this point, if the child is able to control and coordinate gestures in speech production, then the sound patterns can function contrastively to express different meanings. Until the lexicon becomes organized, words and larger sound patterns are probably the units common to both perception and production. Viewed in this manner, the extensive literature on infant speech-sound perception merely provides information about the sensory capabilities of young infants and not much more than that. The discrimination findings reported in the literature must, therefore, be interpreted cautiously with regard to any strong claims about their direct relevance to the development of the child's lexicon or the emerging functional use of spoken language as the child matures.

Conclusion

The findings reported in this chapter permit us to make several general conclusions about the effects of laboratory training on speech perception. First, based on the two studies reviewed here, there can be little question that laboratory training procedures can be used to selectively modify the perception of nonnative phonetic contrasts.
The limitations and apparent perceptual difficulties observed with nonnative contrasts in the past appear to be due to selective attention and memory rather than to any basic limitation on the sensory capabilities underlying these particular phonetic contrasts. This rules out strong versions of universal or attunement theory, which posit a permanent loss of sensitivity to nonnative contrasts in adult listeners. Second, the perception of /r/ and /l/ appears to depend quite extensively on the phonetic environment in which the contrast appears. In both training and testing, subjects displayed strong evidence that their encoding and subsequent representations were highly context dependent. We found no evidence to suggest that subjects were attempting to encode these contrasts in terms of some abstract, context-independent perceptual unit like a phoneme or phonetic segment. Third, the learning that we observed was facilitated by the high degree of stimulus variability used during the training
phase. Fourth, the use of nonsense syllables and highly controlled synthetic speech stimuli in past training studies may have placed a number of constraints on subjects' learning strategies. The success of the present /r/ and /l/ study can be attributed, in part, to the use of highly redundant, naturally produced words in which /r/ and /l/ appeared in a wide variety of different phonetic environments. Using these procedures, subjects were able to learn about the range of variability these contrasts might assume in natural-speech tokens. Finally, the results of our training studies are compatible with the view that developmental change and the associated perceptual reorganization in speech perception are primarily due to selective attention rather than to any permanent changes in the underlying sensory mechanisms. According to this view, selective attention to linguistically relevant phonetic contrasts operates by warping the underlying psychological space. For speech contrasts that are distinctive in the language-learning environment, the psychological dimensions are stretched so that important phonetic differences become more salient. For speech contrasts that are nondistinctive, the psychological dimensions are shrunk so that unattended differences become more similar to each other. This view of the role of selective attention in speech perception can accommodate a wide variety of developmental and cross-linguistic data and can provide a psychological basis for the mechanisms underlying perceptual change.

Acknowledgments

Preparation of this chapter was supported by NIDCD research grant R01 DC00111-13 to Indiana University in Bloomington. We thank Dan Dinnsen, Judith Gierut, and Robert Nosofsky for suggestions and advice at various stages of this work.

References

Abramson, A. and Lisker, L. (1970). Discriminability along the voicing continuum: Cross language tests. Proceedings of the 6th international congress of phonetic sciences, Prague, 1967 (pp. 569-573). Prague: Academia.
Aslin, R. N. (1985). Effects of experience on sensory and perceptual development: Implications for infant cognition. In J. Mehler and R. Fox (eds.), Neonate cognition: Beyond the blooming, buzzing confusion (pp. 157-183). Hillsdale, N.J.: Erlbaum.
Aslin, R. N. and Pisoni, D. B. (1980). Some developmental processes in speech perception. In G. Yeni-Komshian, J. Kavanagh, and C. Ferguson (eds.), Child phonology: Perception and production (pp. 67-96). New York: Academic Press.
Aslin, R. N., Pisoni, D. B., Hennessy, B. L., and Perey, A. J. (1981). Discrimination of voice onset time by human infants: New findings and implications for the effects of early experience. Child Development, 52, 1135-1145.
Aslin, R. N., Pisoni, D. B., and Jusczyk, P. W. (1983). Auditory development and speech perception in early infancy. In M. Haith and J. Campos (eds.), Handbook of child psychology, vol. 2: Infancy and developmental psychobiology (pp. 573-687). New York: Wiley.
Best, C. T., McRoberts, G. W., and Sithole, N. M. (1988). Examination of the perceptual re-organization for speech contrasts: Zulu click discrimination by English-speaking adults and infants. Journal of Experimental Psychology: Human Perception and Performance, 14, 245-260.
Burnham, D. K. (1986). Developmental loss of speech perception: Exposure to and experience with a first language. Applied Psycholinguistics, 7, 207-240.
Carney, A., Widin, G., and Viemeister, N. (1977). Noncategorical perception of stop consonants differing in VOT. Journal of the Acoustical Society of America, 62, 961-970.
Dissosway-Huff, P., Port, R., and Pisoni, D. B. (1982). Context effects in the perception of /r/ and /l/ by Japanese. Research on speech perception progress report No. 8. Bloomington, Ind.: Speech Research Laboratory, Department of Psychology, Indiana University.
Durlach, N. I. and Braida, L. D. (1969). Intensity perception: I. Preliminary theory of intensity resolution. Journal of the Acoustical Society of America, 46, 372-383.
Eilers, R. E., Gavin, W. J., and Wilson, W. R. (1979). Linguistic experience and phoneme perception in infancy: A cross-linguistic study. Child Development, 50, 14-18.
Eimas, P. D. (1975). Auditory and phonetic coding of the cues for speech: Discrimination of the [r-l] distinction by young infants. Perception and Psychophysics, 18, 341-347.
Eimas, P. D. (1978). Developmental aspects of speech perception. In R. Held, H. Leibowitz, and H. L. Teuber (eds.), Handbook of sensory physiology, vol. 8: Perception (pp. 357-374). New York: Springer-Verlag.
Eimas, P. D. and Miller, J. L. (1978). Effects of selective adaptation on the perception of speech and visual patterns: Evidence for feature detectors. In R. D. Walk and H. L. Pick Jr. (eds.), Perception and experience (pp. 307-345). New York: Plenum.
Eimas, P. D., Siqueland, E. R., Jusczyk, P. W., and Vigorito, J. (1971). Speech perception in infants. Science, 171, 303-306.
Ferguson, C. A. (1986). Discovering sound units and constructing sound systems: It's child's play. In J. Perkell and D. Klatt (eds.), Invariance and variability in speech processes (pp. 36-51). Hillsdale, N.J.: Erlbaum.
Flege, J. E. (1988). The production and perception of foreign language speech sounds. In H. Winitz (ed.), Human communication and its disorders, vol. 1 (pp. 224-401). Norwood, N.J.: Ablex.
Fodor, J. A. (1983). Modularity of mind. Cambridge, Mass.: MIT Press.
Garner, W. R. (1973). The processing of information and structure. Potomac, Md.: Erlbaum.
Gillette, S. (1980). Contextual variation in the perception of L and R by Japanese and Korean speakers. Minnesota Papers in Linguistics and the Philosophy of Language, 6, 59-72.
Goldinger, S. D. (1992). Words and voices: Implicit and explicit memory for spoken words. Research on speech perception technical report No. 7. Bloomington, Ind.: Speech Research Laboratory, Department of Psychology, Indiana University.
Goldinger, S. D., Pisoni, D. B., and Logan, J. S. (1991). On the nature of talker variability effects on recall of spoken word lists. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 152-162.
Goto, H. (1971). Auditory perception by normal Japanese adults of the sounds "L" and "R." Neuropsychologia, 9, 317-323.
Gottlieb, G. (1981). The roles of early experience in species-specific perceptual development. In R. N. Aslin, J. R. Alberts, and M. R. Peterson (eds.), Development of perception: Psychobiological perspectives, vol. 1: Audition, somatic perception, and the chemical senses (pp. 5-44). New York: Academic Press.
Hubel, D. H. and Wiesel, T. N. (1965). Binocular interaction in striate cortex of kittens reared with visual squint. Journal of Neurophysiology, 28, 1041-1059.
Jamieson, D. G. and Morosan, D. E. (1986). Training nonnative speech contrasts in adults: Acquisition of the English /θ/-/ð/ contrast by francophones. Perception and Psychophysics, 40, 205-215.
Jusczyk, P. (1985). On characterizing the development of speech perception. In J. Mehler and R. Fox (eds.), Neonate cognition: Beyond the blooming, buzzing confusion (pp. 199-229). Hillsdale, N.J.: Erlbaum.
Jusczyk, P. W. (1986). Toward a model of the development of speech perception. In J. Perkell and D. Klatt (eds.), Invariance and variability in speech processes (pp. 1-19). Hillsdale, N.J.: Erlbaum.
Jusczyk, P. W. (1993). Developing phonological categories from the speech signal. In C. A. Ferguson, L. Menn, and C. Stoel-Gammon (eds.), Phonological development: Models, research, implications (pp. 17-64). Timonium, Md.: York Press.
Jusczyk, P., Bertoncini, J., Bijeljac-Babic, R., Kennedy, L., and Mehler, J. (1990). The role of attention in speech perception by young infants. Cognitive Development, 5, 265-286.
Klatt, D. H. (1988). Review of selected models of speech perception. In W. Marslen-Wilson (ed.), Lexical representations and process. Cambridge, Mass.: MIT Press.
Kuhl, P. K. (1987). Perception of speech in early infancy. In P. Salapatek and L. Cohen (eds.), Handbook of infant perception, vol. 2 (pp. 275-381). New York: Academic Press.
Kuhl, P. K. and Miller, J. D. (1975). Speech perception by the chinchilla: Voiced-voiceless distinction in alveolar plosive consonants. Science, 190, 69-72.
Lane, H. L. (1965). The motor theory of speech perception: A critical review. Psychological Review, 72, 275-309.
Lasky, R. E., Syrdal-Lasky, A., and Klein, R. E. (1975). VOT discrimination by four to six and a half month old infants from Spanish environments. Journal of Experimental Child Psychology, 20, 215-225.
Liberman, A. M., Cooper, F. S., Shankweiler, D., and Studdert-Kennedy, M. (1967). Perception of the speech code. Psychological Review, 74, 431-461.
Liberman, A. M., Harris, K., Hoffman, H., and Griffith, B. (1957). The discrimination of speech sounds within and across phoneme boundaries. Journal of Experimental Psychology, 54, 358-368.
Lisker, L. (1970). On learning a new contrast. Status report on speech research (SR-24). New Haven, Conn.: Haskins Laboratories.
Lisker, L. and Abramson, A. (1964). A cross language study of voicing in initial stops: Acoustical measurements. Word, 20, 384-422.
Lisker, L. and Abramson, A. (1970). The voicing dimension: Some experiments in comparative phonetics. Proceedings of the 6th international congress of phonetic sciences, Prague, 1967 (pp. 563-567). Prague: Academia.
Lively, S. E., Pisoni, D. B., and Logan, J. S. (1991). Some effects of training Japanese listeners to identify English /r/ and /l/. In Y. Tohkura (ed.), Speech perception, production and linguistic structure (pp. 175-196). Tokyo: OHM Publishing.
Locke, J. L. (1993). The child's path to spoken language. Cambridge, Mass.: Harvard University Press.
Logan, J. S. (1992). A computational analysis of young children's lexicons. Research on spoken language processing technical report No. 8. Bloomington, Ind.: Speech Research Laboratory, Department of Psychology, Indiana University.
Logan, J. S., Lively, S. E., and Pisoni, D. B. (1991). Training Japanese listeners to identify /r/ and /l/: A first report. Journal of the Acoustical Society of America, 89(2), 874-886.
McClasky, C. L., Pisoni, D. B., and Carrell, T. D. (1983). Transfer of training of a new linguistic contrast in voicing. Perception and Psychophysics, 34, 323-330.
MacKain, K., Best, C., and Strange, W. (1981). Categorical perception of English /r/ and /l/ by Japanese bilinguals. Applied Psycholinguistics, 2, 369-390.
Martin, C. S., Mullennix, J. W., Pisoni, D. B., and Summers, W. V. (1989). Effects of talker variability on recall of spoken word lists. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15, 676-684.
Miyawaki, K., Strange, W., Verbrugge, R., Liberman, A., Jenkins, J., and Fujimura, O. (1975). An effect of linguistic experience: The discrimination of /r/ and /l/ by native speakers of Japanese and English. Perception and Psychophysics, 18, 331-340.
Mochizuki, M. (1981). The identification of /r/ and /l/ in natural and synthesized speech. Journal of Phonetics, 9, 283-303.
Moskowitz, A. I. (1973). The acquisition of phonology and syntax. In G. Hintikka, J. Moravcsik, and P. Suppes (eds.), Approaches to natural language (pp. 48-84). Dordrecht: Reidel.
Mullennix, J. W., Pisoni, D. B., and Martin, C. S. (1989). Some effects of talker variability on spoken word recognition. Journal of the Acoustical Society of America, 85, 365-378.
Nosofsky, R. M. (1986). Attention, similarity, and the identification-categorization relationship. Journal of Experimental Psychology: General, 115, 39-57.
Nosofsky, R. M. (1987). Attention and learning processes in the identification and categorization of integral stimuli. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 700-708.
Nusbaum, H. C. and Schwab, E. C. (1986). The role of attention and active processing in speech perception. In E. C. Schwab and H. C. Nusbaum (eds.), Pattern recognition by humans and machines, vol. 1: Speech perception (pp. 113-157). New York: Academic Press.
Nygaard, L. C., Sommers, M., and Pisoni, D. B. (in press). Speech perception as a talker-contingent process. Psychological Science.
Pisoni, D. (1973). Auditory and phonetic codes in the discrimination of consonants and vowels. Perception and Psychophysics, 13, 253-260.
Pisoni, D. B. (1975). Auditory short-term memory and vowel perception. Memory and Cognition, 3, 7-18.
Pisoni, D. B. (1977). Identification and discrimination of the relative onset of two component tones: Implications for voicing perception in stops. Journal of the Acoustical Society of America, 61, 1352-1361.
Pisoni, D. B. (1978). Speech perception. In W. K. Estes (ed.), Handbook of learning and cognitive processes, vol. 6 (pp. 167-233). Hillsdale, N.J.: Erlbaum.
Pisoni, D. and Lazarus, J. (1974). Categorical and noncategorical modes of speech perception along the voicing continuum. Journal of the Acoustical Society of America, 55, 328-333.
Pisoni, D. and Luce, P. (1987). Acoustic-phonetic representations in word recognition. Cognition, 25, 21-52.
Pisoni, D. B. and Tash, J. (1974). Reaction times to comparisons within and across phonetic categories. Perception and Psychophysics, 15, 285-290.
Pisoni, D., Aslin, R., Perey, A., and Hennessy, B. (1982). Some effects of laboratory training on identification and discrimination of voicing contrasts in stop consonants. Journal of Experimental Psychology: Human Perception and Performance, 8, 297-314.
Posner, M. and Keele, S. (1968). On the genesis of abstract ideas. Journal of Experimental Psychology, 77, 353-363.
Sheldon, A. and Strange, W. (1982). The acquisition of /r/ and /l/ by Japanese learners of English: Evidence that speech production can precede speech perception. Applied Psycholinguistics, 3, 243-261.
Shiffrin, R. M. (1976). Capacity limitations in information processing, attention, and memory. In W. K. Estes (ed.), Handbook of learning and cognitive processes, vol. 4 (pp. 177-236). Hillsdale, N.J.: Erlbaum.
Stevens, K. N. (1972). The quantal nature of speech: Evidence from articulatory-acoustic data. In E. E. David and P. B. Denes (eds.), Human communication: A unified view (pp. 51-66). New York: McGraw-Hill.
Stevens, K. N. (1980). Acoustic correlates of some phonetic categories. Journal of the Acoustical Society of America, 68, 836-842.
Strange, W. (1972). The effects of training on the perception of synthetic speech sounds: Voice onset time. Unpublished doctoral dissertation, University of Minnesota.
Strange, W. and Dittmann, S. (1984). Effects of discrimination training on the perception of /r-l/ by Japanese adults learning English. Perception and Psychophysics, 36, 131-145.
Strange, W. and Jenkins, J. (1978). Role of linguistic experience in the perception of speech. In R. D. Walk and H. L. Pick (eds.), Perception and experience (pp. 125-169). New York: Plenum Press.
Streeter, L. A. (1976a). Kikuyu labial and apical stop discrimination. Journal of Phonetics, 4, 43-49.
Streeter, L. A. (1976b). Language perception of 2-month-old infants shows effects of both innate mechanisms and experience. Nature, 259, 39-41.
Studdert-Kennedy, M. (1986). Sources of variability in early speech development. In J. Perkell and D. Klatt (eds.), Invariance and variability in speech processes (pp. 58-76). Hillsdale, N.J.: Erlbaum.
Studdert-Kennedy, M. (1987). The phoneme as a perceptuomotor structure. In A. Allport, D. MacKay, W. Prinz, and E. Scherer (eds.), Language perception and production (pp. 67-84). London: Academic Press.
Tees, R. and Werker, J. F. (1984). Perceptual flexibility: Maintenance or recovery of the ability to discriminate nonnative speech sounds. Canadian Journal of Psychology, 34, 579-590.
Terbeek, D. A. (1977). A cross-language multi-dimensional scaling study of vowel perception. Working Papers in Phonetics (University of California at Los Angeles), 37.
Tversky, A. (1977). Features of similarity. Psychological Review, 84, 327-352.
Walley, A., Pisoni, D. B., and Aslin, R. N. (1981). The role of early experience in the development of speech perception. In R. N. Aslin, J. R. Alberts, and M. R. Peterson (eds.), Development of perception: Psychobiological perspectives, vol. 1: Audition, somatic perception, and the chemical senses (pp. 219-255). New York: Academic Press.
Werker, J. F. (1989). Becoming a native listener. American Scientist, 77, 54-59.
Werker, J. F. and Lalonde, C. (1988). Cross-language speech perception: Initial capabilities and developmental change. Developmental Psychology, 24, 672-683.
Werker, J. F. and Tees, R. (1984). Phonemic and phonetic factors in adult cross-language speech perception. Journal of the Acoustical Society of America, 75, 1866-1878.
Chapter 6
The Emergence of Native-Language Phonological Influences in Infants:
A Perceptual Assimilation Model

Catherine T. Best

When we hear words from an unfamiliar language spoken by a native of that language, we often have difficulty perceiving the phonetic differences among contrasting consonant (or vowel) sounds that are not distinct phonemes in our own language. Of course, we experience no difficulty with phones that are very similar to our own native phonemes. Very young infants, however, discriminate not only the segmental contrasts of their native language but many nonnative contrasts as well. That is, they are apparently unfettered by the phonological constraints of their language environment.1 Moreover, young children typically come to perceive and produce with relative ease just those phones that the language of their community uses. It is apparent, then, that the phonology of the native language comes to exert substantial influence on speech perception and production during development. As will be discussed later in the chapter, the nature of the experiential effect on perception of nonnative segments appears to be largely an adjustment of selective attention rather than a permanent revision of the initial state of sensory-neural mechanisms. The effect of language experience is neither absolute in extent nor irremediable in adulthood, and it varies in degree among specific types of nonnative contrasts and among individuals (e.g., Best, McRoberts, and Sithole 1988; Flege 1988; MacKain, Best, and Strange 1981; Tees and Werker 1984; Werker and Tees 1984b; Pisoni, Lively, and Logan, this volume). When and how does the language environment come to influence the perception of phones that are not contrasting phonemes in the native sound system? And how might that developmental transition provide insight into the acquisition of the native phonological system? In particular, how do young listeners come to recognize the way in which their language organizes disparate phonetic details into phonemic categories to serve distinctly linguistic functions? These are the central issues to be examined in this chapter (see also Werker, this volume). The focus will be on segmental rather than suprasegmental contrasts, particularly on consonants rather than vowels and on nonnative rather than native contrasts. Thus, I will examine developmental reorganization in perception of phonetic differences among consonant sounds that do not signal a linguistic distinction in the native language but that infants in the early months can discriminate. An underlying assumption of this chapter is that developmental change in perception of nonnative contrasts reflects concomitant changes in the way the child perceives the sound structure of the native language, whether at the segmental or the prosodic level. Current findings suggest that infants begin life with language-universal abilities for discriminating segmental phonetic contrasts (i.e., they are not yet perceptually constrained by the phonemic contrasts of their language environment: see note 1) but that, by the second half-year of life, listening experience with the native language has begun to influence the perception of contrasts that are nondistinctive in the native phonological system (Werker 1989; Werker and Lalonde 1988; Werker and Tees 1984a; Werker et al. 1981; see Werker, this volume). Recent findings from my own laboratory (Best, McRoberts, and Sithole 1988; Fowler, Best, and McRoberts 1990) may illuminate the means by which language-particular experience exerts its effect on perception.
Specifically, the influence of the native phonological system on adult listeners entails the perceptual assimilation of nonnative phones to the native phoneme categories with which those nonnative phones share the greatest similarity in phonetic characteristics. However, our findings with infants suggest that the developmental change in perception of nonnative contrasts during the second half of the first year does not yet involve the mature pattern of assimilation to native phonemes. Based on these findings, I propose that the native-language influence on perception of nonnative phonetic contrasts begins with the older infant's emerging recognition that native speech sounds are structured as specific, recurring constellations, or patterns, of coordination among phonetic-articulatory gestures (e.g., the pattern of temporal coordination between the glottal-adduction gesture to begin voicing and the bilabial-release gesture that characterizes English /p/). As the older infant begins to recognize these gestural constellations in ambient speech, he or she may detect similar patterns in some nonnative phones but may be unable to do so with others. This approach to understanding the influence of the language environment
on phonemic development may also help elucidate certain aspects of phonological behavior in early speech productions, as suggested by acoustic-phonetic analyses from a case study on early word imitations to be presented in the final section of the chapter. It should be noted that, while this chapter focuses on consonant contrasts, language-particular gestural constellations are assumed not to be restricted to phonemic segments but to extend also to syllables and to units of meaning in speech (e.g., morphemes).

Language Specificity in Phonology

Languages vary in their phonological systems. Of specific interest to the present discussion, they differ in their inventories of phonemic contrasts, which are defined as segment-sized constellations of phonetic properties that have become linguistically distinctive because they are used systematically to convey differences in word meanings. For example, the lexicon of many languages, including Japanese and Korean, lacks the liquid consonant contrast /l/-/r/ found in English. Likewise, the English vowel contrast /I/-/e/ (as in <bit>-<bet>) is absent from Spanish, Italian, and numerous other languages. In turn, English lacks many phonemic contrasts found in other languages, such as the ejective stops found in Ethiopian Tigrinya and elsewhere, the click-consonant contrasts of African Bantu languages including Zulu, and the front-rounded vowels of German, French, and Swedish. Even in cases where languages share a phonemic contrast, the phones involved often differ between languages in their articulatory details, that is, in the exact phonetic realization of how those phonemes are produced. To illustrate, English and French both use the phonemic contrast /b/-/p/ to mark lexical distinctions (e.g., English, <bat>-<pat>; French, <bain>-<pain>). Yet, in the English phoneme /b/, glottal pulsing (voicing) may begin either simultaneously with the release of the bilabial closure, the phone [b] in International Phonetic Alphabet (IPA) transcription, or slightly after the release, the short-lag voiceless-unaspirated [p]. In the English phoneme /p/, voicing instead begins either after a longer postrelease lag, which is aspirated as an aerodynamic result of the lag in glottal adduction ([pʰ]), or after a shorter lag (unaspirated [p]) in certain positions (e.g., following <s>). In French, however, the phoneme /b/ is realized consistently as [b] across contexts, while /p/ is phonetically realized consistently as the short-lag voiceless-unaspirated [p]. Thus, French fails to define a phonemic distinction between the phones [p]-[pʰ] as English does, whereas English fails to define a phonemic contrast between [b]-[p] as French does. By extension, languages may share a single phoneme which nonetheless differs phonetically between the languages, as with the phoneme /r/ found in both American English (phonetically realized as the liquid-approximant phone [ɹ]) and in Spanish (realized as an alveolar tap or trill [r]). Furthermore, within a language, there are often striking dialectal differences in the phones that typically realize a given phoneme (i.e., allophones), as with /r/, which is pronounced as [ɹ] in American English but is produced as a tapped or trilled [r] in Scottish English. Even within a dialect, a phoneme may have allophonic variants.
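The language-particular mapping from voice onset time (VOT) to phones and phonemes described above can be summarized in a small sketch. The code and its category boundary near +30 ms are rough illustrative assumptions (standard textbook approximations), not measurements or procedures from this chapter.

# Hypothetical sketch: the same VOT value (in ms) maps onto different
# phonemes in English and French; +30 ms is an assumed short-lag/long-lag
# boundary, and [ph] stands for the aspirated phone.
def phone(vot_ms):
    if vot_ms < 0:
        return "[b]"      # prevoiced: voicing begins before the release
    elif vot_ms < 30:
        return "[p]"      # short-lag, voiceless unaspirated
    else:
        return "[ph]"     # long-lag, aspirated

def english_phoneme(p):
    # English /b/ covers both [b] and short-lag [p]; [ph] realizes /p/
    return "/b/" if p in ("[b]", "[p]") else "/p/"

def french_phoneme(p):
    # French /b/ is consistently [b]; short-lag [p] realizes /p/
    return "/b/" if p == "[b]" else "/p/"

for vot in (-80, 15, 70):
    p = phone(vot)
    print(vot, p, english_phoneme(p), french_phoneme(p))
# The short-lag token (15 ms) is assigned to /b/ by the English mapping but
# to /p/ by the French mapping: one phone, two different phonemes.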
Allophonic variants may vary either systematically, dependent on their phonetic context (context-conditioned allophones, such as the American English /k/ realized as [kʰ] word-initially but as [k] following <s>), or independent of context (in free variation: e.g., final /d/ may be either released or unreleased in American English). Thus, languages and dialects use but a subset of the phonetic gestures of which the human vocal tract is capable, and they differ in how they relate those articulatory details to phonemic distinctions. These types of language-particular phonological characteristics are known to influence speech production, the degree of the experience-based effect varying with development. The phonetic details of the native-language (L1) phonology are strongly ingrained in the production patterns of mature speakers, who speak like natives not only in their choice of words but also in the accent of their speech. A corollary influence of the native phonology is that adults usually maintain an L1 accent when they learn to speak a new language (L2) and typically find it quite difficult to produce L2 with fully correct phonetic details. However, normal young children rather quickly learn to speak the language of their community with native accents; unlike adults, they usually can acquire additional languages prior to 5-6 years of age with little or no trace of accent from L1 phonology (Brière 1966; Flege 1987; Flege, McCutcheon, and Smith 1987; Oyama 1976; Tahta, Wood, and Loewenthal 1981; see review, Flege 1990).
Language-specific Influences on Adult Speech Perception

But what might these observations suggest regarding the listener's perception of native and nonnative speech sounds? Given the relative ease with which children learn to speak the ambient language(s) with appropriate phonetic detail (i.e., both L1 and L2), they certainly recognize the articulatory properties of the speech around them. Furthermore, because this occurs regardless of which particular language is being learned, we can assume that, at least at some time during the developmental process of language learning, the auditory system must be capable of physiological sensory registration of the acoustic results of the phonetic gestures employed by natural languages.2 But something changes developmentally with respect to the perceptual recognition of the organization of phonetic-articulatory details within phoneme categories. Whether or not sensory registration changes as a result of auditory exposure to particular sound patterns, perceptual recognition ability certainly does change as a result of the child's listening experience with a particular language. It is the latter, not the former, type of developmental change that is of concern in this chapter, which addresses the question, what is the nature of the developmental change in speech perception? One possibility is that the perceptual change results from experience with producing the sounds of L1. But a potential problem with that possibility is the paradoxical implication that correct performance can precede competence. Alternatively, the effect of the native language on perception could be independent of its influence on production. For instance, while developmental changes in ease of producing nonnative sounds with correct phonetic detail might result from a history of differential articulatory practice, speech perception could remain unaffected by L1 acquisition. Although existing evidence suggests there may indeed be disparities between the improvements in production and in perception of nonnative contrasts by adult L2 learners (Flege 1988; Goto 1971; Sheldon and Strange 1982), there have been no direct tests of the perception-production relation in L2 acquisition during the young child's sensitive period for learning an L2 without an L1 accent (see Flege 1987). Regardless of the relation between perception and production in acquisition of L2, perceptual research has shown that the phonological characteristics of the native language do indeed influence the perceptual tendencies of mature language users. Monolingual speakers of languages with differing phoneme inventories and/or with differing phonetic realizations of a given phoneme show language-particular differences in perceptual sensitivities to native versus nonnative contrasts. The cross-language pattern of variations in discrimination performance on the contrasts tested has been generally consistent with the phonemic inventories of the languages studied. For example, monolingual adult speakers of English and Thai show language-appropriate differences in their perceptual boundaries between voice onset time (VOT) categories along synthetic stop-consonant continua (Lisker and Abramson 1970), monolingual Korean and Japanese speakers have difficulty discriminating the English /l/-/r/ contrast (Gillette 1980; Goto 1971; Miyawaki et al.
and monolingual English speakers have difficulty discriminating the Czech voiced retroflex-alveolopalatal fricative contrast (Trehub 1976) as well as several Hindi and Native American contrasts that are not used in English (Tees and Werker 1984; Werker and Tees 1984b). It might be that the difficulties adults have with discriminating the pair members of many nonnative contrasts reflect a permanent, absolute loss of sensory-neural sensitivity to the acoustic properties of those contrasts or to their linguistic properties (Eimas 1978) due to a lack of environmental exposure during a critical or sensitive period of development (see Aslin and Pisoni 1980b). However, lack of a phonemic contrast (i.e., linguistic information) need not imply lack of exposure (i.e., acoustic or phonetic data). Several factors militate both against an absolute lack of exposure to the nonnative phonetic properties and against absolute sensory-neural loss. The absence of a given contrast does not assure that the environment is devoid of the crucial acoustic and/or phonetic properties of the phonemes involved, since those patterns may be present in allophonic variants (MacKain 1982) or in nonspeech sounds (Best, McRoberts, and Sithole 1988). Moreover, perceptual recognition of phonetic organization in unfamiliar speech sounds can remain open to change even in adults learning a new language or a new dialectal accent, although the extent of malleability may be more limited than in early childhood (Flege 1990).
Discrimination of some nonnative contrasts may be quite good even without training (Best, McRoberts, and Sithole 1988) or with only minimal training (Werker and Tees 1984b). Even for those nonnative contrasts that are initially difficult for the listener, discrimination may become better, with some apparent limits in degree and in generalization across phoneme contexts, through extensive naturalistic conversational experience with L2 (MacKain, Best, and Strange 1981; Mochizuki 1981), through more extensive laboratory training (McClaskey, Pisoni, and Carrell 1983; Pisoni et al. 1982; Strange and Dittmann 1984; Pisoni, Lively, and Logan, this volume), or under listening conditions that minimize memory demands and/or phonemic-level perceptual constraints (Werker and Logan 1985; see also Carney, Widin, and Viemeister 1977; Pisoni and Lazarus 1974). It is relevant here to note that there are also individual differences in the degree of difficulty listeners have with a given nonnative contrast (MacKain, Best, and Strange 1981; Mann 1986; Strange and Dittmann 1984). Finally, even when listeners have difficulty discriminating a particular nonnative contrast, they have been shown to discriminate the critical acoustic features when these are isolated from the speech context (Miyawaki et al. 1975; Werker and Tees 1984b).

These observations indicate that the effect of the native language on perception of nonnative contrasts is neither absolute nor permanent and, hence, cannot be fully accounted for by sensory-neural mechanisms. Rather than causing a sensory-neural loss of sensitivity to nonnative distinctions, the native language most likely promotes an adjustment of attention to language-particular, linguistic characteristics of speech signals, especially when the listener is focused at the level of phonemic information (Werker and Logan 1985). Although the change appears to be attentional and somewhat malleable rather than strictly physiological and permanent, the facts remain that adults have initial difficulty with many (although, as I will show, not all) nonnative contrasts, and that there are constraints on perceptual plasticity (Strange and Dittmann 1984; MacKain, Best, and Strange 1981; Tees and Werker 1984; Pisoni, Lively, and Logan, this volume). This pattern of language-particular attunement from infancy to adulthood is consistent with the observation that perceptual learning generally involves a shift of attention away from irrelevant stimulus information, as well as an increase in the ability to discover and recognize functionally relevant higher-order patterns of stimulus organization (Gibson 1966; Gibson and Gibson 1955). How do these phonological influences of the native language on speech perception arise developmentally?

Development of Phonemic Influences on Perception

In order to understand and speak the language of their environment, infants must come to perceive the phonetic, that is, articulatory and/or acoustic, properties that define the phonological organization of that language. Such language-particular perceptual attunement is essential in guiding the child's own productions, if she or he is eventually to reproduce the speech patterns of other members of the language community. The universal developmental sequence and the intrinsically motivated nature of normal language acquisition suggest that infants are well equipped to begin making this sort of perceptual adjustment.
That suggestion is also borne out by two aspects of research findings on infant speech perception, which will be discussed in more detail below. First, studies on the discrimination of both native and nonnative segmental contrasts suggest that during infants' early months, perception of segmental contrasts is largely unaffected by the language of their community, although some native-language prosodic influences may appear earlier than do segmental influences (Best 1991a; Mehler et al. 1988; this volume; cf. Best, Levitt, and McRoberts 1991). Young infants' perception of segmental contrasts may reflect general prelinguistic abilities that are not yet constrained by the properties of their specific language environment (see note 1). This does not imply that infants possess an innate ability to discriminate all phonetic contrasts from all languages but only that perceptual successes and failures during the first few months cut across both native and nonnative segmental contrasts, revealing a pattern quite different from the native-language constraints seen in adults. Second, infants' phonetic perception does begin to show clear influences of the native language at least by the second half-year. What information do infants initially perceive in speech sounds, and what are they beginning to recognize about higher-order
organization in speech as they approach acquisition of their native language around the end of the first year? As they become language users, infants must move from detecting only general information in speech (e.g., simple phonetic properties) to recognizing and producing various language-particular functional elements (e.g., words and phonemes) carried in the signal. Several key questions about this developmental transition must be addressed. Is the general information in speech that is perceptually accessible to the young infant linguistic or nonlinguistic in nature? If the infant initially perceives nonlinguistically, how does the developmental shift to perception of linguistic information take place? And finally, regardless of whether the initially detected information is linguistic or nonlinguistic, how does the infant come to recognize language-particular structural organization in the speech she or he hears? The next section will summarize the primary contemporary theoretical views and empirical investigations regarding the initial state of infant speech perception and the emergence of language-particular influences.

Development of Infant Speech Perception

Theoretical Perspectives

There are three general theories of speech perception, all of which carry implications about perceptual development. One view differs from the other two in its assumptions about the nature of the information that the perceiver initially apprehends in speech; that is, it assumes a different immediate object of perception, a different set of perceptual primitives. That approach, referred to here as the psychoacoustic theory, posits that the immediate object of speech perception, and hence of the infant's perceptual learning, is the proximal stimulus, or the raw acoustic components into which the speech signal is assumed to be decomposed by the auditory periphery. This view assumes that the perceptual primitives for speech perception are an array of intrinsically meaningless, simple acoustic features, such as spectral distribution patterns, bursts of band-limited aperiodic noise, and temporally defined silent gaps, into which the speech signal can be analyzed (Aslin, Pisoni, and Jusczyk 1983; Diehl and Kluender 1989). The psychoacoustic primitives are thus analogous in nature to the simple, two-dimensional visual features, such as edges, lines, angles, and spatial frequency components, that are often described for instantaneous two-dimensional retinal images of visible patterns. By contrast, both the current motor theory (Liberman and Mattingly 1985, 1989; Mattingly and Liberman 1988) and the ecological theory of speech (Best 1984; Fowler 1986, 1989, 1991; Fowler, Best, and McRoberts 1990; Fowler and Dekle 1991; Fowler and Smith 1986; Rosenblum 1987; Studdert-Kennedy 1985, 1986a, 1989, 1991) argue that the immediate objects of speech perception are the distal events that shaped the signal (see Gibson 1966, 1979). These two views assume that the perceptual primitives for speech perception are the articulatory gestures of the vocal tract, that is, the formation and release of constrictions by diverse articulators at various positions along the vocal tract (see treatments of articulatory phonological theory by Browman and Goldstein 1986, 1989, in press).
Speech signals directly provide articulatory-gestural information because their complex, time-varying patterns are lawfully shaped according to the principles of acoustic physics (Fant 1960) and by the physical structure of the vocal tract and its dynamic gestures (e.g., bilabial closure, velum lowering, and glottal opening). Gestural information, then, is present in the complex organization of the speech signal as it changes over time, certainly to no less an extent than pure acoustic features are present in the signal. The motor theory and the ecological view both assume that it is the gestural information that is directly extracted from speech signals and that this information is not built up from an analysis of simple acoustic features. Thus, these views require no intervening mental step to translate raw acoustic features into gestural patterns. The point of contention among the theories with respect to the perceptual primitives for speech perception, then, is whether the information that the perceiver extracts directly from speech consists of pure, simple acoustic features or rather of dynamic articulatory patterns. The psychoacoustic approach assumes
that the acoustic pressure wave is decomposed into simple, meaningless features, which serve as the immediate object of perception. However, the motor theory and the ecological view regard the acoustic waveform as but one of the energy media (along with the dynamic optical patterns of visible articulations and even the haptic patterns of manually felt articulations) that are shaped by and carry information about distal vocal tract gestures, which are the immediate objects of perception.

As background for the remainder of this chapter, a brief overview will be given of some of the primary differences and similarities of these three models. An in-depth comparative analysis of the three models, however, is beyond the scope of this chapter. A number of existing sources provide detailed treatments of each of the theoretical positions and of the debates among them; readers interested in more extended discussions of the logical and empirical grounds for each viewpoint are directed to Diehl and Kluender (1989) for an examination of the psychoacoustic approach, to Liberman and Mattingly (1985, 1989) for the presentation of the motor theory, and to Fowler (1986; see also her reply to Diehl and Kluender 1989) and Best (1984) for discussions of the ecological account, which are based on Gibson's (1979) ecological theory of perceptual systems in general.

The three speech perception theories differ with respect to how the primitives of perception are assumed to be related to the linguistic entities represented in the speech signal, for example, phonemic segments. According to the psychoacoustic perspective, the infant must ultimately learn to associate combinations of acoustic features, which are intrinsically meaningless and nonlinguistic, with the linguistic entities of words and phrases (meaningful units), as well as with the syllables and phonemes (structural units) that may be recombined to convey different meanings (see Jusczyk 1981, 1986, this volume; Pisoni, Lively, and Logan, this volume). Thus, the infant must develop auditory templates or prototypes that become paired associates of abstract linguistic entities. The ecological and the motor-theory perspectives assume, alternatively, that the infant must discover which particular temporospatial constellations of articulatory gestures are employed as specific linguistic elements (words, phonemes, etc.) in their native language, such as the temporal relation between bilabial closure and glottal opening at the beginning of American English words like "peak" and "pat". Unlike the motor theory, however, and analogous to the psychoacoustic view, the ecological view assumes that the information young infants initially detect in the speech signal is nonlinguistic. According to the ecological approach, the sort of distal articulatory information detected by the prelinguistic infant is initially devoid of linguistic relevance for them. Young infants, and indeed other animals, presumably also detect analogous distal event information in other environmental sounds and sights (see Best 1984; Fowler, Best, and McRoberts 1990; Studdert-Kennedy 1986a). For example, recent studies show that young infants can recognize lawful temporal macrostructural (rhythmic) as well as microstructural (object composition) commonalities between the sights and sounds of single versus multiple marbles being turned back and forth inside plexiglass containers (Bahrick 1987).
They come to recognize the intermodal relations on the basis of physically lawful relationships between the objects/events and the corresponding sounds, and they show no evidence of learning intermodal matching when sight and sound are only arbitrarily associated (Bahrick 1988). Also, adults' perceptions of auditory nonspeech events involving steel balls rolling down two-part runways with differing slopes are consistently determined by the dynamic properties of the distal events, in ways that cannot be reconciled with psychoacoustic principles such as auditory contrast (Fowler 1991). With respect to speech, the human child differs from other animals because, at some point in development, she or he begins to discover sound-meaning correspondences between higher-order patterns of articulatory gestures and specifically linguistic functional elements, such as referents for objects, events, and people and their interactive relationships. The child discovers that mature speakers organize their articulatory gestures into systematic, recurring constellations in order to convey different meanings. These gestural constellations are the physical instantiations of the multiple, nested levels of linguistic organization in speech (phonemes, morphemes, words, phrases, and sentences), all of which are specific to the language environment (see also Best 1984). Thus, the ecological view and the psychoacoustic view put forward the notion that the basis for speech perception is initially nonlinguistic in nature, sharing common ground with the perception of nonspeech sounds, and that speech perception
must then shift developmentally to a linguistic basis. However, the two views obviously differ with respect to the nature of the nonlinguistic information the infant is presumed to derive initially from speech and as to the means by which the infant is presumed to make the developmental shift to a linguistic basis for speech perception.

The current motor theory (Liberman and Mattingly 1985, 1989; Mattingly and Liberman 1988) differs from the ecological theory in two ways. It assumes that even the simple articulatory gestures that infants perceive in speech are linguistic in nature, specifically that they are phonetic, and that these gestures are detected from the outset via a biologically specialized language module in the human brain that is independent of the general neural mechanisms involved in the perception of all other, nonlinguistic information. The detection of distal event information is a characteristic of specialized, or closed, modules (see Fodor 1983) but not of general, or open, modules, such as the general auditory system that handles perception of other types of sounds. Here, again, the motor theory differs from the basic assumptions of both the psychoacoustic model and the ecological approach. The motor theory assumes that perception of nonspeech auditory patterns presumably proceeds, more or less, in the manner described by the psychoacoustic view rather than that described by the ecological view; that is, nonspeech auditory perception is assumed to begin with detection of the proximal acoustic properties at the auditory periphery, which does not involve detection of distal event information. According to the motor theory, the task of the uniquely human phonetic module is to relate the articulatory gestures detected in speech to the more abstract phonological units and to translate abstract structures, such as words, into neuromotor commands for producing specific utterances.

The view of the development of speech perception and production that is taken in this chapter follows the ecological approach. It begins with the premise that the articulatory-gestural properties of ambient speech serve as the primitives for the infant's task of learning to use speech as a tool for communicative purposes within a particular language. Analogous to the way we tend to perceive the characteristics of a physical tool (e.g., an adze) in terms of possible goal-related actions with that tool (i.e., its affordances: Gibson 1966, 1979), learning about speech must entail a link between perception and action (speech production) in the context of affordances (outcomes that the speaker-listener perceives can be accomplished by vocal communication, such as shared games or positive emotional interactions; see also Dent 1990) in order for the child to become a speaker of a particular language. Thus, the immediate object of the infant's perception of speech is the pattern of articulatory gestures that shaped the signal. These gestures are the first and foremost properties that the infant must recognize so that she or he can come to use the vocal tract as a tool for language-specific communicative purposes.

The notion that gestural information is the basis of infant speech perception is supported by evidence that 4- to 6-month-old infants match visual and auditory speech patterns in bimodal perception studies (Kuhl and Meltzoff 1982, 1984; MacKain et al. 1983)
and that even younger infants show perceptual compensation for coarticulatory information in speech (Fowler, Best, and McRoberts 1990) according to the same pattern found in adults (Mann 1980, 1986). The latter reports considered several likely psychoacoustic explanations for the perceptual shifts observed in differing coarticulatory contexts, including auditory contrast and critical band effects, but rejected those accounts on both logical and empirical grounds in favor of an articulatory basis for the perceptual patterns.3 As for the bimodal speech perception findings, the psychoacoustic account that has been offered is that the match between the optical pattern and the acoustic pattern could have been learned by association. Although no published studies have tested the association-learning account in infants, two sets of findings with adults militate against an associationist explanation. Specifically, synchronized but incongruent audiovisual speech stimuli often yield singular phonetic percepts that correspond to neither the audio nor the video signal and thus could not have been learned by direct association (MacDonald and McGurk 1978; McGurk and MacDonald 1976; cf. Massaro 1987). Even more damaging to the associationist account are the recent findings of Fowler and Dekle (1991). In that study, subjects were tested for the McGurk-MacDonald type of cross-modal phonetic percepts under incongruous auditory-haptic presentations. The subjects' fingers rested against the silently moving lips of an unseen face while they were played a synchronous, but phonetically incongruent, auditory token. They were also tested bimodally with incongruous
auditory-orthographic presentations. Haptically felt information about lip movements during speech is the lawful outcome of those speech movements, just as the acoustic speech signal is the lawful outcome of those same articulatory gestures, whereas orthographic symbols relate to phonetic segments by convention, that is, by arbitrary association. Although the subjects had no prior experience with haptically felt speech and, hence, could not have formed previous auditory-haptic associations, they had many years of reading experience founded explicitly on repetitive arbitrary association between phonemes and specific orthographic symbols. Nonetheless, these subjects showed cross-modal phonetic percepts akin to those found in the McGurk and MacDonald studies only under the auditory-haptic condition. Their percepts in the auditory-orthographic condition were determined directly by the auditory stimuli and were completely unaffected by the simultaneous orthographic presentations. Thus, the results clearly run counter to the associationist account and support the ecological account that speech perception is based on gestural information.

As discussed earlier, the distal-gestural patterns of utterances are organized at multiple linguistic levels. But the ecological view assumes that these linguistic levels of organization can be detected only by perceivers who have become familiar with ambient speech and have begun to discover its affordances for conveying meaning, such that they can recognize the invariant and contrastive properties of its language-particular gestural constellations (see Dent 1989 for an ecological account of semantic and syntactic development). Accordingly, the emerging influence of the language environment on speech perception involves a shift from the detection of nonlinguistic information about simple gestural properties of speech to the detection of higher-order and functionally linguistic coordinations among articulatory gestures. By hypothesis, this shift begins as the child discovers, during the final quarter of the first year, that contextually defined references to real-world objects/events (meanings) repeatedly co-occur with specific patterns of intergestural constellations in spoken words and phrases. Thus, ecological theory assumes that these emergent properties of speech gestures are themselves the linguistic entities, rather than assuming the linguistic entities to be abstract, static mental representations. Language is composed of dynamic action patterns, whether spoken, manually gestured, or written, whose function is to afford speakers and listeners a means by which to communicate about actual or potential activities in which they may wish to engage, such as to indicate rules about a game to be played or to collaborate on a joint project.

A major appeal of the ecological approach to speech development, therefore, is its assumption that perception and production share a common metric of information: the articulatory gestures of the human vocal tract. This perception-production link is crucial to the language-learning child, who must come not only to recognize the patterns of native words across acoustically diverse productions by widely differing speakers but also to produce reasonable approximations of those patterns. By the ecological account, no translation is needed between perception and production because they are informationally compatible. The psychoacoustic approach assumes informational incompatibility between perception and production.
The auditory percepts are specified in acoustic but not motoric terms, and the production patterns are specified in motor-control but not auditory terms, so translation between acoustic and motor forms is required. Because learning a native language requires that perception of the acoustic signal and self-production of speech be, or become, informationally compatible, the child of the psychoacoustic approach would either have to learn (i.e., construct) algorithms for linking perception and production, or else the translation routines would have to be innate. In either case, cognitive operations would have to transform acoustic and motor-control parameters to some common abstract form of information, presumably linguistic in nature, that is, phonetic categories or features. Thus, behind the superficial acoustic cues or templates or prototypes must lie abstract mental representations of linguistic entities. Most existing versions of the psychoacoustic model do not address this acoustic-motor translation issue. But the premises of the model would seem to mandate that the translation routines be learned associatively (Diehl and Kluender 1989) rather than determined innately; otherwise, the psychoacoustic approach would paradoxically mirror a central tenet of the original motor theory (Liberman et al. 1967).
The implication of associative learning for auditory-motor translation routines brings us back full circle to the problem of informational incompatibility between auditory perception and motor production. It is by no means a trivial matter to accomplish the necessary bootstrapping from one form to the other, especially given the problem that auditory feedback from self-produced speech could not be expected to provide guidance to motor control for the very reason that it is informationally incompatible at the outset (see Fowler and Turvey 1978). If the auditory and motor information are incompatible, it is not clear how the child would decipher which properties of the auditory signal are, or ought to be, associated with which aspects of motor production, or how she or he would be able to evaluate whether the correct associationist inference had been formed. These basic logical problems with the implications of the psychoacoustic approach are among the primary reasons that I reject that approach and will give no further consideration to it in my subsequent discussion of early developmental changes in cross-language speech perception.

The current motor theory, like the ecological approach, postulates articulatory gestures as the primitives of speech perception (Liberman and Mattingly 1985, 1989; Mattingly and Liberman 1988). The motor theory requires no cognitive translation from acoustic cues to phonetic elements, which are directly and precognitively perceived as the distal gestures of the speaker's vocal tract articulators. However, unlike the ecological approach, the motor theory proposes that a specialized phonetic module is needed, in part to translate the abstract gestural patterns of words, phonological elements, and so forth, into neuromotor commands for speech production. The ecological approach avoids the need for translation between different forms of information either in perception or in the perception-production link and, thus, has the benefit of parsimony. What the infant needs at the start is simply the general property of perceptual systems as described by Gibson (1966, 1979): that they are organized for the detection of information in stimulation about the distal objects and events that shaped the energy patterns reaching the perceiver, particularly with respect to information about the actions that those objects/events may afford the perceiver. By this general definition, human infants do not differ from other species in their general approach to perceiving speech or other events. But, ultimately, we do differ from other species with respect to the specific affordances that speech holds for us. We alone possess the apparatus for producing the gestures of speech (e.g., Lieberman 1975). The few avian species that can approximate some of its acoustic patterns do so by quite different vocal mechanisms. More importantly, their imitations are not used meaningfully, and they are holistic, failing to reorder or recombine phonetic, syllabic, or lexical subunits to create new utterances. In fact, so far as we know, no other species systematically varies the sequence of discrete elements of meaning and/or structure in order to change the intent of their communicative messages, as the phonological and syntactic organization of human languages allows us to do.
Even the apes who have been taught to use languagelike systems of manual or visual signs to communicate with humans (Gardner and Gardner 1973; Premack 1971) have not mastered the grammatical functions of word order or the use of the closed class, which are both crucial characteristics of syntactic systems and hence of true language (Aitchison 1983; Terrace et al. 1979). Even at the level of phonetic perception, there is recent evidence that monkeys do not organize speech categories around prototypic exemplars as human adults and infants do (Kuhl 1991). Thus, human infants move beyond other species as they discover, within the context of human social and communicative interaction, the multiple, interlocking levels of linguistic organization in speech that are carried by language-particular constellations of articulatory gestures.

Having established the theoretical background, we will turn next to a brief review of empirical findings regarding the influence of the language environment
phonological system of the native language that guides adults' perceptions of nonnative sounds. But, first, we must summarize earlier findings on the general characteristics of infant speech perception, and on possible developmental influences of the language environment. Language-particular Developments in Infant Speech Perception Numerous studies in the past two decades have examined young infants' abilities to discriminate a wide variety of phonetic contrasts during the first half year of life, although almost none of these have examined infants under 2 months of age (see for reviews Jusczyk 1985; Kuhl 1987). The results have indicated that, in general, 2- to 6-month-old infants can discriminate between-category differences for most synthetic or naturally produced phonetic contrasts on which they have been tested. The between-category pairings that these young infants have discriminated, as well as those with which they have had difficulty, include both native and nonnative contrasts (e.g., Trehub 1973, 1976). Although it would be an overstatement to claim that young infants possess an innate ability to discriminate all phonetic contrasts from all languages, nonetheless, their speech perception abilities are broad, are relevant to many phonetic category distinctions, and are apparently general across languages (i.e., show cross-language similarities) rather than being biased by their specific language environment as those of adults are. The few segmental contrasts that have been reported as difficult for young infants to discriminate are consistent with the notion that general, rather than language-particular, abilities underlie early speech perception because they have involved both native and nonnative contrasts. In several studies, English-learning infants under 6 months have failed to discriminate certain English fricative contrasts: (Eilers, Wilson, and Moore 1977) and /s-z/ (Eilers 1977; Eilers and Minifie 1975). However, they have discriminated other fricative contrasts, such as Kuhl 1980). While some published reports have found discrimination by infants 6 months and younger of such difficult fricative contrasts as (Kuhl 1980; Levitt et al. 1988) and /s-z/ (Eilers, Wilson, and Moore 1977; Eilers, page_183 Page 184 Gavin, and Oller 1982), even the latter findings have suggested that those particular contrasts may be more difficult for young infants to discriminate than are other contrasts such as the stop-place distinction /ba/-/da/. As for nonnative contrasts that may be difficult for infants under 6 months to discriminate, English-learning infants have failed to discriminate some acoustic voice-onset-time (VOT) distinctions in the range of the nonnative Spanish and Thai prevoiced-voiced stop contrasts (Eimas et al. 1971; Eimas 1974). This failure was of particular interest in light of two reports that such contrasts are discriminated by young infants from language environments that employ prevoiced-voiced stop distinctions: Guatemalan Spanish 4- to 6-month-olds (Lasky, Syrdal-Lasky, and Klein 1975) and Kikuyu 2-montholds (Streeter 1976b). Unfortunately, ambiguities in these studies preclude a straightforward conclusion as to whether speech perception abilities in this early developmental period show general versus language-particular constraints (see also MacKain 1982). 
Two important general limitations of all these studies were their failure to assess directly for age-related changes in the perception of native versus nonnative contrasts and their failure to compare infants from different language environments with identical stimuli and tasks. In order to conclude that a language-particular developmental change in perception has taken place, infants from a single language environment would have to show similar responses to native and nonnative contrasts at a young age and then show a language-relevant difference in response to the two contrasts at some later age. Alternatively, support for a language-particular influence would be obtained if infants from each of two language environments showed equivalent levels of discrimination at some younger age on a contrast that is present in one but not the other language and then yielded language-relevant differences at some later age. In either case, a lack of language differences across ages would suggest that language-particular learning had not yet clearly affected perception.

There are also more specific problems in interpreting the findings. In the reports on the Spanish- and Kikuyu-learning infants, the subjects failed to discriminate the acoustic VOT contrast actually present in productions by adults of their language community. Instead, they discriminated some other nonnative prevoiced VOT contrast. Nor has the issue been clarified by another report that young English-learning infants may be able to discriminate some acoustic VOT distinctions in the prevoicing range (Eimas 1975). The latter study employed VOT differences that were much larger than the intervals used to test for category boundaries in the Spanish-
and Kikuyu-learning infants or in other regions of the acoustic VOT continuum. In addition, the use of computer-synthesized acoustic VOT continua in all these studies may pose a problem with respect to the voicing categories of the adult languages. Articulatory VOT distinctions among stop categories actually result in multidimensional acoustic differences between categories, but the synthetic continua manipulated only the timing of the acoustic onset of periodicity following stop release. This obviously would be problematic if some property other than acoustic VOT per se were the actual or primary source of perceptual information for native listeners. Indeed, even native adults' perceptual boundaries with synthetic VOT stimuli fail to correspond to the voicing categories found in Spanish (Lisker and Abramson 1970) or Kikuyu productions (Streeter 1976a), suggesting that acoustic VOT is not the primary perceptual information that distinguishes the prevoiced-voiced categories for them. And the possibility that even native voiced-voiceless stop distinctions are discriminated by infants on the basis of some acoustic property other than timing differences in acoustic VOT is suggested by the failure of English-learning infants to discriminate synthetic /du/-/tu/ stimuli. These stimuli differed only in acoustic VOT and lacked the F1 transition cutback cue that had been confounded with VOT in other synthetic-speech studies (Eilers et al. 1981).

Thus, there is only sparse, equivocal evidence of any language-specific influences on the perception of consonant contrasts in infants under 6 months. The weight of empirical findings favors the view that, during their first half-year, infants possess only general abilities to discriminate many, though not all, consonant contrasts from both native and nonnative languages. This characterization does allow that some phonetic contrasts may be easier than others for young infants to discriminate. It assumes only that such variations are not yet constrained by the infant's specific language environment. However, the possibility remains open that improvement may occur even during these early months in the perception of other properties available in the native language environment, that is, for the beginnings of some language-particular learning, such as global prosodic properties of the native language (Mehler et al. 1988).
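The methodological concerns about synthetic VOT continua raised above can be illustrated schematically (all numbers here are hypothetical and chosen purely for illustration; this is an idealized categorical-perception sketch, not a model drawn from any of the studies cited). Under such an idealization, a pair of stimuli is discriminated only when its two VOT values straddle the listener's category boundary, so tests using wide VOT steps can show that a boundary lies somewhere within the step without localizing it:

    # Idealized categorical perception along a synthetic VOT continuum.
    # BOUNDARY_MS is a hypothetical boundary value, not an empirical one.
    BOUNDARY_MS = 25.0

    def category(vot_ms):
        # Assign a voicing category from acoustic VOT alone.
        return "voiceless" if vot_ms > BOUNDARY_MS else "voiced"

    def discriminated(vot_a, vot_b):
        # Under strict categorical perception, a pair is distinguished
        # only if its members fall on opposite sides of the boundary.
        return category(vot_a) != category(vot_b)

    print(discriminated(10, 30))   # True: a 20-msec step straddling the boundary
    print(discriminated(30, 50))   # False: an equal step within one category
    print(discriminated(-40, 30))  # True, but this 70-msec step locates the
                                   # boundary far less precisely than 20- to
                                   # 30-msec steps would

The sketch also makes plain why a boundary estimated from acoustic VOT alone need not correspond to the categories of natural productions, which differ along multiple acoustic dimensions at once.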
Other evidence indicates that the native language does begin to influence perception of phonetic contrasts during the second half-year of life. The emergence of language-particular perceptual effects during this developmental period would be consistent with the general observations that infants start to produce their first words by the end of their first year and that they begin to understand words even earlier, between 8 and 10 months. Both of these observations imply the development of sensitivity to the sound patterns of native words. The first such studies with older infants suggested that 6- to 8-month-olds from different language environments may differ in their discrimination of native and nonnative phonetic distinctions. Those studies examined Spanish- and English-learning infants' discrimination of synthetic versions of a prevoiced-voiced stop distinction found only in Spanish and a voiced-voiceless stop distinction found only in English (Eilers, Wilson, and Moore 1979; Eilers, Gavin, and Wilson 1979). These studies also tested the discrimination of naturally produced distinctions between the tapped versus trilled /r/ found only in Spanish, the fricative-voicing contrast /s/-/z/ found only in English, and the Czech fricative-place contrast found in neither Spanish nor English (Eilers, Gavin, and Oller 1982). The results suggested that Spanish-learning infants discriminated the Spanish voicing and /r/ contrasts, while the English-learning infants showed marginal or no discrimination of these contrasts. However, both groups discriminated the English and Czech contrasts, with the Spanish-learning infants performing no worse than the English-learning infants on the former and actually performing significantly better on the latter. The authors concluded that the language environments of the two groups of infants differentially affected their discrimination of the cross-language contrasts.

These findings suggest a possible language-particular influence on phonetic perception at 6-8 months. Although some concerns about methodological and interpretive difficulties were raised by Aslin and Pisoni (1980b), Jusczyk, Shea, and Aslin (1984), and MacKain (1982), the authors have rebutted many of the criticisms (Eilers, Gavin, and Wilson 1980; Eilers et al. 1984). However, some ambiguities remain. As with the studies of younger infants, age change was not assessed. One report by Aslin et al. (1981) on discrimination of nonnative synthetic prevoiced-voiced stop contrasts by English-learning infants between 6 and 12 months does little to help resolve the issue. Subjects were not assessed for age changes in perception,
and only a very small number of the subjects who began the study completed the prevoiced-voiced tests; their results showed wide variations in boundary positions, which were tested across rather wide VOT intervals (e.g., VOT differences of up to 70 msec, as compared with the 20- to 30-msec VOT intervals used in other studies with Spanish-learning infants).

An alternative explanation of language-group differences at a single age in the Eilers studies might be that the Spanish-learning infants are simply better discriminators overall than the English-learning infants, for some reason other than language experience itself. Indeed, the Spanish infants discriminated the English voicing contrast as well as the English-learning infants did, and they discriminated the nonnative Czech contrast significantly better than did the English-learning infants. The authors suggested several additional factors, both linguistic and nonlinguistic, that might account for the Spanish infants' high performance on non-Spanish contrasts: Spanish may provide more or better phonological analogies of those nonnative contrasts than English does, the bilingual Spanish-English environment to which the Spanish infants were exposed may have aided them in discriminating the English contrast, and/or the English voicing contrast may be acoustically salient even to infants who have not been exposed to it.

More recent findings from Janet Werker's lab and from my own lab are consistent with the idea that general rather than language-particular abilities underlie discrimination of many segmental contrasts at 6-8 months. Of greater interest, however, is the related developmental finding that unequivocal language-specific changes in perception of nonnative contrasts certainly have begun to appear around 8-10 months of age and are strong by 10-12 months. Using a version of the conditioned head-turn technique, Werker and colleagues (1981, 1984, 1988) presented English-learning infants at 6-8, 8-10, and 10-12 months with an English stop-place contrast (/b/-/d/) and with the following nonnative contrasts: Hindi dental-retroflex stops and breathy voiced-voiceless dental stops /dh/-/th/, and Thompson Salish (Native American) ejective velar-uvular stops /k'/-/q'/. In their several studies, the authors have used both natural CV syllables and synthetic continua, as well as both cross-sectional and longitudinal developmental designs. Yet, regardless of the variation in stimuli and experimental design, the results have been remarkably consistent. At 6-8 months, the infants discriminated not only the native contrast but also all nonnative contrasts, while at 10-12 months they showed significant discrimination only for the native contrast. Results for the 8-10-month-olds showed intermediate levels of discrimination for the nonnative contrasts. For comparison, several Hindi and Salish infants tested at 10-12 months showed good discrimination of their native contrasts, the same contrasts on which the oldest English-learning infants had failed. On the basis of these findings, Werker hypothesized that the reorganization in infants' perception of nonnative contrasts by 10-12 months of age reflects the emergence of the native phonological system.

Werker's findings are exciting because they suggest that language-particular perceptual reorganization corresponds to the period during which infants are beginning to comprehend words and to establish a receptive vocabulary.
It also corresponds to the period during which many infants move from producing only reduplicated babbles, in which a single syllable is repeated several times, to incorporating variations in consonant- and vowel-like elements within their multisyllabic babbles (Oller 1980; Stark 1980). Moreover, at the same time, infants are making the transition from Piaget's third sensorimotor substage of secondary circular reactions to the fourth substage of means-ends differentiation. This cognitive shift suggests the possibility that while younger infants at the secondary circular-reaction stage may attend to and discriminate among speech sounds because of their interesting sound properties, after shifting to the means-ends stage infants may become more interested in speech sounds as functional means that can be directed toward communicative ends. Thus, concomitant with the cognitive transition, there may be a shift toward perceiving speech sounds as members of functional linguistic categories in the infant's own language community. This could be expected to have adverse effects on perception of nonnative speech sounds. Recent findings from Werker's group indicate that this cognitive transition to means-ends differentiation is indeed strongly associated with the developmental decline in perception of nonnative contrasts (Lalonde and Werker 1990).
The timing of the cognitive shift for individual infants neatly predicted their loss of discrimination for the nonnative contrasts.

The Werker hypothesis about phonological influences at 10-12 months of age is intriguing, but it suggests a different developmental pattern in perception than is provided by recent accounts of phonological development based on the productions of older, language-learning children. Specifically, it implies that the infant begins constructing his or her perceptual map of the native phonological system with phonemic segments as the basic building blocks. Presumably, the rapid expansion of the child's receptive vocabulary during the second year would then consist of words built up from phonemic segments. Yet researchers in child phonology have instead argued that the phonological system and phonemic contrasts are differentiated out of larger linguistic units, emerging only after the child has acquired a sizable vocabulary, rather than preexisting as the building blocks for the larger units. Recent findings from young children's speech have indicated that the earliest linguistic units are morpheme-, word-, or even phrase-sized (Ferguson 1986; Macken 1979; Macken and Ferguson 1983; McCune and Vihman 1987; Menn 1971, 1978, 1986; Waterson 1971). From these global units, first syllables and subsequently phonemes and phonemic contrasts are only later differentiated (Lindblom, MacNeilage, and Studdert-Kennedy 1983), both in production (Goodell and Studdert-Kennedy 1990; Nittrouer, Studdert-Kennedy, and McGowan 1989) and in perception (Best 1984; Studdert-Kennedy 1981, 1986b, in press), most likely in response to the pressure that vocabulary growth exerts on the organization of the mental lexicon (Studdert-Kennedy 1987, 1991).

If we extend this reasoning to the emergence of language-specific influences in infant speech perception, then we would expect language-particular reorganization in infants' perception of speech to be initiated not by the recognition of phonemic contrasts but rather by the discovery of global patterns of gestural organization in native utterances, from which phonemes may later be differentiated. In either case, the cross-language findings with infants raise important questions. Would the developmental pattern hold for all nonnative contrasts? In particular, might there be some types of nonnative contrasts that would remain discriminable because of their specific similarities to, or differences from, native contrasts? These questions can actually be traced to several underlying theoretical questions. By what means does the mature listener's phonological system affect the perception of nonnative sounds? And what can the answer to this question suggest to us about the development of the phonological system, that is, the way in which the child moves from perceiving general information to discovering language-specific organization in the speech signal? Is the difference between the young infant and the 10- to 12-month-old best characterized as a transition from prelinguistic to phonological perception, or by some other sort of perceptual reorganization? These are the questions I have addressed in my recent research on infants' and adults' perception of various nonnative contrasts, which were chosen to differ in their phonetic-articulatory relationship to categories in the phonological system of the listeners' native language.
This work was sparked by a consideration of how the constraints of the native phonological system might be expected to influence the perception of nonnative phonetic contrasts.4

A Perceptual Assimilation Model for Nonnative Speech Contrasts

When presented with a speech contrast that is not employed by the native language, the mature listener is confronted with discrepancies between the properties of the nonnative sounds and those of native phonemes. How do listeners respond to these discrepancies? We can generally dismiss the possibility that adults are unable to perceive any discrepancies. For example, listeners can detect discrepancies between familiar native-accented speech and speech spoken with another regional accent or even with a foreign accent (Flege 1984). These observations indicate that mature listeners hear discrepancies between nonnative and native phones even though they often recognize sufficient similarities to familiar native phonemes to comprehend native-language utterances.
What is it about the discrepancies that the listener is picking up, and how do the nonnative sounds relate perceptually to native phoneme properties? The nature of the discrepancies and similarities can be viewed according to the three theoretical approaches to speech perception summarized earlier. The discrepancies and similarities may be perceived in terms of either articulatory properties or acoustic properties. For reasons already discussed, this chapter takes the ecological perspective that it is primarily the evidence about articulatory gestures in the speech signal that informs the perceiver. Thus, my premise is that phonologically mature listeners perceive in nonnative phones information about their gestural similarities to native phonemes. A listener will fail to detect discrepancies between native and nonnative phonemes if she or he perceives the phones to be very similar in their articulatory-gestural properties to a native phoneme category. In this case, the nonnative phones will be assimilated to the native phoneme category that the listener perceives to be most similar. Conversely, a listener will perceive discrepancies between native and nonnative phones if she or he cannot detect a correspondence between the articulatory-gestural properties of the native and nonnative phones that is even moderately acceptable. In this case, no assimilation would take place. However, assimilation is not expected to be all or none. Contrary to some early claims for absoluteness in categorical speech perception (Liberman et al. 1967), but consistent with subsequent evidence of above-chance within-category discrimination (Carney, Widin, and Viemeister 1977; Pisoni and Lazarus 1974), listeners should retain some degree of sensitivity to gestural variations even within native categories (Best, Morrongiello, and Robson 1981; Grieser and Kuhl 1989; Werker and Logan 1985). Therefore, even as a nonnative phone is assimilated to the native category perceived to be most similar, the listener often recognizes discrepancies between them (i.e., recognizes that the unfamiliar phone is less than nativelike).

According to the reasoning just outlined, it follows that not all nonnative contrasts should be treated alike by phonologically sophisticated listeners. Only some nonnative contrasts should prove difficult for mature listeners to discriminate, while others should be easy to discriminate even without prior exposure or training. The perceptual variations should be predictable from differences in the patterns of gestural similarities and discrepancies between various nonnative contrasts and the properties of native phoneme distinctions. Specifically, Best, McRoberts, and Sithole (1988) have listed four patterns by which the two members of a given nonnative contrast could be perceptually assimilated to native phonemes:

1. The members of a nonnative contrast may be gesturally similar to two different native phonemes, thereby becoming assimilated to Two Categories (TC type). For example, the Hindi retroflex stop is likely to assimilate to English [d], while the Hindi breathy-voiced dental stop may assimilate to a different English phoneme category, the voiced dental fricative [ð].

2. The nonnative phones may both be assimilated equally well, or poorly, to a single native category, in which case they may be equally similar/discrepant to native exemplars of that Single Category (SC type).
For example, both the Thompson Salish ejective velar /k'/ and uvular /q'/ are likely to assimilate to English [kh], although both will be heard as strange or discrepant from the English standard.

3. Alternatively, the nonnative pair may both be assimilated to a single native category, yet one may be more similar than the other to the native phoneme; that is, the nonnative phones may show differences in Category Goodness (CG type). For example, both the Zulu voiceless aspirated velar /kh/ and ejective velar /k'/ are likely to assimilate to English [kh], but the former should be perceived as essentially identical with the English standard, while the latter should be heard as quite discrepant from it.

4. Finally, the nonnative sounds may be too discrepant from the gestural properties of any native categories to be assimilated into any categories of the native phonology and should, therefore, be perceived as nonspeech sounds; that is, they are Nonassimilable (NA type). For example, the suction-produced click consonants of southern Bantu languages are unlikely to assimilate well to any English phoneme categories.

Predictions of the Perceptual Assimilation Model

The perceptual assimilation model predicts that phonologically sophisticated listeners will show near-ceiling discrimination of TC contrasts, given that the phones involved should assimilate to two different and easily discriminable native phoneme categories. These listeners should also show moderate to good discrimination of CG contrasts, which
assimilate to a single native category but differ in their discrepancy from the ideal native exemplar, because listeners can differentiate good from less-good exemplars within the native category. However, discrimination of the CG-type contrasts is not expected to reach the high levels of discrimination found for TC contrasts because, even in the native language, between-category distinctions are better differentiated perceptually than are within-category variants. Mature listeners are also expected to have moderate to good discrimination of NA contrasts, but for a different reason. In this case, discrimination performance will depend on how similar the two sounds are perceived to be as nonspeech sounds. For example, the Zulu clicks cited above may be easily discriminable if they sound like a cork popping versus fingers snapping, or else they may be only moderately discriminable if they sound like two different finger snaps. Different CG and NA contrasts may vary in discriminability due to variations in the degree of similarity, respectively, in their phonetic-articulatory properties or in their auditory properties. Finally, mature listeners are expected to show poor discrimination of SC contrasts, because the two phones assimilate to a single native-phoneme category but are equally similar to or discrepant from the standard exemplar of that category. Thus, the discrimination performance pattern for adults should be, from highest performance to lowest, TC > (NA <=> CG) > SC, where NA and CG may fall in either order relative to each other. This prediction assumes strong phonological influence from the native language and is precisely the pattern of performance we have obtained with adult listeners across several experiments with nonnative speech contrasts.

It should be noted that CG and SC contrasts fall at different ends of a single dimension, in that both involve assimilation of a nonnative phone pair to a single native category. In the case of CG contrasts, discrimination is aided by the listener's recognition that one nonnative phone is more discrepant from the standard native exemplar than is the other. A corollary of this principle is that various CG contrasts may differ in the degree of discrepancy between the two nonnative phones and, hence, may vary in degree of discriminability. If the discrepancy from the native prototype is large for one nonnative item but very small for the other (a strong CG difference), discrimination will be better than if there is only a small difference in discrepancy between the two nonnative items (a weak CG difference). At the extreme of this dimension of assimilability, both members of the nonnative contrast are equally discrepant from native-category exemplars, in which case we have an SC contrast with poor discriminability. Also note that most of the earlier-studied nonnative contrasts that have proven difficult for adults and older infants to discriminate fit the definition of SC contrasts or of weak CG contrasts, which could account for the listeners' difficulties. In a few previous reports, adults have fared better with some nonnative contrasts even with little or no training, as in English-speaking listeners' discrimination of the Hindi voiceless aspirated /th/ versus breathy voiced /dh/ stops (Werker and Tees 1984b) and Kikuyu-speaking listeners' discrimination of the English voiced-voiceless stop distinction (Streeter 1976a). The latter cases fit the definition of a TC contrast and a strong CG contrast, respectively.
Finally, it should be noted that NA contrasts, like CG contrasts, theoretically may vary in degree of discriminability, which will in these cases be determined by variations in salience of the auditory differences between pair members (Burnham 1986). Auditory rather than phonetic-articulatory differences should determine discrimination of NA contrasts because phonologically sophisticated listeners are expected to perceive them as nonspeech sounds, that is, as nonlinguistic mouth sounds or perhaps as sounds produced by similar nonvocal events.

The predictions for phonologically sophisticated adult listeners are clear. The expectations for young infants under 6-8 months are likewise clear, although different from those for adults. Specifically, young infants' discrimination performance should not differ in a phonologically consistent way according to the four assimilation types but rather should be good for most native and nonnative contrasts. To the extent that these young infants may show different discrimination levels for various contrasts, these variations should not follow the pattern described for adults but should instead be related to nonlinguistic differences in the complexity or salience to the young infant of the phonetic-articulatory distinctions involved.
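To make these predicted orderings concrete, here is a minimal sketch (in Python; the function name and the numeric levels are my own illustrative inventions, not part of the model's formal statement) of how the four assimilation types map onto the discrimination levels predicted for phonologically sophisticated adults:

```python
# Illustrative encoding of the perceptual assimilation model's adult
# predictions. The numeric levels are ordinal placeholders only: the
# model predicts an ordering, TC > (NA ~ CG) > SC, not exact scores,
# and CG and NA contrasts each vary internally (strong vs. weak CG;
# more vs. less salient auditory differences for NA).

PREDICTED_ADULT_LEVEL = {
    "TC": 3,  # Two Category: near-ceiling discrimination
    "CG": 2,  # Category Goodness: moderate to good
    "NA": 2,  # Nonassimilable: moderate to good, via nonspeech auditory cues
    "SC": 1,  # Single Category: poor discrimination
}

def predict_adult_discrimination(pair_type: str) -> int:
    """Return the ordinal discrimination level predicted for adults."""
    return PREDICTED_ADULT_LEVEL[pair_type]

# Example: an NA contrast (e.g., Zulu clicks) should be discriminated
# better than an SC contrast (e.g., the Salish ejectives), and a TC
# contrast better than either.
assert predict_adult_discrimination("NA") > predict_adult_discrimination("SC")
assert predict_adult_discrimination("TC") > predict_adult_discrimination("CG")
```

Nothing in this sketch adds to the model; it simply restates the predicted ordering in executable form.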
On logical and/or theoretical grounds, however, there are several possible outcomes for the older 10- to 12-month-old infants, who have shown clear evidence in previous studies of a dramatic change in perception of nonnative contrasts. Their pattern of performance on the four assimilation types should provide insight into the nature of that perceptual change. One logical possibility is that advances in the infant's general cognitive/memory abilities may affect their responses to stimulus familiarity/novelty, perhaps leading to a simple heuristic in which sounds, including speech sounds, that occur in the ambient environment are recognized as familiar, while those that do not occur are unfamiliar and therefore pose perceptual difficulties (but see earlier discussion and also MacKain 1982 for problems with the underlying assumptions of this reasoning with respect to the language environment). On this account, at least as regards speech perception, older infants on the verge of language acquisition should become less sensitive to and/or interested in contrasts between phones that are absent from their environment, that is, contrasts in which both members are unfamiliar. By definition, this description would fit TC, NA, and SC contrasts, as well as, at the least, weak CG contrasts. Older infants should therefore show poor discrimination of all nonnative contrasts except perhaps the strong CG type, which contrasts a familiar nativelike phone against an unfamiliar one. This set of predictions I will call the general familiarity hypothesis.

Alternatively, the perceptual shift may be specifically linguistic in nature rather than simply being an instance of a general language-independent change in response to unfamiliar stimuli. There are several potential patterns by which a linguistically based reorganization might occur. If the perceptual shift by 10-12 months reflects a stagelike change to perception that is rule governed by the phonological system of the native language, then these older infants should show the same pattern of phonologically based discrimination performance as the adults of their language community. They should show excellent discrimination of TC contrasts, good-to-moderate discrimination of CG and NA contrasts, and only poor discrimination of SC contrasts. I will refer to this view as the strong phonological hypothesis to reflect its prediction of the infant's stagelike emergence into a higher linguistic level of perceptual organization that is governed by the native phonological system. Note that this approach entails the infant's recognition of the linguistic function of phonemic contrasts and other phonological rules, such as allophonic distributional constraints, which are problematic assumptions (see MacKain 1982 and discussion in the section Language-particular Developments in Infant Speech Perception, this chapter).

Two other possible reorganization patterns seem more likely, each of which would indicate a different path for the infant's developing recognition of native phones and phonemic contrasts. One of these possibilities is that older infants' perception is organized according to phonemic contrasts but that their recognition of the patterns of coordination among phonetic details for individual native-phone categories is still under-differentiated.
Thus, although they would be expected to discriminate clear between-category contrasts, they may show greater acceptance of deviant tokens within a given phone category than adults do; that is, they may show lower within-category discrimination of the differences between good and poor exemplars, suggesting less refined mapping of the prototype space within the category (but see Grieser and Kuhl 1989). This view could be referred to as the phonemic contrast hypothesis. Once again, the hypothesis entails the problematic assumption that infants recognize the linguistic function of phonemic contrasts. In this scenario, the older infants would discriminate TC contrasts, but because of their under-differentiated recognition of the coordinated phonetic details within individual native-phone categories, they would have difficulty discriminating CG contrasts, which entail within-category distinctions between good and less-good exemplars of a single phoneme. SC contrasts would likewise become difficult to discriminate as variants of a single native phone, although NA contrasts would remain discriminable as nonphones (i.e., as nonspeech). Thus, this view differs from the familiarity hypothesis by predicting good discrimination of TC contrasts, and it differs from both that hypothesis and the strong phonological hypothesis by predicting poor discrimination of CG contrasts. According to the final hypothesis, 10- to 12-month-olds' perception may not be organized around pairwise phonemic
contrasts as functional linguistic oppositions but rather may focus on the recognition of the patterns of gestural coordination that identify members within a given native category (a category recognition hypothesis). Note that the categories the infant comes to recognize need not, in fact, be confined to phonetic segments but may also include larger gestural units, such as syllables and words. This last hypothesis, then, may be most compatible not only with the ecological view espoused in this chapter but also with the previously mentioned arguments that the child's earliest linguistic units are words, morphemes, and sometimes phrases (Ferguson 1986; Macken 1979; Macken and Ferguson 1983; Menn 1971, 1978, 1986; Waterson 1971), and that segmental phonology (phonemes whose boundaries are defined within a system of phonological contrasts) emerges only later by differentiation from these larger units in response to the pressure that vocabulary growth exerts on the organization of the child's lexicon (Lindblom, MacNeilage, and Studdert-Kennedy 1983; Studdert-Kennedy 1987, 1990). In other words, functional phonemic contrasts between phone categories are hierarchically more complex and abstract than are the coordinated gestural patterns that define category membership of a given utterance or phone. Nonetheless, the infant's recognition that two gestural coordination patterns differ from one another would be expected to lead to good discrimination of the two phones, because of a recognition either that each phone is a clear member of a different phone category (i.e., the case of native phonemic contrasts) or that one phone is a clear member of a given category while the other is not a good member of that category (i.e., the case of a nonnative strong CG contrast).

This final hypothesis predicts that the older infants should have difficulty discriminating SC contrasts, as well as those TC contrasts for which the nonnative phones are both unrecognizable to the infant as any native gestural-coordination patterns. Thus, while older infants could be expected to recognize many nonnative sounds as speech sounds because they can detect in them some of the general articulatory properties found in native speech, they should find it difficult to detect sufficient similarity between some nonnative gestural-coordination patterns and the patterns they have discovered in specific native phones. This should make it difficult for older infants to discriminate not only the unfamiliar gestural coordinations seen in SC contrasts but also those TC phones whose gestural patterns deviate in many ways from even the most similar native phones. On the other hand, they should discriminate at least some CG contrasts as involving a nativelike gestural constellation as opposed to an unfamiliar gestural constellation, although they may not discriminate even those contrasts as well as adults do. However, they should perceive NA phones as nonspeech sounds (nonphones), because in them they would fail to detect even any global gestural similarities to native phones. Hence, they should discriminate NA contrasts on the basis of their nonspeech properties. Therefore, the category recognition hypothesis differs from both the strong phonological hypothesis and the phonemic contrast hypothesis by predicting poor discrimination of some TC contrasts.
It further differs from the phonemic contrast hypothesis by predicting good discrimination of some CG contrasts, and from the general familiarity hypothesis by predicting good discrimination of NA contrasts and of some TC contrasts.

Empirical Investigations of Perceptual Assimilation

In the first test of the model's predictions, we assessed English-speaking adults' and infants' discrimination of an NA nonnative contrast, the Zulu apical versus lateral click contrast (Best, McRoberts, and Sithole 1988). This nonnative pair was expected to be nonassimilable to any English phonemes because the suction-release gesture used in them is not employed in any English phones. Nor, except for variation in laryngeal maneuvers, is it reasonably similar to any English gestural maneuvers in the way that implosive stops are gesturally similar to plosive voiced stops or that ejective stops are similar to voiceless stops. Moreover, the asymmetrical lingual release of the lateral click is not found in any English phonemes. The apical and lateral clicks sometimes do appear in isolation (i.e., without vowels) as nonspeech "mouth sounds" in our culture, the former appearing as "tsk-tsk" sounds that indicate frustration or disapproval, the latter as a "chucking" sound used to indicate approval or to urge a horse along. These nonspeech occurrences may reinforce the American listener's tendency to perceive the Zulu clicks as nonspeech sounds.

We began by testing American adults' discrimination of the eighteen minimal-pair contrasts among the nine nonnasalized
Zulu clicks (apical, lateral, and palatoalveolar places of articulation crossed with prevoiced, voiced, or voiceless-aspirated manner) in click + /a/ syllables. The AXB-discrimination task employed multiple natural tokens of each category (X was a physically different token from both A and B) and thus depended upon some degree of perceptual constancy for successful completion. There was no training on the click contrasts and just a few practice trials to orient subjects to the task. Even with this minimal exposure to Zulu clicks, the American adults, as predicted, discriminated all contrasts well above chance, showing 85-95% correct discrimination on all pairings except one. Moreover, the subjects' responses on a posttest questionnaire revealed that, as predicted, they had indeed perceived the clicks as nonspeech sounds made by release of tongue suction (e.g., tongue clucking) or as other sounds resulting from abrupt pressure changes (e.g., cork popping) rather than perceiving them as phonemic segments. Their lowest performance was 80% correct discrimination with the apical- versus lateral-voiced (short-lag VOT) pair, so this contrast was chosen for further testing with adults and infants.

To eliminate the most obvious acoustic difference between these two click categories, we equated the amplitudes of the clicks and verified that they were still acceptable category exemplars in standard identification and discrimination tests with six Zulu listeners. A second AXB-discrimination test with a new group of American adults found discrimination to be virtually as good as before the stimulus manipulations, around 78% correct.

We then tested English-learning infants in age ranges of 6-8, 8-10, and 10-12 months (the ages examined by Werker and colleagues [1981, 1984a, 1989]), ten of each age, for discrimination of this click contrast and of a control English contrast (/ba/-/da/), with test order counterbalanced in each age group. For comparison, we tested another group of adults using the infant procedure and extended the test to even older infants at 12-14 months. In each test, the subject was conditioned to fixate on a colorful slide to hear repetitions of the multiple natural tokens of the habituation syllable, which terminated whenever the infant looked away from the slide. Following a significant decline in fixation times below a criterion habituation level for two consecutive trials, the audio presentations were shifted to the test stimulus for that nonnative contrast. To assess for discrimination, mean looking times were computed for the two trials immediately preceding the stimulus shift (habituation level) and for the first two postshift trials (response recovery).

The results of the infant study upheld the prediction that discrimination would remain high across all ages. The younger infants discriminated the category change, and, moreover, neither the older infants nor the adults showed evidence of a decline in discrimination (in fact, adults showed better discrimination than the infants in this habituation task; see also Eilers, Wilson, and Moore 1979). Thus, the findings differ from those reported by Werker et al. (1981, 1984a, 1989) in that, although the click contrast is a nonnative distinction, older infants do not lose sensitivity to it. Our results are compatible with the claim that loss of discrimination for nonnative contrasts is not absolute and across the board but rather is due to differences in perceptual assimilation.
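As a compact restatement of the looking-time logic just described, the following hypothetical sketch (Python; the criterion value, trial counts, and looking times are all invented for illustration, and the actual study relied on its own stimuli, criteria, and statistical tests) computes a habituation level from the two preshift trials and a recovery score from the first two postshift trials:

```python
# Hypothetical sketch of the visual-fixation habituation analysis:
# habituation level = mean looking time on the two trials immediately
# preceding the stimulus shift; recovery = mean of the first two
# postshift trials. Discrimination is inferred from recovery of
# looking time above the habituation level.

def mean(xs):
    return sum(xs) / len(xs)

def has_habituated(looking_times, criterion):
    """True once fixation falls below criterion on two consecutive trials."""
    return any(
        looking_times[i] < criterion and looking_times[i + 1] < criterion
        for i in range(len(looking_times) - 1)
    )

def recovery_score(preshift, postshift):
    """Mean postshift looking time minus the preshift habituation level."""
    habituation_level = mean(preshift[-2:])  # two trials before the shift
    recovery_level = mean(postshift[:2])     # first two postshift trials
    return recovery_level - habituation_level

# Toy example: fixation declines across habituation trials, then
# recovers when the click category changes.
pre = [12.0, 10.5, 6.2, 4.1, 3.8]  # seconds of looking per trial
post = [9.0, 8.4]
print(has_habituated(pre, criterion=5.0))  # True
print(recovery_score(pre, post))           # positive -> discrimination
```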
This click finding is at odds with the general familiarity hypothesis about the nature of the developmental change in perception of nonnative contrasts, but it is still compatible with the predictions of each of the other three hypotheses. That first study had employed a different experimental procedure than Werker had used, however, so the possibility remained that the difference between her results and ours could be traced to methodological factors rather than to assimilation differences between the Zulu clicks and the nonnative-phonetic contrasts she had used.

Therefore, in a second study we replicated both our findings with the Zulu click contrast and Werker's findings with her Salish ejectives, an SC contrast (Best and McRoberts 1989). We used our visual fixation procedure with new groups of 6- to 8- vs. 10- to 12-month-olds, twelve per age. Each infant was tested on both of the nonnative contrasts as well as on the English control contrast. We again found that both age groups discriminated the NA-Zulu contrast and the English control, whereas only the younger infants discriminated the SC-Salish ejectives. Thus, the developmental difference between the two nonnative contrasts could be attributed to differences in perceptual reorganization for those types of contrast and not simply to methodological factors. This finding is again at odds with the general familiarity hypothesis but still fails to differentiate among the three linguistic hypotheses. The next step, then, was to compare discrimination of other nonnative assimilation types. For this purpose, we examined
three additional contrasts from Zulu. The TC contrast was a lateral-fricative voicing distinction, /ɬa/-/ɮa/, produced with the tongue in essentially the position used for English /l/ but with a greater degree of constriction along the sides of the tongue. For adult American listeners, the voiceless lateral fricative would be expected to assimilate to the English voiceless coronal fricatives /s/, /ʃ/, or /θ/, perhaps heard with a (devoiced) subsequent /l/ due to its /l/-like positioning of the tongue tip/blade co-occurring with fricative manner and voicelessness (see Ladefoged 1981).5 The voiced lateral fricative would be expected to assimilate to the English voiced fricatives /z/, /ʒ/, or /ð/, and/or the approximant /l/. The CG contrast was a velar voiceless versus ejective stop distinction, /ka/-/k'a/. The Zulu /k/ is virtually identical to English /k/ (both [kh]). But the ejective /k'/ involves a nonnative glottal gesture (rapid upward movement of the glottis during complete glottal closure) that should lead to its assimilation as a clearly less-than-ideal exemplar of English /k/. Thus, this stimulus pair constituted a strong CG contrast. The remaining contrast was a plosive versus implosive bilabial-stop distinction, /ba/-/ɓa/, originally chosen to represent an SC contrast. However, further consideration of the articulatory properties of these phones suggested that it was actually a weak CG contrast. Zulu /b/ is essentially like English /b/, but the downward movement of adducted and vibrating vocal folds for the implosive /ɓ/ differs primarily in degree from the laryngeal movement involved in English /b/.

According to the strong phonological hypothesis for mature listeners, the English-speaking adults would be expected to discriminate the TC contrast nearly perfectly, the strong CG contrast somewhat less well, and the weak CG contrast most poorly but still above chance. In the first part of the study, twenty-five monolingual English-speaking adults completed separate AXB category-identity discrimination tests on the three contrasts (as in Best, McRoberts, and Sithole 1988), each test again composed from multiple natural tokens. As predicted, adults discriminated the TC contrast with near-ceiling performance levels (~96% correct). They did slightly, but significantly, less well with the strong CG contrast (~88%), and discrimination performance was substantially and significantly lower on the weak CG contrast, although it was nonetheless above chance level (~66%). The subjects' posttest questionnaire descriptions of the Zulu sounds indicated that nearly all subjects assimilated the TC contrast to two different English phonemes or phoneme clusters, although there was variability as to which native phonemes (or clusters) were named (s, sh, or thl versus z, zh, zhl, or l), compatible with the gestural similarities and discrepancies from various English phonemes. Most heard the strong CG contrast as a normal /k/ versus a clearly deviant /k/ (e.g., choked or coughed). And, as expected, the majority heard the weak CG items as exemplars of English /b/. Only some of these subjects could identify a difference, in which case they generally characterized one /b/ as murmured or swallowed. Thus, consistent with the perceptual assimilation model, adults appear to assimilate nonnative contrasts to the closest native phoneme categories, apparently on the basis of articulatory similarities and discrepancies.
Furthermore, the discrimination performance pattern precisely mirrors the predictions of the strong phonological hypothesis, as we expected for phonologically sophisticated listeners. We recently verified near-ceiling performance with another TC contrast, the Ethiopian ejective-stop distinction /p'e/-/t'e/, which this time was assimilated virtually unanimously to English /p/-/t/, as expected from the straightforward gestural correspondence between the nonnative and native sounds (Best 1990).

In part two of the second Zulu study, English-learning 6- to 8- and 10- to 12-month-olds, fourteen per age, completed visual-fixation habituation tests for each of the three contrasts (Best et al. 1990). The predictions from all four developmental reorganization hypotheses were that 6- to 8-month-old infants would discriminate all three contrasts. The three linguistic hypotheses offered different predictions for the 10- to 12-month-olds. Analyses of the data for the 6- to 8-month-olds indicated significant discrimination across all three contrasts, consistent with predictions and with earlier
findings. However, discrimination of the TC contrast by itself was marginal, while discrimination was significant for each of the CG contrasts. However, contrary to both the younger infants and the adults, the 10- to 12-month-olds showed only marginal discrimination (p < .06) across these three Zulu contrasts, although this age group had clearly discriminated the NA-Zulu clicks in our two previous studies. Moreover, the contrast on which they showed the poorest performance (actually a small decline in fixation times at the stimulus shift) was the TC contrast that had proven easiest for the adults to discriminate. This age group showed marginal discrimination of the strong CG contrast and nonsignificant discrimination of the weak CG contrast. Although the largest recovery in mean fixation time for either age was associated with the 10- to 12-month-olds' response to the strong CG contrast, the statistical effect was hampered by high intersubject variability. This pattern of a high mean recovery paired with high intersubject variability suggests that some older infants may have detected the category change while others utterly failed to. Planned comparisons of the 10- to 12-month-olds' discrimination pattern failed to support the strong phonological hypothesis (TC > strong CG > weak CG discrimination) or the phonemic contrast hypothesis (TC discrimination only), but they did offer marginal support (p < .08) for a pattern compatible with either the category recognition hypothesis or the general familiarity hypothesis (strong CG discrimination only). Given that the two previous studies were consistent with the three linguistic hypotheses but not with the familiarity hypothesis, the full set of findings is most compatible with the category recognition account of perceptual reorganization at this age. But the category recognition hypothesis states that at least some TC contrasts should be discriminable to this age group.

Why did this TC contrast pose such difficulty for the older babies? Two observations suggest a possible answer, although further research is needed to confirm it. First, recall that the gestural properties of the TC phones (the voiced and voiceless lateral fricatives) do not provide a close match to any singular English phonemes. These Zulu phones involve a lingual gesture similar, but not identical, to that found in our lateral approximant /l/, yet they also involve lingual constrictions generally similar, but not identical, to a variety of English fricatives (/s/, /ʃ/, and /θ/ and their voiced counterparts). Thus, it would be understandable if the 10- to 12-month-old who is only beginning to recognize the gestural coordination patterns found in English phones has difficulty recognizing any clear similarities between the Zulu phones and particular English categories. Indeed, even the adults were quite variable in the exact patterns by which they assimilated the phones in this TC contrast to specific English phonemes. Second, the older, and even the younger, infants' difficulty with the lateral fricatives does not appear to be a general problem with nonnative TC contrasts. We recently obtained evidence that, like adults, both 6- to 8- and 10- to 12-month-olds can discriminate the Ethiopian ejective TC contrast, which bears a straightforward gestural relation to a single English distinction, /p/-/t/. Moreover, both ages also discriminated the English fricative-voicing distinction /s/-/z/, which is similar to the Zulu TC-fricative distinction that the older infants failed to discriminate (Best 1991b).
It is interesting to note, however, that the older infants showed marginally lower discrimination than the younger infants on the English /s/-/z/ contrast, which involves only a relative shift in voicing onset during an ongoing frication. Thus, it appears likely that the reason the older infants had difficulty with the Zulu fricative-TC contrast, but not with the Ethiopian ejective contrast, was that the gestural properties of the ejectives relate more straightforwardly to English stops than the Zulu fricatives do to any clear English categories. Thus, the pattern of findings across studies for the older infants is at odds not only with the general familiarity hypothesis but also with both the strong phonological hypothesis and the phonemic contrast hypothesis. The performance of the 10- to 12-month-olds on NA, SC, CG, and TC contrasts across studies is most compatible with the category recognition hypothesis. These results carry strong implications about the nature of developmental change in infants' perception of nonnative
speech sounds and, by extension, offer insights about the development of the native phonological system. They suggest that, by at least 10 to 12 months of age, infants have begun to discover the gestural-coordination patterns that identify categories roughly corresponding to phones in their native language. The findings also indicate that at least some of their categories still may not be as well differentiated as those of adults and may not be as strongly organized according to the pairwise linguistic contrasts of the native phonological system. More specifically, adults appear to assimilate nonnative phonemes to categories within the native language's system of phonemic contrasts, yielding near-ceiling discrimination performance on both of the tested nonnative TC assimilation contrasts. The older infants, however, and even the younger infants had difficulty discriminating the TC-fricative contrast, for which adults showed good discrimination but variable patterns of assimilation, while neither of the infant age groups had any difficulty discriminating another TC-ejective stop contrast, for which adults were in perfect agreement about assimilation. Moreover, a study recently completed in my lab confirms that the Zulu TC-fricative voicing contrast continues to pose difficulty even at 4 years of age, by which time the strong CG contrast (Zulu /k/-/k'/) is discriminated consistently (Insabella 1990; Insabella and Best 1990). The pattern of findings with the infants and its extension to 4-year-olds suggests that perception of nonnative speech contrasts in relation to the system of phonemic contrasts in the native language is a relatively late achievement that probably rests on a solid foundation of knowledge about the coordinated phonetic patterns of individual native phones. The 10- to 12-month-old infants' marginal discrimination of the strong CG contrast, compared with the greater difficulty both ages showed on the TC lateral-fricative contrast, runs counter to the predictions of the phonemic contrast hypothesis. The initial stage of language-specific influence on perception of phonetic segments appears to involve the emerging recognition of the coordination of phonetic-gestural details within individual phone categories, rather than recognition of the phonetic distinctions that specify more abstract and linguistically functioning phonemic contrasts between categories.

In keeping with the ecological theoretical perspective outlined earlier in the chapter, I suggest that the basis for infants' recognition of the language-specific properties of native and nonnative phones is the detection of evidence about the constellation of coordinated articulatory gestures that are associated with specific phones in the native language. Drawing from these findings, I suggest the following developmental progression in language-specific perceptual learning about speech:

1. Young infants initially perceive simple nonlinguistic (articulatory and/or acoustic) distinctions in speech-sound contrasts, and this ability is not yet influenced by their language environment.

2. By at least 10-12 months, infants have begun to discover certain gestural coordination patterns of phones used in their native language. But their recognition of these patterns is still broad and underspecified, at least for some phone categories, and does not reflect the linguistic function of phonemic contrasts.
At this point, they appear to detect gestural properties in some nonnative phones that are similar to the coordinated patterns they have begun to detect in native speech, but they are less able than adults to recognize the full pattern of similarities and discrepancies.

3. During the preschool years, the gestural coordination patterns of native phone categories become better differentiated, especially with respect to good versus less-good exemplars, but even by 4 years, perception may not yet be fully organized at the level of phonemic contrast per se.

4. By adulthood, and probably much earlier, perception of speech sounds involves the recognition of linguistic structure at the level of phonemic contrasts, and unfamiliar sounds are assimilated to native-phoneme categories on the basis of their articulatory-gestural similarities and discrepancies.

I have argued here that language-particular perceptual learning about speech involves the discovery of gestural coordination patterns that recur in the ambient language. It is this discovery that forms the basis of phonological development. This focus on the articulatory-gestural properties
perceived in native and nonnative speech sounds should also extrapolate to the development of phonological organization in the child's speech productions; that is, the ecological model of speech development posits that a common articulatory metric links perception and production. It follows that the child's emerging recognition of common gestural patterns in the ambient language should guide the development of language-specific phonological structure in his or her productions. In the final section of the chapter, we will examine this possibility in a case study of apparent phonological organization in a toddler's imitations of a diverse set of surface phonetic forms that realize a single phonemic contrast in American English.

Phonological Behavior in Early Speech Production

The study reported here, based on phonetic and acoustic analyses of a toddler's imitations of a set of phonologically opaque adult targets, found evidence of apparent phonological sophistication in production of intervocalic alveolar stops at 20-22 months of age. The American English adult targets were disyllables containing medial /d/ or /t/, followed by <-er>, <-le>, or <-en>. In these contexts, intervocalic alveolar stops typically appear in normal conversation as the restricted phonetic variants flap [ɾ], nasal release [dn], or glottal stop [ʔ] (e.g., respectively, <wider> or <whiter>; <widen>; <whiten>). Although American English-speaking adults do not normally produce fully released alveolar stops in these phonetic environments, the child consistently substituted full alveolar stops for the phonetic variants actually found in the adult targets. Nonetheless, the phonetic and acoustic characteristics of the child's responses differed among the diverse phonetic target forms and distinguished between underlying /t/ and /d/. The analyses of the target utterances and of the child's imitations suggest that sensitivity to the articulatory properties of the target utterances and/or articulatory constraints on her productions of the target phonemes in different phonetic environments provided the basis for her behavior. In particular, the acoustically diverse allophones present in the adult targets nonetheless all involved alveolar contact coordinated with a release of obstruent constriction at some point in the vocal tract. I argue that the child's failure to imitate the target utterances exactly and the systematicity of her deviations reflect an important aspect of emerging phonological organization in her speech behavior. Specifically, the constraints provided by the articulatory information in the adult targets and those provided by the child's articulatory limitations or preferences determine how a phoneme would be realized in particular phonetic contexts and, consequently, how phonetically disparate forms may become related to a common underlying phonological category. This general line of reasoning is based on the model of articulatory phonology put forth by Browman and Goldstein (1986, 1989, 1992), according to which phonological phenomena such as epenthesis, assimilation, and reduction can be understood simply as lawful consequences of the gestural organization of utterances.

This study was prompted by informal observations of my daughter Aurora at 20 months of age, when I noticed that she typically produced fully released intervocalic alveolar stops while imitating adult words in which medial /t/ was pronounced as a glottal stop [ʔ].
Because of the complex pattern of contextual effects that produce diverse phonetic realizations of phonemic /t/ and /d/ in intervocalic position in American English, I reasoned that Aurora's imitative responses to the differing phonetic forms of medial alveolar stops should reveal the characteristics of incipient phonological organization for those phonemes in her speech productions (these data are a portion of those presented in Best, Goodell, and Wilkenfeld, in preparation). Child phonology studies have typically excluded imitative responses from analysis on the assumption that imitations would closely match the phonetic details of the adult target and thus would not reveal much about the intrinsic organization of the child's phonology (Leonard, Fey, and Newhoff 1981; Leonard et al. 1978). Yet Aurora's imitations were not exact phonetic replicas of the adult utterances. In fact, they differed systematically from the targets. Examining imitative responses offers several advantages over examining only spontaneous utterances: (1) demands on memory and lexical access are minimized, (2) unfamiliar and nonsense words can be presented to control for the influence of phonological idioms in familiar words (Moskowitz 1980), (3) there is no doubt about the child's intended target, and (4) the properties of both the child's production and the immediate adult target can be directly compared. In addition, the
standard approach in child phonology research has included only broad phonetic transcription of the child's utterances and has not involved corollary acoustic analyses of the productions that might provide some converging evidence about their phonetic-articulatory characteristics. For these reasons, I recorded Aurora in three sessions between 20 and 22 months as she imitated familiar, unfamiliar, and nonsense target words containing intervocalic alveolar stops that were realized as phonetically diverse allophones. The targets were stress-initial disyllables containing intervocalic /t/ or /d/ produced as [ɾ], [ʔ], or [dn]. Some words were familiar to the child and produced with familiar (American English, or AE) or unfamiliar pronunciations (Cockney English, or CE); others were unfamiliar words or nonsense words produced with AE or CE pronunciation. The medial stop was realized either as [ɾ] (e.g., the AE pronunciation of <spittle>), as [ʔ] (e.g., the CE pronunciation of <spittle>), or as the nasal release [dn] (e.g., AE <widen>). The responses were elicited in the context of a vocal imitation game at home, during which Aurora was quite willing to provide citation-form repetitions (sometimes multiple tokens) of target words that had been presented in the sentence frame "Can you say _____?"

We analyzed the phonetic and acoustic properties of both my targets and Aurora's responses for direct comparisons. Both speakers' utterances were computer-digitized, and the disyllables of interest were extracted into separate files. After discarding a small number of utterances that were not acoustically analyzable due to background noise (child: n = 10; adult: n = 4), broad phonetic transcriptions were made for the medial consonants in both the adult's (n = 51) and the child's (n = 52) remaining utterances. The transcriptions were conducted blind as to the context of the preceding and following utterances. These transcriptions were independently verified by a second listener. The majority of the utterances yielded identical transcriptions from the two transcribers for both speakers (child = 70%; adult = 100%). The greater difficulty in transcribing Aurora's utterances is typical of the generally decreased reliability for phonetic transcriptions of young children that has been noted in other phonology studies. Discrepancies in the transcriptions for Aurora were resolved either by mutual agreement between the transcribers in a joint listening session (11% of her total utterances) or via tie breaking by a third expert listener (19% of her utterances).

Several acoustic measurements were also taken from the intervocalic portion of each disyllable for both speakers. Prerelease silence (from end of first vowel to release burst) and voice onset time (measured as the time from release-burst onset to beginning of glottal pulsing) were measured from the waveforms for tokens containing a stop-release burst. Also, total duration of intervocalic silence was measured for tokens without a release burst. The latter measure provided a fairly objective index of the duration and timing of glottal devoicing. To index the timing of supralaryngeal and/or glottal closure gestures, we measured the total duration of closure, whether voiced or voiceless, for the intervocalic consonant of all utterances.
Judgments of closure onset were based on a substantial, fairly abrupt decrease in signal amplitude and a qualitative change in voicing at the end of the first vowel (according to changes in the waveform of the pitch pulses and/or to perceptual evidence of voice quality change). Judgments of closure offset were based on the onset of periodic voicing in the second vowel, on a rapid increase in amplitude, and, sometimes, on the presence of a release burst. To assess for any vowel-length differences associated with voicing differences in the medial stop, duration measurements of the first syllable were also taken.

According to phonetic transcriptions, the vast majority of Aurora's responses deviated from the phonetic properties of the adult targets. Her responses were predominantly fully released alveolar stops, corresponding to the phonological categories that underlie the diverse surface phonetic forms of the targets. Among Aurora's medial alveolar stops, the most frequent response to all three targets was /d/, which may reflect the relative ease of voiced alveolar stop production by children of this age in addition to, or instead of, reflecting the phonological status of the adult targets (Kewley-Port and Preston 1974; Locke 1983). Nonetheless, the proportion of /d/ versus /t/ responses and the pattern of less-frequent responses differed according to the phonological and phonetic (articulatory) properties of the targets. The voiced alveolar flap target [ɾ], which is the surface realization of medial /d/ or /t/ followed by syllabic /r/ in adult AE, yielded the largest proportion of /d/ responses (~80%) and no voiceless [t]'s (the exceptions to /d/ were the voiced apical phones [ɾ]
and /ð/). In contrast, the glottal stop target [ʔ], which is the surface realization of /t/ followed by syllabic /n/ in AE or CE and by syllabic /l/ in CE, elicited more /t/ responses (~35%) than any other target, which approached the proportion of /d/ responses for this target (~45%). The nasal (velar) release in target [dn], which is the surface realization of an underlying /d/ before syllabic /n/, elicited an intermediate proportion of /d/'s (~60%). The exceptions in the latter case included /t/, the velar stops /g/ and /k/, and the glottal /h/.

The acoustic measures also indicated that Aurora both deviated from the acoustic properties of the adult target utterances and, at the same time, was systematically responsive to phonetic-articulatory differences among the surface forms of the targets. Note that Aurora's alveolar stop responses almost always included stop-release bursts, in contrast to the absence of bursts in nearly all of the adult targets; only two of the latter contained release bursts. More than half of the adult [ɾ] and [dn] targets contained no intervocalic silence, and in those that did, the silent period was only 20-67 msec in duration. In contrast, the [ʔ] targets had consistent intervocalic silent periods of 50-150 msec. Consistent with the asymmetry in the target pattern, the intervocalic silences in Aurora's /t/ and /d/ responses to [ʔ] targets were systematically longer than those to [ɾ] targets. This pattern is potentially consistent with either a gestural basis or an acoustic basis for Aurora's imitative responses. However, the intervocalic silent periods in her responses to the nasal release [dn] targets were even longer than those to the glottal stops [ʔ], in direct opposition to the bias shown in the adult targets. Thus, the child's responses in these cases cannot be a direct consequence of mimicking the acoustic properties of the targets. Instead, Aurora's responses to the [dn] targets more likely reflect increased difficulty in her ability to execute the heterorganic alveolar and velar gestures of a medial [d] followed by a syllabic /n/.

Separate examination of Aurora's prerelease silent periods versus her postrelease VOT measures offers some insight into the cause of the variations in her responses to the different targets. She produced the longest prerelease silent periods for the nasal targets (~100 msec), followed by glottal stops (~75 msec); her prerelease silences for flaps were substantially shorter (~20 msec). Thus, she appeared to use prerelease gap differences to differentiate the three targets systematically. However, she did not vary her postrelease VOT systematically among the three targets. Her mean VOTs ranged from 40-50 msec for all three targets and did not differ between the responses that were transcribed as /d/ versus those transcribed as /t/ (see also Kewley-Port and Preston 1974). Overall, these results indicate that the phonetic properties of the child's responses were achieved by means of different patterns of glottal timing than were the adult targets. The measurements suggest that she had greater control over the duration of the prerelease silent period than over the (postrelease) VOT to instantiate intervocalic stop-voicing distinctions. We also examined evidence about the timing of closure itself, which is not adequately represented in the measures of silent intervals, especially for voiced stops.
Given that children's utterances are longer than those of adults, the closure data were normalized by dividing the absolute closure duration by the length of the total utterance for both adult and child tokens. In all cases, the child's absolute closure durations were longer than those of the adult. However, the child appeared to have produced systematic differences in ratios of closure to utterance duration for the three target allophonic categories. These values were largest for responses to targets with medial glottal stops (~.23), followed by those for the responses to nasal-released targets (~.18), and smallest for responses to flap targets (~.12). The adult targets varied to a much lesser extent for absolute closure durations and did not vary substantially in ratio of closure to utterance duration. In the adult targets, the ratios for utterances containing nasal releases (~.12) were nearly identical to those containing flaps and glottal stops (~.11).

In American English and other languages, vowel-length differences preceding an intervocalic or final stop often distinguish between voiced and voiceless versions of the stop. Therefore, differences in the length of the vowels preceding [ʔ] and [dn] targets may have provided the child with additional information about their voicing difference. Moreover, these vowel-length differences may also have been reflected in the child's productions. For these reasons, we measured vowel length in the first syllable of the adult and child utterances for all [ʔ]-[dn] minimal pairs such as batten
and badden. The use of minimal pairs avoided confounding intrinsic vowel-length differences with voicing-related vowel lengthening. Again, because children's utterances are longer than those of adults, the data were normalized by dividing the duration of the first vowel by the length of the total utterance. The adult targets showed vowel lengthening overall before /d/, but this pattern held only for the lax vowels /ɪ/, /ʌ/, and /æ/. Instead, the diphthongs /iɪ/ and /aɪ/ actually showed very slight shortening before /d/ relative to /t/. Aurora likewise showed overall vowel lengthening before /d/, but this held only for the diphthongs and not for the lax vowels, exactly the opposite of the pattern found in the adult targets. Thus, again, the child appears to have achieved the voicing contrast in a different manner than was provided by the adult targets, rather than by simply mimicking the acoustic properties of the targets.

Overall, the results suggest some level of phonologically relevant organization in Aurora's behavior, supported by the fact that systematic production patterns appeared across familiar and unfamiliar words and nonwords (some presented with unfamiliar CE pronunciation) that involved variations in phonetic contexts and in surface phonetic realizations of the underlying phonological categories. Because the study compared the phonetic transcriptions of the adult and child utterances with several acoustic measures that were intended to provide evidence about the articulatory gestures involved, the data may offer novel insights into the way in which young children begin to organize the articulatory/phonetic details of their productions in relation to abstract linguistic categories in the native phonological system. Specifically, the findings indicate the following:

1. By the age of 20-22 months, Aurora had developed a systematic pattern of behavior that related the diverse surface phonetic forms of the medial consonants in the target utterances to underlying alveolar stops.

2. Aurora's behavior pattern was more complex than a simple one-to-one mapping from a single allophone to a single phonological category; that is, she did not simply imitate the acoustic properties of the targets. Instead, her behavior showed many-to-one mappings (she associated multiple surface forms to singular underlying phonemes) that nonetheless retained some articulatory/phonetic differentiation among the diverse, context-specific surface realizations of the categories.

3. The relationship between the allophones in the adult targets and Aurora's substitutions may be best understood in terms of articulatory characteristics of the targets and/or articulatory limitations on the child's productions, given the notable discrepancies between the acoustic properties of the targets and those of the child's imitations.

The complexity of these patterns in Aurora's productions is demonstrated not only by her nearly consistent substitution of alveolar stops for glottal stops, nasal releases, and flaps, but also by the differences in intervocalic silence and in closure intervals that she maintained among the stops she substituted for [ʔ], for [ɾ], and for [dn]. The characteristics of the child's substitutions indicate a behavior pattern that could be considered to reflect the beginnings of phonemic organization, but an organization that is still immature and quite different from that seen in adult speech behavior.
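For concreteness, the duration normalizations used in these production analyses can be expressed in a brief hypothetical sketch (Python; all numbers are invented placeholders, and the real measures were taken from the digitized waveforms): closure duration and first-vowel duration are each divided by total utterance duration, so that adult and child tokens remain comparable despite the child's longer utterances.

```python
# Hypothetical sketch of the duration normalization described above.
# Dividing a closure (or first-vowel) duration by the total utterance
# duration yields a ratio that can be compared across speakers whose
# overall speech rates differ. All durations below are invented.

def closure_ratio(closure_ms: float, utterance_ms: float) -> float:
    """Closure duration as a proportion of total utterance duration."""
    return closure_ms / utterance_ms

def vowel_ratio(first_vowel_ms: float, utterance_ms: float) -> float:
    """First-vowel duration as a proportion of total utterance duration."""
    return first_vowel_ms / utterance_ms

# Toy comparison in the spirit of the reported pattern: the child's
# closure ratios differentiate the three targets (glottal stop ~.23 >
# nasal release ~.18 > flap ~.12), whereas the adult ratios do not.
child_tokens = {"glottal": (230, 1000), "nasal": (180, 1000), "flap": (120, 1000)}
for target, (closure_ms, utt_ms) in child_tokens.items():
    print(target, round(closure_ratio(closure_ms, utt_ms), 2))
```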
The results suggest that this behavior pattern is based on articulatory properties of the phones and phonological categories investigated. Consider that all of the intervocalic stop targets involve forward movement of the tongue tip into alveolar contact or approximation, either because of an intrinsic alveolar gesture in the target segment itself and/or because of alveolar gestures in the following syllable. Accordingly, the vast majority even of Aurora's substitutions were consonants involving apical contact (including /n/ in response to [dn] and the linguodental /ð/ in response to [ɾ]). She could have done otherwise: she was producing a broad array of stops, nasals, fricatives, and approximants at various places of articulation at this age, including glottal stops. In addition, other articulatory properties of the targets could be related to gestures other than alveolar stop maneuvers. Two of the targets ([ʔ] and [dn]) incorporate posterior vocal tract gestures (glottal stop and velar constriction) in addition to alveolar articulations, which could appropriately have been substituted by posterior consonants. In fact, of the "other" responses, those to [ʔ] and [dn] were all posterior gestures (/g/, /k/, and /h/), while the only "other" response to [ɾ], which did not involve a posterior gestural component, was the linguodental (/ð/). Nonetheless, it is important to remember that the vast majority of Aurora's responses were indeed fully released alveolar stops.
It thus appears that Aurora was sensitive to the gestural properties of the target words, even if she did not mimic them precisely. Specifically, she appears to have been sensitive to the main place(s) of constriction in the vocal tract and to characteristics of the associated glottal gestures. Furthermore, she was able to incorporate information about those properties into her productions so that she could both relate her productions to linguistic categories in her native language and arrive at a phonetic realization for them within her articulatory limitations. Her primary difficulty was apparently a failure to incorporate into her imitations the precise temporal coordination among the supralaryngeal and the glottal gestures (the temporal phasing among discrete gestures of different articulators) that was provided in the adult targets. These findings are compatible with the ecological approach to speech development discussed earlier in the chapter and with the principles of Browman and Goldstein's model of articulatory phonology (1986, 1989, 1992).

These imitation data carry important implications about the development of phonological organization in children's speech productions, which appear to complement our findings on perceptual assimilation of nonnative phones. The pattern of systematic, context-specific articulatory variations in Aurora's productions, along with their relation to the voicing categories of alveolar stops, suggests that she related diverse allophonic realizations to common phonological categories. Her phonological categories, however, were organized differently than in adult speech production and appear to be underdifferentiated with respect to the variations in gestural coordination that are found among allophones in the native phonology. This pattern would seem most consistent with the category recognition hypothesis discussed earlier regarding developmental reorganization in infants' perception of nonnative contrasts.

Conclusion

This chapter has examined the way in which the infant may come to relate the phonetic details of speech in his or her language environment to the more abstract linguistic categories of the native phonological system. To this end, I have presented a model of how the infant may move from perceiving general information about phonetic contrasts in speech during the first six months of life to discovering language-particular patterns that ultimately correspond to the phonemes and phonemic contrasts that guide speech perception and production in the mature language user. This discovery, in turn, influences the properties perceived in nonnative speech sounds. I have also presented complementary information about the possible relations between the phonetic properties of adult utterances in the native language and the emergence of phonologically relevant organization in a young language-learning child's speech productions.

The line of reasoning developed in this chapter is compatible with the premise that the recognition of phonemes as specifically linguistic elements, which convey meaningful contrasts and are functionally organized within a phonological system, develops only gradually as the child builds a lexicon (see Studdert-Kennedy 1987, 1989; Flege 1990; cf. Jusczyk 1986, this volume).
According to that view, phonemic segments are differentiated from words rather than being preexisting elements that are concatenated into words, and the child's phonological system emerges in accord with the principles of self-organizing systems (Studdert-Kennedy 1987, in press). I have suggested that the means by which this phonemic development takes place in perception and production is through the young speaker-hearer's detection of information in speech about the articulatory events that produced the signal. This ecological view of speech development, which was based upon James Gibson's general ecological theory about perceptual systems, thus posits a common articulatory link between perception and production of speech. That commonality should provide obvious benefits for the young child's acquisition of a native language. The data summarized here support the proposed model of phonological development and are compatible with the ecological perspective described. I would argue that this ecological perspective on speech offers important and unique insights about the relation between perception and production in the development of spoken language. Future research must resolve the full time course of development of mature phonological organization in speech perception and production.

Acknowledgments

The preparation of this chapter, as well as the original research reported here, was supported by NIH grants HD01994 to Haskins Laboratories
and DC00403 to the author. The ideas presented in this chapter have gained much clarity and detail as a result of discussions with numerous colleagues and students. I would like to extend particular thanks to Carol Fowler, Janet Werker, Doug Whalen, Winifred Strange, and Michael Studdert-Kennedy for stimulating discussions and constructive criticisms of the model as I have worked on it over the past few years. I also appreciate comments on earlier versions of this chapter by Alice Faber, Alvin Liberman, Michael Studdert-Kennedy, Jim Flege, Rebecca Eilers, and the book editors. I am especially indebted to my collaborators and research assistants: Gerald McRoberts, Elizabeth Goodell, Nomathemba Sithole, Deborah Wilkenfeld, Jane Womer, Stephen Luke, Rosemarie LaFleur, Shama Chaiken, Laura Klatt, Laura Hampel, Laura Miller, Leslie Turner, Pia Marinangelli, Ritaelena Mangano, Ashley Prince, Alex Feliz, Merri Rosen, Amy Wolf, Pam Speigel, Maria Poveromo, Peter Kim, Glenda Insabella, Cindy Nye, Suzanne Margiano, Diane Schrage, and Jean Silver.

Notes

1. This is not meant to imply that infants initially have universal abilities to perceive all contrasts from all languages, although this has sometimes been claimed or implied. Certain contrasts, both native and nonnative, may be more difficult than others for young infants to discriminate (e.g., Aslin et al. 1981; Eilers and Minifie 1975; Eilers et al. 1981; Eilers, Wilson, and Moore 1979; Kuhl 1980; see further discussion in the section on Language-particular Developments in Infant Speech Perception). The point here is that initially infants' discrimination of segmental contrasts does not yet appear to be constrained by the particular language environment.

2. One alternative is that the auditory system begins and remains physiologically incapable of registering certain acoustic properties of speech unless the listener's environment provides exposure to those properties, which would then induce sensitivity (presumably during some critical developmental period). This would be a strictly sensory-neural version of the induction hypothesis formulated by Aslin and Pisoni (1980a; see also Pisoni, Lively, and Logan, this volume) to explain one possible form of experiential effect on perceptual development, which was based on Gottlieb's (1981) model of visual and auditory development in ducklings. Another alternative effect of auditory experience is that some phonetic contrasts are weak in acoustic salience and hence initially difficult for the infant to discriminate but that relatively frequent exposure to these can improve discrimination (Eilers and Oller 1988; Eilers, Wilson, and Moore 1979). The logical problem with the absolute notion of sensory-neural induction is that, if the sensory system is incapable of registering an acoustic property, the exposure is intrinsically ineffectual and incapable of inducing sensitivity. To illustrate, the human visual system cannot register ultraviolet wavelengths as visible light, and our abundant exposure to ultraviolet light from both natural and artificial sources never induces visual sensitivity to those wavelengths. The acquisition of perceptual ability to discover previously unrecognized organization in stimulation is, however, another matter, so long as the sensory system is already capable of registering the supporting physical evidence.
The emergence of such perceptual abilities certainly does occur, in association with adjustments of selective attention (Gibson 1966; Gibson and Gibson 1955). In these cases, the term perceptual learning is preferable to induction. Sensory capacity is necessary, though not sufficient, for perceptual learning, but perceptual learning cannot induce sensory capacity. In other words, if the system cannot register a stimulus property, experience will not change that fact. If the system can register the property, but the perceiver does not initially recognize the pattern of information it conveys, then experience can lead to perceptual learning. Confusion about induction may arise within a theoretical model to the extent that it conflates experiential effects on physiological mechanisms with experiential effects on perceptual skills (i.e., selective attention).

3. Space limitations prevent a full recapitulation of the discussion here; for additional details, the reader is referred to the original papers.

4. A very similar question about adult perception and production of nonnative sounds has been addressed in the work of Flege (1988; Flege and Eefting 1986, 1987), whose discussion of the issue is compatible in some respects with the view presented here but divergent in others.
5. The diversity of possible assimilations is due to the fact that the Zulu phone shares partial articulatory commonalities with each of these (and perhaps other) English consonants and is simultaneously discrepant from each of them on other articulatory properties; that is, the gestural coordination is unfamiliar with respect to English. As stated earlier, listeners should show sensitivity to both similarities and discrepancies between the nonnative phone and native categories. Furthermore, they may vary regarding which similarities or discrepancies capture their attention; hence they may differ regarding which of several possible native categories assimilates a given nonnative phone.

References

Aitchison, J. (1983). The articulate mammal. New York: Universe Books.
Aslin, R. N. and Pisoni, D. B. (1980a). Effects of early linguistic experience on speech discrimination by infants: A critique of Eilers, Gavin, and Wilson (1979). Child Development, 51, 107-112.
Aslin, R. N. and Pisoni, D. B. (1980b). Some developmental processes in speech perception. In G. Yeni-Komshian, J. F. Kavanagh, and C. A. Ferguson (eds.), Child phonology, vol. 2: Perception. New York: Academic Press.
Aslin, R. N., Pisoni, D. B., Hennessy, B. L., and Perey, A. J. (1981). Discrimination of voice onset time by human infants: New findings and implications for the effects of early experience. Child Development, 52, 1135-1145.
Aslin, R. N., Pisoni, D. B., and Jusczyk, P. W. (1983). Auditory development and speech perception in infancy. In M. M. Haith and J. J. Campos (eds.), Handbook of child psychology, vol. 2: Infancy and developmental psychobiology (pp. 573-687). New York: Wiley.
Bahrick, L. E. (1987). Infants' intermodal perception of two levels of temporal structure in natural events. Infant Behavior and Development, 10, 387-416.
Bahrick, L. E. (1988). Intermodal learning in infancy: Learning on the basis of two kinds of invariant relations in audible and visible events. Child Development, 59, 197-209.
Best, C. T. (1984). Discovering messages in the medium: Speech and the prelinguistic infant. In H. E. Fitzgerald, B. Lester, and M. Yogman (eds.), Advances in pediatric psychology, vol. 2 (pp. 97-145). New York: Plenum Press.
Best, C. T. (1990). Adult perception of nonnative contrasts differing in assimilation to native phonological categories. Journal of the Acoustical Society of America, 88, S177.
Best, C. T. (1991a). Language-specific effects on perception of discourse prosody categories by 2-4-month-olds. Presented at the International Conference on Event Perception and Action, Amsterdam, The Netherlands, August 1991.
Best, C. T. (1991b). Phonetic influences on the perception of nonnative speech contrasts by 6-8- and 10-12-month-olds. Presented at the meeting of the Society for Research in Child Development, Seattle, Wash., April 1991.
Best, C. T. and McRoberts, G. (1989). Phonological influences on infants' perception of two nonnative speech contrasts. Paper presented to the Society for Research in Child Development, Kansas City, Mo., April 1989.
Best, C. T., Goodell, E., and Wilkenfeld, D. (in preparation). Phonologically motivated substitutions in two- and five-year-olds' imitations of intervocalic alveolar stops.
Best, C. T., Levitt, A. G., and McRoberts, G. W. (1991). Examination of language-specific influences in infants' discrimination of prosodic categories. Proceedings of the 12th international congress of phonetic sciences. Aix-en-Provence, France.
Best, C. T., McRoberts, G. W., Goodell, E., Womer, J. S., Insabella, G., Kim, P., Klatt, L., Luke, S., and Silver, J. (1990). Infant and adult perception of nonnative speech contrasts differing in relation to the listener's native phonology. Infant Behavior and Development, 13 (Abstract).
Best, C. T., McRoberts, G. W., and Sithole, N. N. (1988). The phonological basis of perceptual loss for nonnative contrasts: Maintenance of discrimination among Zulu clicks by English-speaking adults and infants. Journal of Experimental Psychology: Human Perception and Performance, 14, 345-360.
Best, C. T., Morrongiello, B., and Robson, R. (1981). Perceptual equivalence of acoustic cues in speech and nonspeech perception. Perception and Psychophysics, 29, 191-211.
Brière, E. (1966). An investigation of phonological interference. Language, 42, 769-796.
Browman, C. P. and Goldstein, L. (1986). Towards an articulatory phonology. Phonology Yearbook, 3, 219-252.
Browman, C. P. and Goldstein, L. (1989). Articulatory gestures as phonological units. Phonology, 6, 201-251.
Browman, C. P. and Goldstein, L. (1992). Articulatory phonology: An overview. Phonetica, 49, 155-180.
Burnham, D. K. (1986). Developmental loss of speech perception: Exposure to and experience with a first language. Applied Psycholinguistics, 7, 207-240.
Carney, A. E., Widin, G. P., and Viemeister, N. F. (1977). Noncategorical perception of stop consonants differing in VOT. Journal of the Acoustical Society of America, 62, 961-970.
Dent, C. (1990). An ecological approach to language development: An alternative to functionalism. Developmental Psychobiology, 23, 679-703.
Diehl, R. and Kluender, K. (1989). On the objects of speech perception. Ecological Psychology, 1, 121-144.
Eilers, R. E. (1977). Context-sensitive perception of naturally produced stop and fricative consonants by infants. Journal of the Acoustical Society of America, 61, 1321-1336.
Eilers, R. E. and Minifie, F. D. (1975). Fricative discrimination in early infancy. Journal of Speech and Hearing Research, 18, 158-167.
Eilers, R. E. and Oller, D. K. (1988). Precursors to speech: What is innate and what is acquired? Annals of Child Development, 5, 1-32.
Eilers, R. E., Gavin, W. J., and Oller, D. K. (1982). Cross-linguistic perception in infancy: The role of linguistic experience. Journal of Child Language, 9, 289-302.
Eilers, R. E., Gavin, W. J., and Wilson, W. R. (1979). Linguistic experience and phonemic perception in infancy: A cross-linguistic study. Child Development, 50, 14-18.
Eilers, R. E., Gavin, W. J., and Wilson, W. R. (1980). Effects of early linguistic experience on speech discrimination by infants: A reply. Child Development, 51, 113-117.
Eilers, R. E., Morse, P. A., Gavin, W. J., and Oller, D. K. (1981). The perception of voice-onset-time in infancy. Journal of the Acoustical Society of America, 70, 955-965.
Eilers, R. E., Oller, D. K., Bull, D. H., and Gavin, W. J. (1984). Linguistic experience and infant speech perception: A reply to Jusczyk, Shea and Aslin (1984). Journal of Child Language, 11, 467-475.
Eilers, R. E., Wilson, W. R., and Moore, J. M. (1977). Developmental changes in speech discrimination in infants. Journal of Speech and Hearing Research, 20, 766-780.
Eilers, R. E., Wilson, W. R., and Moore, J. M. (1979). Speech discrimination in the language-innocent and the language-wise: A study in the perception of voice onset time. Journal of Child Language, 6, 1-18.
Eimas, P. D. (1974). Linguistic processing of speech by infants. In R. L. Schiefelbusch and L. L. Lloyd (eds.), Language perspectives: Acquisition, retardation, and intervention. Baltimore: University Park Press.
Eimas, P. D. (1975). Speech perception in early infancy. In L. B. Cohen and P. Salapatek (eds.), Infant perception: From sensation to cognition. New York: Academic Press.
Eimas, P. D. (1978). Developmental aspects of speech perception. In R. Held, H. Leibowitz, and H. L. Teuber (eds.), Handbook of sensory physiology, vol. 8: Perception. New York: Springer-Verlag.
Eimas, P. D., Siqueland, E. R., Jusczyk, P., and Vigorito, J. (1971). Speech perception in infants. Science, 171, 303-306.
Fant, G. (1960). Acoustic theory of speech production. The Hague: Mouton.
Ferguson, C. A. (1986). Discovering sound units and constructing sound systems: It's child's play. In J. S. Perkell and D. H. Klatt (eds.), Invariance and variability of speech processes (pp. 36-53). Hillsdale, N.J.: Erlbaum.
Flege, J. E. (1984). The detection of French accent by American listeners. Journal of the Acoustical Society of America, 76, 692-707.
Flege, J. E. (1987). A critical period for learning to pronounce foreign languages? Applied Psycholinguistics, 8, 162-177.
Flege, J. E. (1988). The production and perception of speech sounds in a foreign language. In H. Winitz (ed.), Human communication and its disorders: A review 1988. Norwood, N.J.: Ablex.
Flege, J. E. (1990). Perception and production: The relevance of phonetic input to L2 phonological learning. In C. Ferguson and T. Huebner (eds.), Crosscurrents in second language acquisition and linguistic theories. Philadelphia: John Benjamins.
Flege, J. E. and Eefting, W. (1986). Linguistic and developmental effects on stop perception and production by native speakers of English and Spanish. Phonetica, 43, 323-347.
Flege, J. E. and Eefting, W. (1987). The production and perception of English stops by Spanish speakers of English. Journal of Phonetics, 15, 67-83.
Flege, J. E., McCutcheon, M., and Smith, S. (1987). The development of skill in producing word-final English stops. Journal of the Acoustical Society of America, 82, 433-447.
Fodor, J. A. (1983). The modularity of mind. Cambridge, Mass.: MIT Press.
Fowler, C. A. (1986). An event approach to the study of speech perception from a direct-realist perspective. Journal of Phonetics, 14, 3-28.
Fowler, C. A. (1989). Real objects of speech perception: A commentary on Diehl and Kluender. Ecological Psychology, 1, 145-160.
Fowler, C. A. (1991). Sound-producing sources as objects of perception: Rate normalization and nonspeech perception. Journal of the Acoustical Society of America, 88, 1236-1249.
Fowler, C. A. and Dekle, D. J. (1991). Listening with eye and hand: Cross-modal contributions to speech perception. Journal of Experimental Psychology: Human Perception and Performance, 17, 816-828.
Fowler, C. A. and Smith, M. (1986). Speech perception as "vector analysis": An approach to the problems of segmentation and invariance. In J. Perkell and D. H. Klatt (eds.), Invariance and variability in speech processes (pp. 123-139). Hillsdale, N.J.: Erlbaum.
Fowler, C. A. and Turvey, M. T. (1978). Skill acquisition: An event approach with special reference to searching for the optimum of a function of several variables. In G. Stelmach (ed.), Information processing in motor control and learning. New York: Academic Press.
Fowler, C. A., Best, C. T., and McRoberts, G. W. (1990). Young infants' perception of liquid coarticulatory influences on following stop consonants. Perception and Psychophysics, 48, 559-570.
Gardner, B. T. and Gardner, R. A. (1973). Evidence for sentence constituents in the early utterances of child and chimpanzee. Journal of Experimental Psychology: General, 104, 244-267.
Gibson, J. J. (1966). The senses considered as perceptual systems. Boston: Houghton Mifflin.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
Gibson, J. J. and Gibson, E. J. (1955). Perceptual learning: Differentiation or enrichment? Psychological Review, 62, 32-41.
Gillette, S. (1980). Contextual variation in the perception of L and R by Japanese and Korean speakers. Minnesota Papers in Linguistics and the Philosophy of Language, 6, 59-72.
Goodell, E. and Studdert-Kennedy, M. (1990). From phonemes to words or words to phonemes: How do children learn to talk? Paper presented at the International Conference on Infant Studies, Montreal, April 1990.
Goto, H. (1971). Auditory perception by normal Japanese adults of the sounds "L" and "R." Neuropsychologia, 9, 317-323.
Gottlieb, G. (1981). The roles of early experience in species-specific perceptual development. In R. N. Aslin, J. R. Alberts, and M. R. Peterson (eds.), Development of perception: Psychobiological perspectives, vol. 1: Audition, somatic perception, and the chemical senses. New York: Academic Press.
Grieser, D. A. and Kuhl, P. K. (1989). Categorization of speech by infants: Support for speech-sound prototypes. Developmental Psychology, 25, 577-588.
Insabella, G. (1990). Four-year-olds' perception of nonnative phonetic contrasts. Unpublished senior honors thesis, Wesleyan University, Middletown, Conn.
Insabella, G. and Best, C. T. (1990). Four-year-olds' perception of nonnative contrasts differing in phonological assimilation. Paper presented at the meeting of the Acoustical Society of America, San Diego, November 1990.
Jusczyk, P. W. (1981). Infant speech perception: A critical appraisal. In P. D. Eimas and J. L. Miller (eds.), Perspectives on the study of speech. Hillsdale, N.J.: Erlbaum.
Jusczyk, P. W. (1985). On characterizing the development of speech perception. In J. Mehler and R. Fox (eds.), Neonate cognition: Beyond the blooming, buzzing confusion (pp. 199-229). Hillsdale, N.J.: Erlbaum.
Jusczyk, P. W. (1986). Toward a model of the development of speech perception. In J. S. Perkell and D. H. Klatt (eds.), Invariance and variability in speech processes. Hillsdale, N.J.: Erlbaum.
Jusczyk, P. W., Shea, S. L., and Aslin, R. N. (1984). Linguistic experience and infant speech perception: A reexamination of Eilers, Gavin and Oller (1982). Journal of Child Language, 11, 453-466.
Kewley-Port, D. and Preston, M. S. (1974). Early apical stop production: A voice onset time analysis. Journal of Phonetics, 2, 195-210.
Kuhl, P. K. (1980). Perceptual constancy for speech-sound categories in early infancy. In G. H. Yeni-Komshian, J. F. Kavanagh, and C. A. Ferguson (eds.), Child phonology, vol. 2: Perception (pp. 41-66). New York: Academic Press.
Kuhl, P. K. (1987). Perception of speech and sound in early infancy. In P. Salapatek and L. Cohen (eds.), Handbook of infant perception, vol. 2 (pp. 275-382). New York: Academic Press.
Kuhl, P. K. (1991). Human adults and human infants show a "perceptual magnet effect" for the prototypes of speech categories; monkeys do not. Perception and Psychophysics, 50, 93-107.
Kuhl, P. K. and Meltzoff, A. N. (1982). The bimodal perception of speech in infancy. Science, 218, 1138-1144.
Kuhl, P. K. and Meltzoff, A. N. (1984). The intermodal representation of speech in infants. Infant Behavior and Development, 7, 361-381.
Ladefoged, P. (1981). A course in phonetics (2nd ed.). New York: Harcourt Brace Jovanovich.
Lalonde, C. and Werker, J. (1990). Cognitive/perceptual integration of three skills at 9 months. Paper presented at the International Conference on Infant Studies, Montreal, April 1990.
Lasky, R. E., Syrdal-Lasky, A., and Klein, R. E. (1975). VOT discrimination by 4- to 6½-month-old infants from Spanish environments. Journal of Experimental Child Psychology, 20, 215-225.
Leonard, L. B., Fey, M. E., and Newhoff, M. (1981). Phonological considerations in children's early imitative and spontaneous speech. Journal of Psycholinguistic Research, 10, 123-133.
Leonard, L. B., Schwartz, R. G., Folger, M. K., and Wilcox, M. J. (1978). Some aspects of child phonology in imitative and spontaneous speech. Journal of Child Language, 5, 403-415.
Levitt, A., Jusczyk, P., Murray, J., and Carden, G. (1988). Context effects in two-month-old infants' perception of labiodental/interdental fricative contrasts. Journal of Experimental Psychology: Human Perception and Performance, 14, 361-368.
Liberman, A. M. and Mattingly, I. G. (1985). The motor theory of speech perception revised. Cognition, 21, 1-36.
Liberman, A. M. and Mattingly, I. G. (1989). A specialization for speech perception. Science, 245, 489-494.
Liberman, A. M., Cooper, F. S., Shankweiler, D. P., and Studdert-Kennedy, M. (1967). Perception of the speech code. Psychological Review, 74, 431-461.
Lieberman, P. (1975). On the origins of language: An introduction to the evolution of human speech. New York: Macmillan.
Lindblom, B., MacNeilage, P., and Studdert-Kennedy, M. (1983). Self-organizing processes and the explanation of phonological universals. In B. Butterworth, B. Comrie, and O. Dahl (eds.), Universals workshop. The Hague: Mouton.
Lisker, L. and Abramson, A. S. (1970). The voicing dimension: Some experiments on comparative phonetics. Proceedings of the 6th international congress of phonetic sciences, Prague, 1967 (pp. 563-567). Prague: Academia.
Locke, J. L. (1983). Phonological acquisition and change. New York: Academic Press.
MacDonald, J. and McGurk, H. (1978). Visual influences on speech perception processes. Perception and Psychophysics, 24, 253-257.
McClaskey, C. L., Pisoni, D. B., and Carrell, T. D. (1983). Effects of transfer of training on identification of a new linguistic contrast in voicing. Research on Speech Perception Progress Report (Indiana University), 6, 205-234.
McCune, L. and Vihman, M. (1987). Vocal motor schemes. Papers and Reports in Child Language Development, 26.
McGurk, H. and MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264, 746-748.
MacKain, K. S. (1982). Assessing the role of experience on infants' speech discrimination. Journal of Child Language, 9, 527-542.
MacKain, K. S., Best, C. T., and Strange, W. (1981). Categorical perception of English /r/ and /l/ by Japanese bilinguals. Applied Psycholinguistics, 2, 369-390.
MacKain, K. S., Studdert-Kennedy, M., Spieker, S., and Stern, D. (1983). Infant intermodal speech perception is a left hemisphere function. Science, 219, 1347-1349.
Macken, M. A. (1979). Developmental reorganization of phonology: A hierarchy of basic units of acquisition. Lingua, 49, 11-49.
Macken, M. A. and Ferguson, C. A. (1983). Cognitive aspects of phonological development: Model, evidence, and issues. In K. E. Nelson (ed.), Children's language. Hillsdale, N.J.: Erlbaum.
Mann, V. A. (1980). Influence of preceding liquid on stop consonant perception. Perception and Psychophysics, 28, 407-412.
Mann, V. (1986). Distinguishing universal and language-dependent levels of speech perception: Evidence from Japanese listeners' perception of "l" and "r." Cognition, 24, 169-196.
Massaro, D. M. (1987). Speech perception by ear and eye: A paradigm for psychological inquiry. Hillsdale, N.J.: Erlbaum.
Mattingly, I. G. and Liberman, A. M. (1988). Specialized perceiving systems for speech and other biologically significant sounds. In G. M. Edelman, W. E. Gall, and W. M. Cowan (eds.), Auditory function: Neurobiological bases of hearing (pp. 775-793). New York: Wiley.
Mehler, J., Jusczyk, P., Lambertz, G., Halsted, N., Bertoncini, J., and Amiel-Tison, C. (1988). A precursor of language acquisition in young infants. Cognition, 29, 143-178.
Menn, L. (1971). Phonotactic rules in beginning speech. Lingua, 26, 225-251.
Menn, L. (1978). Phonological units in beginning speech. In A. Bell and J. Hooper (eds.), Syllables and segments. Amsterdam: North-Holland.
Menn, L. (1986). Language acquisition, aphasia and phonotactic universals. In F. R. Eckman, E. A. Moravcsik, and J. R. Wirth (eds.), Markedness (pp. 241-255). New York: Plenum Press.
Miyawaki, K., Strange, W., Verbrugge, R., Liberman, A. M., Jenkins, J. J., and Fujimura, O. (1975). An effect of linguistic experience: The discrimination of [r] and [l] by native speakers of Japanese and English. Perception and Psychophysics, 18, 331-340.
Mochizuki, M. (1981). The identification of /r/ and /l/ in natural and synthesized speech. Journal of Phonetics, 9, 283-303.
Moskowitz, B. A. (1980). Idioms in phonology acquisition and phonological change. Journal of Phonetics, 8, 69-83.
Nittrouer, S., Studdert-Kennedy, M., and McGowan, R. S. (1989). The emergence of phonetic segments: Evidence from the spectral structure of fricative-vowel syllables spoken by children and adults. Journal of Speech and Hearing Research, 32(1), 120-132.
Oller, D. K. (1980). The emergence of the sounds of speech in infancy. In G. Yeni-Komshian, J. F. Kavanagh, and C. A. Ferguson (eds.), Child phonology, vol. 1: Production. New York: Academic Press.
Oyama, S. (1976). A sensitive period for the acquisition of a nonnative phonological system. Journal of Psycholinguistic Research, 5, 261-283.
Pisoni, D. B. and Lazarus, J. H. (1974). Categorical and noncategorical modes of speech perception along the voicing continuum. Journal of the Acoustical Society of America, 55, 328-333.
Pisoni, D. B., Aslin, R. N., Perey, A. J., and Hennessy, B. L. (1982). Some effects of laboratory training on identification and discrimination of voicing contrasts in stop consonants. Journal of Experimental Psychology: Human Perception and Performance, 8, 297-314.
Premack, D. (1971). Language in chimpanzee? Science, 172, 808-822.
Rosenblum, L. (1987). Towards an ecological alternative to the motor theory of speech perception. Perceiving-Acting Workshop Review (Tech. Rep. CESPA, University of Connecticut), 2, 25-28.
Sheldon, A. and Strange, W. (1982). The acquisition of /r/ and /l/ by Japanese learners of English: Evidence that speech production can precede speech perception. Applied Psycholinguistics, 3, 243-261.
Stark, R. E. (1980). Stages of speech development in the first year of life. In G. H. Yeni-Komshian, J. F. Kavanagh, and C. A. Ferguson (eds.), Child phonology, vol. 1: Production. New York: Academic Press.
Strange, W. and Dittmann, S. (1984). Effects of discrimination training on the perception of /r-l/ by Japanese adults learning English. Perception and Psychophysics, 36, 131-145.
Streeter, L. A. (1976a). Kikuyu labial and apical stop discrimination. Journal of Phonetics, 4, 43-49.
Streeter, L. A. (1976b). Language perception of 2-month-old infants shows effects of both innate mechanisms and experience. Nature, 259, 39-41.
Studdert-Kennedy, M. (1981). The emergence of phonetic structure. Cognition, 10, 301-306.
Studdert-Kennedy, M. (1985). Perceiving phonetic events. In W. H. Warren and R. E. Shaw (eds.), Persistence and change. Hillsdale, N.J.: Erlbaum.
Studdert-Kennedy, M. (1986a). Development of the speech perceptuomotor system. In B. Lindblom and R. Zetterstrom (eds.), Precursors of early speech. New York: Stockton Press.
Studdert-Kennedy, M. (1986b). Sources of variability in early speech development. In J. S. Perkell and D. H. Klatt (eds.), Invariance and variability in speech processes. Hillsdale, N.J.: Erlbaum.
Studdert-Kennedy, M. (1987). The phoneme as a perceptuomotor structure. In A. Allport, D. MacKay, W. Prinz, and E. Scheerer (eds.), Language perception and production (pp. 67-84). London: Academic Press.
Studdert-Kennedy, M. (1989). The early development of phonological form. In C. von Euler, H. Forssberg, and H. Lagercrantz (eds.), Neurobiology of early infant behavior (pp. 287-301). Basingstoke, England: Macmillan.
Studdert-Kennedy, M. (1991). Language development from an evolutionary perspective. In N. Krasnegor, D. Rumbaugh, R. Schiefelbusch, and M. Studdert-Kennedy (eds.), Language acquisition: Biological and behavioral determinants. Hillsdale, N.J.: Erlbaum.
Tahta, S., Wood, M., and Loewenthal, K. (1981). Foreign accents: Factors relating to transfer of accent from the first language to a second language. Language and Speech, 24, 265-272.
Tees, R. C. and Werker, J. F. (1984). Perceptual flexibility: Maintenance or recovery of the ability to discriminate nonnative speech sounds. Canadian Journal of Psychology, 38, 579-590.
Terrace, H. S., Petitto, L. A., Sanders, R. J., and Bever, T. G. (1979). Can an ape create a sentence? Science, 206, 891-902.
Trehub, S. E. (1973). Infants' sensitivity to vowel and tonal contrasts. Developmental Psychology, 9, 91-96.
Trehub, S. E. (1976). The discrimination of foreign speech contrasts by adults and infants. Child Development, 47, 466-472.
Waterson, N. (1971). Child phonology: A prosodic view. Journal of Linguistics, 7, 179-211.
Werker, J. F. (1989). Becoming a native listener. American Scientist, 77, 54-59.
Werker, J. F. and Lalonde, C. E. (1988). Cross-language speech perception: Initial capabilities and developmental change. Developmental Psychology, 24, 672-683.
Werker, J. and Logan, J. (1985). Cross-language evidence for three factors in speech perception. Perception and Psychophysics, 37, 35-44.
Werker, J. F. and Tees, R. C. (1984a). Cross-language speech perception: Evidence for perceptual reorganization during the first year of life. Infant Behavior and Development, 7, 49-63.
Werker, J. F. and Tees, R. C. (1984b). Phonemic and phonetic factors in adult cross-language speech perception. Journal of the Acoustical Society of America, 75, 1866-1878.
Werker, J. F., Gilbert, J. H. V., Humphrey, K., and Tees, R. C. (1981). Developmental aspects of cross-language speech perception. Child Development, 52, 349-355.
PART III
Interactions of Linguistic Levels: Influences on Perceptual Development
Chapter 7
Infant Speech Perception and the Development of the Mental Lexicon

Peter W. Jusczyk

In the course of about two years, infants learn to recognize hundreds of words in the language spoken in their environments. How do they accomplish this? How does the ability to recognize fluent speech develop? To the average listener on the street, such questions might appear to be trivial rather than troublesome. In our everyday dealings with language, distinctions among different words seem clear enough. We are usually only conscious of hearing speech as a series of discrete words occurring sequentially in sentences. Only when we are faced with the prospect of trying to understand utterances in a foreign language do we confront the fact that talkers rarely pause between successive words. Similarly, the differences between utterances of the same words by different talkers usually go unnoticed unless they
have a marked accent or speak a dialect that differs from our own. Somehow, in mastering the sound structure of the language that we speak, we have managed to surmount these potential difficulties for word recognition.

An answer to the question of how fluent speech recognition develops must take into account that infants begin with the potential to learn any language. Thus, any innate abilities that they have to aid them in acquiring fluency in word recognition must have a universal character. At the same time, natural languages differ greatly in how their sound structures are organized. The strings of consonant sounds that are permitted in syllables in Polish are not permitted in English. Alterations in pitch contour determine which word meanings are accessed in Chinese but serve a different function in English. In some languages, the location of the main stress in words is highly predictable (e.g., Czech), whereas in others it may be quite variable (e.g., Hebrew or English). In English, semantically related forms often share the same core sequence of consonants and vowels, whereas in Semitic languages, root morphemes exist as stable sequences of consonants, with changes among forms signaled by internal vowel differences (Berman 1986). Consequently, explaining the development of fluent speech recognition involves more than identifying a universal set of perceptual capacities. It requires explaining how such capacities lead to the extraction of the sound structure of particular languages. Thus, these capacities must be sensitive to the input in a way that allows for the rapid emergence of optimal strategies for dealing with the sound structure of the language found in the infant's environment.

Much of what is known about infant speech perception capacities comes from studies involving the discrimination of minimal-pair differences in syllables (see Aslin 1987; Aslin, Pisoni, and Jusczyk 1983; Kuhl 1987 for recent reviews). Such studies reveal something about the perceptual limits on infants' ability to discriminate differences in speech sounds. Consequently, these studies provide information about what sorts of word contrasts are perceptible for infants. However, the implications of such findings for understanding the development of word recognition are not straightforward. For instance, although the distinctions tested can be described in terms of phonetic features or phonetic segments, it does not necessarily follow that infants perceive them in this way (Bertoncini et al. 1988; Jusczyk 1986; Kemler Nelson 1984; Studdert-Kennedy 1986). Instead, infants' discrimination of isolated syllables could rely on some holistic comparison process that does not involve the breakdown of the syllable into a detailed componential description. In addition, by presenting syllables in isolation in a quiet background, most infant speech perception studies bypass one of the critical stages in word recognition, namely, the speech segmentation problem. Under normal listening conditions, syllables are embedded in a fluent stream. Hence, responses to repeated presentations of isolated speech syllables may not be indicative of the ability to detect contrasts among words occurring in a stream of speech in a noisy environment.
Prerequisites for Word Recognition in Fluent Speech

To gain a better appreciation for the contribution made by infant speech-perception capacities to the development of the mental lexicon, let us consider what abilities are necessary for word recognition. Clearly, one requirement is to discriminate utterances of one word type from those of another word type (e.g., bat from pat). In addition, one should be able to ignore the acoustic differences that arise in uttering the same word from one occasion to another or by one speaker or another. Thus, one must be able to categorize correctly all the different tokens of the same utterance type. An individual who can discriminate and categorize utterances in these ways can be said to recognize words in the language. Of course, a listener must be able to locate information relevant to the discrimination and categorization processes in the speech stream; that is, word recognition depends on some ability to achieve a reasonable segmentation of the signal into word-sized units. However, even the abilities to segment the speech stream into units that correspond to different word types in the language and to discriminate and categorize utterances are insufficient to achieve word recognition. Some sort of representation of the sound pattern of the word must also be stored in memory along with its meaning. This representation must be distinctive enough to allow it to be picked out from similar-sounding words during speech
recognition. Consequently, an important part of the recognition process depends on how the sound properties of the word are encoded in memory. Incoming speech signals must be matched against stored perceptual representations to be recognized and for their meanings to be recovered. Thus, knowledge of the nature of the information from the speech signal that is encoded by the infant, as well as of how that information is selected for encoding, is important in understanding how children come to recognize words. In order to address these developmental issues, we need to know something about attentional and memory processes in infants as they relate to speech input. More specifically, to which aspects of the speech signal do infants attend? Further, what information about speech sounds is retained in secondary memory?

This chapter focuses on the representation of sound patterns in the mental lexicon and how they change as the child gains knowledge of his or her native language. We have identified a number of functions involved in the development of the mental lexicon, including discrimination, categorization, segmentation, attention, and representation and memory. In what follows, we review what is known about the nature of these functions during infancy, particularly as they relate to speech processing. We will consider which aspects of the stream of speech are likely to be extracted and why, how those sounds are perceived and encoded, and what sorts of linguistic units are stored in memory. We conclude with a discussion of a possible model of the way that the mental lexicon develops.

Discrimination

This is the facet of speech perception that has been most thoroughly researched in studies with infants. We will not attempt a complete review of these results here. Instead, we consider those results that are most germane to understanding the development of the lexicon (for a more complete review, see Aslin, Pisoni, and Jusczyk 1983). The pioneering investigations of Eimas and his colleagues explored the extent to which infants in the first four months of life are capable of discriminating syllables that contrast along various phonetic dimensions (Eimas 1974, 1975; Eimas and Miller 1980b; Eimas et al. 1971). Eimas et al. (1971) studied the ability of infants to discriminate two consonant-vowel (CV) syllables differing only in whether the initial consonant was voiced (e.g., [ba]) or voiceless (e.g., [pa]). By carefully selecting their stimuli, Eimas et al. were able to examine how infants respond to voicing differences between tokens from different phonetic categories (e.g., [ba] versus [pa]), as well as to ones among tokens chosen from the same phonetic category (e.g., [ba1] versus [ba2]). Using the high-amplitude sucking (HAS) procedure, they found that 1- and 4-month-old infants showed significant increases in sucking in response to voicing changes between syllables from different phonetic categories. By comparison, same-aged infants did not respond in the same way to equivalent voicing changes between tokens chosen from the same phonetic category. By responding to voicing changes only when they signaled a phonetically relevant contrast, infants behaved much like adults who demonstrate categorical perception of speech (Liberman et al. 1967).
In subsequent studies, Eimas showed that infants displayed similar sensitivity to contrasts between syllables differing on phonetic dimensions such as place of articulation (e.g., [b] versus [d]; see Eimas 1974) and manner of articulation (e.g., [r] versus [l]; see Eimas 1975). More recent research has shown that even newborn infants demonstrate this capacity for distinguishing syllables that differ minimally along phonetic dimensions (e.g., Bertoncini et al. 1987; Bertoncini et al. 1989). Infants are sensitive not only to acoustic differences that distinguish words in their native language environment but also to those that occur in other languages (e.g., Aslin et al. 1981; Best, McRoberts, and Sithole 1988; Lasky, Syrdal-Lasky, and Klein 1975; Streeter 1976; Trehub 1976; Werker et al. 1981). Hence, the basic perceptual capacities of infants are such as to allow them to detect the kinds of contrasts that could potentially distinguish words in any language.

Typically, most research on discriminative capacities has involved syllables that contrast only in their initial segments. Of course, words in natural languages are also distinguished in other ways. The English minimal pair bad and bag differs in the syllable-final position. In Mandarin Chinese, some words are distinguished not by differences in phonetic segments but by differences in the tone (i.e., pitch contour) of the segments. Finally, in some languages, word distinctions can also
be cued by differences in the stress patterns of syllables, such as the distinction in English between ˈcontract (the noun) and conˈtract (the verb). It is noteworthy that there is evidence that, as early as the first few months of life, infants are sensitive to contrasts based on such distinctions. For example, 2-month-olds are able to distinguish speech sounds that contrast in their final segments (Jusczyk 1977). Similarly, it has been shown that young infants distinguish syllables that contrast solely in their pitch contours (Kuhl and Miller 1982; Morse 1972). In addition, there is evidence that infants discriminate multisyllabic stimuli that differ only in the location of syllable stress (Jusczyk and Thompson 1978; Spring and Dale 1977).

The conclusion that emerges from studies of speech discrimination is that, right from birth, infants possess the perceptual capacities necessary to distinguish between different words in a language. The course that the development of these capacities takes is not one of progressive differentiation, whereby infants become able to make finer and finer distinctions between sounds. Rather, the developmental course is more consistent with what has been termed learning by selection (Changeux, Heidmann, and Patte 1984) or pruning (Cooper and Aslin, in press). That is, sensitivity to distinctions outside the native language appears to be attenuated as the language is acquired (Werker and Lalonde 1988; Werker and Tees 1984a, 1984b; cf. Best, McRoberts, and Sithole 1988). In any case, infants have the prerequisite perceptual capacities to distinguish between different lexical items long before they actually begin to learn words.

Categorization

In order to be able to learn the words of a language, it is not sufficient to be sensitive to differences between utterances. One must also be selective about just which differences are important with respect to the meaning of the utterance. As noted above, there is evidence that some kinds of acoustic differences among syllables are not discriminated by infants, viz., differences between two tokens chosen from within the same category (Eimas et al. 1971). This sort of case could be attributed to a perceptual limitation on infants' speech-processing capacities. In other words, it could be argued that infants are simply insensitive to these kinds of within-category differences.1

A more difficult problem is posed when the tokens to be grouped into the same word type clearly are discriminable. This is almost certainly so whenever we must deal with utterances produced by more than a single talker. To recognize that different talkers' utterances of dog are meant to refer to the same type of object, we must put aside perceptible differences in the tokens that result from the unique characteristics of each talker's vocal apparatus. Of course, the problem here is a familiar one in the study of perceptual development: it is the problem of perceptual constancy as it applies to speech-sound categories. Just as the apparent shape of an object undergoes changes with shifts in visual perspective, so too does the acoustic shape of a word vary with changes in talkers. The extent to which infants are able to ignore variability introduced into the acoustic shapes of words by changes in speaking voices has been the subject of research conducted by Kuhl (1979, 1983). She trained 6-month-olds to respond to a distinction between two vowels such as [a] and [i].
Specifically, the infants had to turn their heads toward a display box whenever a repeating background stimulus (e.g., [a]) changed to a different vowel (e.g., [i]). Correct responses were rewarded by the presentation of an interesting visual display. Once the infants had been successfully trained on a pair of tokens produced by a single talker, new tokens from different talkers were added to both the background and change vowels. The question of interest was whether infants could continue to make the discrimination based on vowel identity despite the increased acoustic variation due to constantly changing talkers' voices. In fact, Kuhl's subjects succeeded in continuing to track the vowel change despite the variability due to changing talkers. Moreover, in a subsequent study, infants were able to detect a comparable change between [a] and [ɔ] despite the fact that varying the talkers produced a considerable acoustic overlap among the tokens from these two vowel categories (Kuhl 1983).

The infants in Kuhl's studies were about 6 months of age, so one cannot say definitively whether infants are innately endowed with the ability to compensate for changes in talkers' voices or whether they have somehow learned to do so. However, other evidence suggests that the former alternative is probably the correct one. A recent study in my laboratory (Jusczyk, Pisoni, and Mullennix 1992; see also Jusczyk, in press) demonstrated that even 2-month-old infants display some capacity for coping with talker variation. Using the HAS procedure, we found that 2-month-old infants were able to detect a distinction between [bʌg] and [dʌg] and ignore talker variation, even when listening to tokens produced by twelve different talkers (six males and six females). Moreover, we demonstrated that the infants' performance on this contrast was not attributable to an inability to distinguish among the tokens produced by different talkers. Thus, when confronted with tokens from the same syllable type (e.g., two different talkers saying the syllable [bʌg]), the infants discriminated the syllables. So, the acoustic differences among the tokens were perceptible to the infants. Yet, they were still able to detect the phonetic contrast in the midst of the acoustic variation produced by continually changing talkers. That is, by 2 months of age, infants are flexible enough to shift levels of attention depending on the levels of variability in the signal. They are already sensitive to different levels of categorization and can change their focus of attention.

In addition to the problems posed by changes in talkers' voices, there are potential problems for word recognition that are introduced by variations that occur within a single talker's voice. Perhaps the best studied of these involves changes in speaking rate (Miller 1981, 1987). Certain speech contrasts, such as the one between [b] and [w], depend on the rate at which spectral cues undergo change. For this reason, perception of these cues could be affected by changes in speaking rate. Hence, whether a particular rate of spectral change is characteristic of a [b] or a [w] depends on the overall rate of speech (Miller and Liberman 1979). Eimas and Miller (1980a; Miller and Eimas 1983) found that 2- to 3-month-olds give evidence of making perceptual adjustments for changes in speaking rate. Changes correlated with differences in speaking rate produced significant shifts in the locus of infants' perceptual boundaries, similar to those reported for adult listeners (Miller and Liberman 1979). Thus, within the first few months of life, infants appear to process speech in a way that allows them to compensate for changes in speaking rate.

Overall, then, there is evidence that infants possess the necessary means for coping with sources of acoustic variability in the speech signal. Specifically, they appear already to have the means for recognizing utterances of a particular word type, despite the acoustic variation that accompanies changes in speaking rate or in talkers' voices. This kind of ability is essential to developing a lexicon that correctly links sounds to meanings during fluent speech perception.

Segmentation

One of the least-studied aspects of infant speech perception concerns the ability to segment the speech stream into units that correspond to linguistically meaningful groupings, such as clauses, phrases, and words. In part, the lack of information about this subject stems from the methods used to study infant speech perception. In an effort to obtain information about the way that infants respond to subtle acoustic variations in speech, most studies have presented infants with isolated syllables so as to control for possible extraneous variables. Recently, however, some investigations devoted to the way that infants respond to the longer stretches of speech that are characteristic of typical conversations have been reported (Fernald 1985; Hirsh-Pasek et al. 1987; Mehler et al. 1988).
The ability to segment fluent speech into meaningful units is critical for the development of word recognition. In fluent speech, boundaries between distinct words are seldom observed (Cole and Jakimik 1980; Klatt 1974, 1977); there are no clear cues marking the location of discrete words. For this reason, it has been argued that lexical retrieval during fluent speech perception involves first parsing the signal and then matching the obtained units to existing items in the lexicon (Church 1987; Greenspan, Nusbaum, and Pisoni 1988). Indeed, it is hard to see how the language learner could actually learn lexical items without first being able to achieve some reasonable parsing of the signal into wordlike units. Even an alternative view that holds that words are learned by rote in isolation would have to explain how the child could eventually match the acoustic-phonetic representations of some items to the appropriate stretches of speech when the words occur in sentence contexts. The speech segmentation problem is well known to designers trying to develop systems of speech recognition by machines (Klatt 1974; Reddy 1974, 1976; Vassiere 1981; Waibel 1986), in addition to those working on human speech recognition (Bond and Garnes 1980; Cutler and Carter 1987; Cutler and Norris 1988; Grosjean and Gee 1987; Nakatani and Dukes 1977; Nakatani and Schaffer 1978). One general line of investigation has focused on the role of prosodic cues in the process of speech segmentation. The notion is that changes in intonation contour, stress patterns, syllable durations, and pausing might serve as sources of information in segmenting the speech signal into linguistically relevant packets, such as words and phrases (Cooper and Paccia-Cooper 1980; Cutler and Carter 1987; Grosjean and Gee 1987; Klatt 1976; Nakatani and Dukes 1977).
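To make the parse-then-match idea concrete, the following toy sketch may help. It is not from the chapter: the pseudo-phonemic strings and the four-item lexicon are invented for illustration, and real models operate over acoustic-phonetic representations rather than letter strings.

    # A toy version of parse-then-match lexical retrieval: enumerate every
    # way an unbroken phone string can be carved into known lexical items.

    LEXICON = {"greI": "gray", "deI": "day", "greId": "grade", "eI": "a"}

    def parses(phones, prefix=()):
        """Yield each segmentation of `phones` into items from LEXICON."""
        if not phones:
            yield prefix           # the whole string has been consumed
            return
        for n in range(1, len(phones) + 1):
            candidate = phones[:n]
            if candidate in LEXICON:
                # the prefix matches a word; recursively parse the remainder
                yield from parses(phones[n:], prefix + (LEXICON[candidate],))

    # With no boundaries marked, [greIdeI] is globally ambiguous:
    for parse in parses("greIdeI"):
        print(parse)               # ('gray', 'day') and ('grade', 'a')

Even a four-word lexicon yields two complete parses of the same stretch of signal, which is why cues to word boundaries, prosodic or otherwise, are of such interest.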
such as words and phrases (Cooper and Paccia-Cooper 1980; Cutler and Carter 1987; Grosjean and Gee 1987; Klatt 1976; Nakatani and Dukes 1977). page_234 Page 235 The role of prosody in the acquisition of language in general and in the development of speech perception in particular is beginning to receive more attention. Fernald (1985; Fernald and Kuhl 1987) has demonstrated that infants are especially attracted to the wide ranges in pitch and intonation that are found in speech directed to children. Moreover, others have noted that child-directed speech tends to be slower, more carefully enunciated and contains more pauses than adultdirected speech (Broen 1972; Garnica 1977; Morgan 1986; Snow 1972). Thus, prosodic cues tend to be exaggerated in the kind of speech that is directed toward language learners. The exaggerated nature of the cues may render them more detectable for the infant. This has led some (Hirsh-Pasek et al. 1987; Jusczyk and Bertoncini 1988; Kemler Nelson et al. 1989) to argue that such cues may help infants perceptually to segment speech into units that correspond roughly to important grammatical units. Some empirical support for the idea that prosody may help infants to segment speech comes from research conducted by Jusczyk, Kemler Nelson, Hirsh-Pasek, and their colleagues (Hirsh-Pasek et al. 1987; Jusczyk 1989; Jusczyk et al. 1992; Kemler Nelson et al. 1989). In particular, Hirsh-Pasek et al. (1987) found that infants as young as 6 months of age are sensitive to prosodic correlates of clausal units in speech. Speech samples presented to the infants were selected from recordings of child-directed speech. The samples included a series of artificial 1-sec pauses that were placed in either the middle or at the ends of clauses. Hirsh-Pasek et al. reasoned that, if infants detect in the prosody that some points in the flow of speech provide a more appropriate stopping point than others, then having a pause at the end of a clause would seem more natural than one appearing between two words in the middle of a clause. The results confirmed their hypothesis. The infants showed a significant preference for listening to the samples with the pauses placed at the ends. Subsequent research demonstrated that the effects are much more pronounced for childdirected speech than they are for adult-directed speech (Kemler Nelson et al. 1989). Moreover, the pattern of preferences holds even when the phonetic content is removed by low-pass filtering, suggesting that prosody is playing the key role (Jusczyk 1989). More pertinent for the present discussion are the results of an additional study on the perception of cues to word boundaries (Kemler Nelson 1989). Once again, artificial 1-sec pauses were inserted into child-directed word samples, but this time the insertions were made according to lexical structure. The pauses were placed either between two different words or between two syllables of the same word. The issue was whether page_235 Page 236 or not infants are sensitive to cues in the speech stream that might distinguish boundaries of words from within-word locations. Three different age groups of subjects were tested on these materials, 4½-, 9-, and 11-month-olds. Only the latter group gave evidence of a reliable preference for the samples in which the pauses occurred between different words. This finding suggests that infants may become sensitive to cues for segments in the speech stream such as words at an age considerably later than they are for larger segments such as clauses. 
Moreover, there are indications that the bases for segmenting words from the speech stream may differ from those for larger units, such as clauses and phrases. Specifically, an additional experiment was conducted in which the materials for the word study were low-pass filtered at 400 Hz. This effectively removed most of the phonetic content, leaving intact only the prosody. Under these circumstances, even the 11-month-olds did not show a significant preference for the samples with the pauses between two words. This suggests that some information other than prosody may be critical for infants to segment speech into words.

Besides prosodic cues, another potentially exploitable source of information for segmenting speech into words has to do with the sequencing and positioning of sounds that the language permits within the same words and syllables (i.e., the phonotactics of the language). Church (1987) points out that the phonetic environments in which one finds a particular allophone (i.e., a variant of a phoneme) are often quite restricted in a given language. For example, in English, /t/ is aspirated [tʰ] when it occurs in the initial position of a syllable but not when it occurs in a syllable-initial cluster following /s/. Familiarity with these environments, and with other constraints such as the types of syllable structures permitted in the language, could greatly aid in parsing the incoming signal into distinct words.
In order to understand whether phonotactic constraints play a role in infants' segmentation of wordlike units from fluent speech, it is necessary first to determine when infants become sensitive to the restrictions on the sequencing of sounds in language. In fact, there are two aspects to this issue of restrictions on the sequences of sounds that are permitted. One aspect concerns any general restrictions that might hold across sequences of sounds in any language (e.g., universal constraints on syllable structure). The other aspect has to do with constraints on the sequences of sounds that can appear in words in a particular language.

Bertoncini and Mehler (1981) conducted a study that bears on the issue of sensitivity to general constraints on syllable structure. They examined whether 2-month-olds discriminate contrasts embedded in strings that had either a permissible or an impermissible syllable form. They found that French infants were able to discriminate a contrast involving the permissible syllable forms [tæp] and [pæt]. By comparison, a contrast involving the impermissible syllable forms [tsp] and [pst] was not discriminated. Bertoncini and Mehler took this to be evidence that syllables are perceptual units for infants. In the present context, the finding may be an indication that only sounds having certain acoustic properties (e.g., the presence of a vowel nucleus) will be treated as possible candidates for syllables in language. The tendency to treat only signals meeting certain acoustic descriptions as potential syllables would count as being both innate and universal in that it applies to learning any natural language. However, as noted above, individual languages also differ on just what information can be included in a potential syllable. Thus, cues to the particular syllable structures permitted by the infant's native language must be provided in the input that is received. For example, Polish permits syllables to begin with clusters involving two stop consonants (e.g., [kto] and [dba]), whereas English does not allow such sequences syllable-initially. Thus, in the course of segmenting speech, the Polish child groups the two stops in the same syllable, but the infant acquiring English must learn to associate them with different syllables. For this reason, it is important to know when the infant becomes sensitive to the particular restrictions on ordering segments (i.e., the phonotactic constraints) that exist in the target language.

We have recently begun to explore this issue in my laboratory (Jusczyk et al. 1993) by testing when infants are able to distinguish words of their native language from ones belonging to a foreign language. American infants at two ages, 6 and 9 months, were tested using a listening preference procedure (Fernald 1985; Hirsh-Pasek et al. 1987). The infants heard lists of unfamiliar abstract words recorded by a bilingual speaker in either English or Dutch. Low-frequency, abstract words were chosen to ensure that any preferences that emerged would not be because some of the lists included items that the infant was likely to recognize from daily interactions with adults. There were 16 lists in each language. Each list was 15 words long and consisted of 10 two-syllable and 5 three-syllable items.

Many of the words occurring in the lists for each language contained phones and phonetic sequences that were unacceptable in the other language. For each infant, the English words were presented over a loudspeaker on one side of the room, and the Dutch words were presented on the opposite side. The amount of time that the infants oriented to the loudspeakers during the presentation of lists was recorded and used as a measure of preference for the Dutch or English words. At 6 months of age, there was no significant preference for either the English or the Dutch word lists. In contrast, 9-month-olds had significantly longer average listening times to the English (8.93 sec) than to the Dutch (5.03 sec) lists. Overall, 22 of 24 infants at this age had longer listening times for the English lists. Although the prosodic features of English and Dutch are quite similar, we considered the possibility that the infants' preferences might have been based on prosody rather than on the phonetic content of the utterances. For this reason, we prepared versions of the lists that were low-pass filtered at 400 Hz to remove most phonetic information. The 9-month-olds failed to show a significant preference for the English lists with these filtered stimuli. Hence, it appears that phonetic content played a major role in determining infants' preferences for the English lists.

To sum up, at 6 months of age, infants are sensitive to acoustic correlates of clausal units in the speech stream. By 10-12 months of age, they show some sensitivity to the way that speech is segmented into words. Knowledge that infants have picked up about typical sound patterns in the native language appears to play an important role in segmenting speech into words. Certainly, recognizing the familiar patterns of organizing speech sounds in the native language is a necessary prerequisite to using phonotactic cues in speech segmentation. Moreover, it is interesting that sensitivity to these aspects of phonological structure is developing at a point when the lexicon is beginning to develop.

Attention to Speech Sounds

As noted earlier, to understand how word recognition processes develop, we need to know something about how infants' attentional capacities are engaged by speech. For instance, under what conditions do infants attend to speech? Which aspects of the signal are most likely to engage their attentional processes and lead to more detailed analysis and encoding? Information concerning the way that attentional processes affect speech perception in infants comes chiefly from a variety of indirect sources. For example, a number of studies have investigated preferences that infants have for different kinds of speech patterns. These include studies demonstrating that infants prefer to listen to speech produced by their own mother as opposed to another infant's mother (DeCasper and Fifer 1980; Mehler et al. 1978; Mills and Melhuish 1974). Similarly, other studies have
Another approach has been to explore the ability of infants to perceive speech embedded in a background of noise (Nozza et al. 1990; Trehub, Bull, and Schneider 1981). These studies show that infants are capable of discriminating speech sounds under such conditions, although the infants require signal-to-noise ratios anywhere from 6 to 12 dB higher than do adults. However, these investigations have been confined to stop consonants. Hence, the range of speech information to which infants attend in noisy conditions is not clear. One line of research that bears a little more directly on the role of attentional processes in speech perception explores the role that syllable stress plays in discriminating phonetic contrasts. For example, are infants better able to discriminate phonetic contrasts when they occur in a stressed, as opposed to an unstressed, syllable?
Early research using two-syllable stimuli (e.g., [daba] versus [daga]) suggested that infants were equally adept at detecting a contrast, regardless of syllable stress (Jusczyk, Copan, and Thompson 1978; Jusczyk and Thompson 1978). However, a later study by Karzon (1985) using three-syllable stimuli (e.g., [malana] versus [marana]) reported that infants were able to pick up the contrast only when the critical syllables received stress characteristic of child-directed speech. Perhaps, when the processing load is sufficiently great, exaggerating the syllable stress helps direct attention to a phonetic contrast. Further research along these lines may provide some insight into the kinds of information that infants encode in their representations of words. Certainly, there have been a number of suggestions in the language-acquisition literature that children may attend to and imitate information in stressed syllables (see Brown and Fraser 1964; Gleitman and Wanner 1982; cf. Gerken, Landau, and Remez 1989; Gerken, this volume).

An indication of the important role that attentional focus can play in infant speech perception comes from a recent study by Jusczyk et al. (1990). They attempted to manipulate the attentional focus of their subjects by varying the perceptual similarity of their stimuli. In some instances, infants were habituated to a set of three syllables that included perceptually similar consonants (e.g., [pa], [ta], and [ka]), whereas in other cases, the set was composed of syllables containing three very dissimilar vowels (e.g., [bi], [ba], and [bu]). Jusczyk et al. hypothesized that infants would be led to focus on fine distinctions among the syllables for sets of the first type but on coarser distinctions for the second type. They predicted that, when focused on fine distinctions (i.e., for items that cluster closely in perceptual similarity space), the infants would be better able to detect the addition of a new member to the familiarization set than when focused on coarser distinctions (i.e., for items greatly separated in perceptual similarity space). These predictions were borne out for 4-day-old infants. When infants were exposed to the coarse-distinction set ([bi], [ba], and [bu]), they did not detect the addition of a new syllable, [b^], which is perceptually very similar to one of the set members ([ba]). However, when exposed to a set containing some fine-grained distinctions (e.g., [pa], [ka], and [ma]), the infants were able to detect the addition of even a perceptually very similar syllable, [ta]. By 2 months of age, the infants seemed to be more resistant to the attentional manipulation (i.e., they detected the new syllable regardless of whether they were focused on fine or coarse distinctions during the familiarization period).

One possible explanation for the difference between the two age groups is that the older group was better able to cope with the processing demands of the task. If so, then with increased processing demands, they might show the same pattern as the younger infants. Another possibility is that the experience that the 2-month-olds have already had with input from the native language, and with the kinds of fine distinctions that occur therein, may make them more resistant to the attentional-focus manipulation. This issue is currently under investigation in our laboratories. In any case, the results to date demonstrate how attentional focus can affect the kind of information that infants encode from speech.
In conclusion, much of what is known about the role of attention in the development of speech-perception processes has been a byproduct of investigations directed at other issues. It is clear that speech is a salient signal for infants relative to other sounds in the environment and that infants have some ability to pick it out of a noisy background. Nevertheless, the kinds of investigations necessary to understand which information infants select from the speech signal are only just beginning. Much more detailed knowledge of what the infant attends to is necessary in order to understand the way in which the lexicon develops. One potentially interesting implication of the research manipulating the attentional focus of infants is that frequently experienced sound contrasts may play an important role in determining the dimensions along which the developing lexicon is organized (see also Lindblom 1986 for a similar suggestion). This issue will be discussed later in conjunction with the proposed developmental model of word recognition.

Representation and Memory

In the preceding section, we raised the possibility that infants are selective about the information to which they attend in the speech signal. Attention to certain details at the expense of others suggests that those details are more likely to be encoded in some auditory representation of the sound and to be stored in secondary memory. However, this assumes that infants attend to, encode, and remember only a portion of those acoustic distinctions that they are capable of making. What justification is there for making such an assumption? Why should we not suppose that infants' representations are as detailed as their sensory capacities could allow them to be? In fact, the available data are not sufficient to exclude the possibility that infants do store fully specified, detailed copies of the speech sounds to which they listen.
However, there is evidence from related domains that makes such detailed storage seem implausible. For example, studies of categorical perception provide evidence that adults are sensitive to differences among tokens chosen from within the same category (e.g., Carney, Widin, and Viemeister 1977; Pisoni and Tash 1974; Samuel 1977). Nevertheless, unless extraordinary measures are taken, there is little evidence that such differences are encoded in the course of speech perception. Instead, only those differences sufficient to distinguish among words in the native language appear to be encoded. Similarly, studies of speech perception in infants suggest that, at an early point in development, infants are sensitive to distinctions that do not occur in their native language but that, later on, only those contrasts that distinguish among native-language words are encoded (Werker and Lalonde 1988; Werker and Tees 1984). So, there is some reason to believe that infants are not storing everything that they could potentially store.

Furthermore, we must examine the nature of the situations in which infants do display fine sensitivity to subtle distinctions between speech sounds. The test environment in most speech-discrimination experiments may not provide an accurate estimate of speech perception abilities in the real world. Procedures like high-amplitude sucking (HAS) provide many repetitions of a single syllable with little variation or other distraction. In this sense, they are apt to yield idealized estimates of discriminative capacities. Such measures may tell us more about what infants can do in the limit than about what they do in normal circumstances. To obtain a better indication of the nature of infants' representations of speech sounds, Jusczyk and Derrah (1987) modified the HAS procedure. Instead of using a single stimulus during the habituation phase of the experiment, they employed a randomized set of different stimuli. In order to detect the addition of a new item to the familiar set, the infant must do more than simply register whether two successive stimuli are the same or different. Because all of the syllables in the habituation set differ from each other, the infant has to encode enough distinctive information about each syllable to differentiate it from the others. Moreover, the representations of the familiar syllables must be sufficiently detailed to distinguish each from the novel syllable that is added to the set. Thus, by varying the similarity of the novel syllable to the familiar ones, it is possible to obtain some estimate of the kind of detail that is encoded in the infants' representations of the syllables.

Jusczyk and Derrah (1987) used this procedure in their study of whether or not young infants' representations of syllables are structured in terms of sequences of phonetic segments. During the habituation phase, infants were exposed to a randomized series of syllables that shared a common phonetic segment (e.g., [bi], [ba], [bo], and [b^]). Then a new syllable was added which either shared ([bu]) or did not share ([du]) a phoneme in common with the other set members. Jusczyk and Derrah reasoned that if 2-month-olds encoded the syllables as sequences of phonetic segments, they might recognize that [bu] shared a segment in common with the other set members.
If so, then they might display greater sucking rates for the more novel change, that is, the one that did not share the common segment, [du]. In fact, both types of changes were easily detected, and there was no evidence of increased response to the member from the novel category. Hence, the study produced no evidence that 2-month-olds have representations of syllables structured in terms of phonetic segments. These basic findings were replicated and extended in a subsequent study by Bertoncini et al. (1988). Not only did they obtain the same pattern of results for syllables that shared a common consonant segment, but they also found comparable results for syllables that shared a common vowel (e.g., [bi], [si], [li], and [mi]). In the latter case, the new syllable that was added either shared ([di]) or did not share ([da]) the same vowel. However, another finding pertinent to the present discussion concerns a developmental difference that Bertoncini et al. found between 2-month-old and 4-day-old infants. The older infants appeared to incorporate more detail in their representations than did the younger infants. Specifically, the newborns' representations did not appear to be sufficiently detailed to differentiate among the syllables when only consonantal information was changed. However, when the change involved vowel differences, the newborns performed as well as the 2-month-olds.
Although Bertoncini et al. concluded that vowels might be more salient for the younger infants than are consonants, an alternative explanation based on attentional factors is also possible. Because very distinctive consonants were included in the habituation set, the 4-day-olds may have focused on coarse distinctions and overlooked the finer distinction between [bi] and [di]. Note that, in the study by Jusczyk et al. (1990), when the familiarization set for the 4-day-olds included highly similar consonants, the infants did detect consonantal changes. In any case, the data from the Bertoncini et al. study suggest that 2-month-olds are able to form more detailed representations of speech sounds than are 4-day-olds.

Of course, developing a mental lexicon requires long-term storage of the sound patterns characteristic of words. Thus, any immediate representation of speech sounds must eventually be encoded in secondary memory for subsequent retrieval during word recognition. To begin to understand what kind of information gets encoded into the representations stored in memory, Jusczyk, Kennedy, and Jusczyk (submitted) explored the effects of introducing a delay period between habituation to the familiar syllables and the introduction of a new one to the set. After extensive pilot testing, a two-minute delay period filled with distracting visual materials was inserted between the habituation and test phases of the modified high-amplitude sucking procedure. For example, one group of 2-month-olds was exposed to a randomized series consisting of [bi], [ba], and [bu] and then tested after the delay on a randomized series of [di], [da], and [du]. Had the infants' representations of the original series not contained sufficiently precise information about the nature of the syllables, the slight change that occurred in the test set (i.e., a single phonetic feature) might have gone unnoticed. In fact, the infants appeared to have retained considerable detail about the syllables during the delay period. Even minimal changes of a single phonetic feature were detected. Although Jusczyk, Kennedy, and Jusczyk's results indicated that infants preserve considerable detail in their representations, there was little evidence that these representations are structured in terms of phonetic segments at this stage of development. For example, performance was unaffected by whether members of the familiarization set shared a common consonant (e.g., [bi], [ba], and [bu]) or not ([si], [ba], and [tu]). In contrast, a more recent study by Jusczyk et al. (in preparation) used a familiarization set of bisyllabic utterances that either shared a common syllable (i.e., [ba'lo], [ba'zi], [ba'des], and [ba'mIt]) or did not (i.e., [ne'lo], [pae'zi], [chu'des], and [ko'mIt]). In this case, the presence of a common syllable in the familiarization set led to a significant improvement in detecting the presence of a new item in the test set. Only infants familiarized with the set containing the common syllable detected the addition of a new bisyllable ([ba'n^l] or [na'b^l]) during the test phase. Apparently, the presence of the common syllable enhanced the encoding of these items during the delay period and allowed them to be better remembered by the infants. Nevertheless, proponents of representations structured in terms of phonetic segments could argue that there is an alternative explanation for these results that fits their view.
From their perspective, the items shared not only a common syllable but also two common phonetic segments, [b] and [a]. To test this possibility, Jusczyk et al. ran an additional experiment in which the items shared two common phonetic segments but in different syllables ([za'bi], [la'bo], [da'bez], and [ma'bIt]) during the familiarization phase. After the delay period, a new bisyllabic item, such as [ba'n^l] or [na'b^l], was added during the test phase. Under these circumstances, there was no evidence that infants detected the presence of the new item. Thus, the implication is that it is the presence of the common syllable, and not the individual phonetic segments, that is critical for the infants' encoding of speech.

The picture that emerges concerning the representation of speech during early infancy is that infants are able to encode rather detailed information in their perceptual representations and that they can maintain it for at least a relatively brief interval. There is some suggestion that the amount of detail that is encoded may increase over the first couple of months of life. However, there is little evidence for representations structured in terms of phonetic segments at this time. Instead, any matching between the input and stored perceptual representations may be done using syllable-sized units. More data are necessary to gain a clearer picture of just what sort of information infants incorporate in their long-term representations of speech. For example, it would be useful to explore how well distinctions that are relevant to phonetic contrasts are represented relative to other kinds of distinctions. There is some suggestion in recent work by Jusczyk, Pisoni, and Mullennix (1992) that infants may not retain information about talker identity over a two-minute delay period. Should this finding be sustained in studies that directly compare infants' memory for talker identity with their memory for phonetic distinctions, it would suggest something interesting about the nature of the information likely to be included in infants' representations of speech.
How Word Recognition Develops from Speech Perception Capacities

Given the evidence reviewed concerning these important functions, what can we say about the way that the lexicon and word recognition processes develop? In previous discussions of this subject, I put forth a tentative model of the way in which speech-perception capacities develop to support word recognition in fluent speech (e.g., Aslin, Pisoni, and Jusczyk 1983; Eimas, Miller, and Jusczyk 1987; Jusczyk 1985, 1986). This model was essentially a developmental version of the LAFS model put forth by Klatt (1979). In the interim, it has become clear that there are problems associated with the LAFS model, a number of which have been noted by Klatt (1988) himself. For this reason, and because of more recent findings concerning infant speech perception, I have revised the earlier developmental model. For convenience's sake, I will designate this revised version the Word Recognition And Phonetic Structure Acquisition Model (or WRAPSA, for short). One of the guiding assumptions behind WRAPSA is that the endpoint in the development of speech perception processes is the ability to recognize words in fluent speech. Because the flow of information in continuous speech is great, a premium is placed upon developing the means to identify words in the speech stream rapidly. One of the attractions of Klatt's LAFS model was that it attempted to access words directly from the input, without a stage of computing an intermediate phonetic representation for most online word recognition. WRAPSA generally preserves this feature, although it delivers an input that is roughly segmented into syllables. Briefly put, word recognition is held to take place in the following manner. The input undergoes a preliminary stage of auditory analysis that extracts an array of spectral and temporal properties present in the signal, grouped into syllable-sized units. These properties are then weighted in terms of their importance in signaling meaningful distinctions in the language, and the resulting weighted representation is matched against lexical representations stored in secondary memory. The critical process comes in the way that the properties are weighted. The weighting scheme that is developed is particular not only to a given language but also, in all likelihood, to a particular dialect. Thus, mastery of the sound structure of the native language entails acquiring the appropriate weighting scheme. The main components of the model, their functions, and their development are elaborated in the remainder of the section. I will consider what the model has to say about some of the important developmental changes noted in speech perception.
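The flow just described can be summarized in a short sketch. Everything below, the function names, the property labels, and the distance measure, is an invented illustration of the verbal description, not an implementation of WRAPSA itself; in particular, the model leaves open exactly how properties are coded and how the match against secondary memory is computed.

```python
def recognize(syllable_frames, weights, lexicon):
    """Weight each syllable-sized bundle of analytic properties, then
    find the stored lexical entry with the closest weighted description."""
    def weight(frame):
        return {prop: weights.get(prop, 0.0) * value
                for prop, value in frame.items()}

    probe = [weight(frame) for frame in syllable_frames]

    def mismatch(a, b):
        keys = set(a) | set(b)
        return sum((a.get(k, 0.0) - b.get(k, 0.0)) ** 2 for k in keys)

    def distance(word):
        stored = [weight(frame) for frame in lexicon[word]]
        if len(stored) != len(probe):  # the syllable count is itself a cue
            return float("inf")
        return sum(mismatch(s, p) for s, p in zip(stored, probe))

    return min(lexicon, key=distance)

# Toy one-syllable input described by two invented analytic properties.
weights = {"noise_high_band": 0.2, "F2_rise": 1.0}
lexicon = {"ba": [{"noise_high_band": 0.0, "F2_rise": 0.9}],
           "sa": [{"noise_high_band": 0.8, "F2_rise": 0.1}]}
print(recognize([{"noise_high_band": 0.1, "F2_rise": 0.8}], weights, lexicon))
```

In the model, the stored traces would already reflect the weighting that was in force when they were encoded; the toy simply applies one weighting scheme to both sides for clarity.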
Preliminary Analysis of the Signal

Processing at this stage involves passing the signal, transformed by the peripheral auditory system, through an array of analytic processes which yield decisions about the acoustic properties present. The analytic processes provide a description of the signal in terms of a number of spectral and temporal features. Following Sawusch (1986), I assume that extraction of these features occurs in particular frequency regions. In this sense, the analytic processes are spectrally specific. They record such information as (1) the presence of noise in some spectral region, (2) whether it is periodic or aperiodic, (3) durations and intensities, (4) bandwidths, and (5) the presence, direction, and degree of spectral changes. Although the extraction of features takes place independently for each analytic process, some temporal tagging of features as occurring within the same syllable-sized unit is also hypothesized to occur. In effect, syllables are the elementary temporal slices of the input. This fits well with the key role given by many theorists to syllables in the organization of prosodic factors in speech, such as rhythm, stress, and tone (Goldsmith 1976; Liberman and Prince 1977; Selkirk 1984).
In principle, the analytic processes provide a great deal of fine-grained information that could be used to discriminate different sounds from each other. These processes represent the sensory limits on the perception of speech and nonspeech signals. They provide the dimensions along which acoustic signals can ultimately be arranged and classified. They are an important part of the innate endowment that the infant brings to speech perception. It is these capacities that are often tapped in experiments with infants during the first few months of life, demonstrating that infants are sensitive to distinctions even outside of those in the native language spoken in their environment. Also, it is the existence of the analytic processes that makes it possible for adults learning foreign languages to acquire distinctions not present in their native language (Flege, in press; Logan, Lively, and Pisoni 1989; Werker and Tees 1984b). In addition, the sensitivity to within-category differences that shows up under certain conditions in categorical perception experiments can be attributed to this level of processing (Carney, Widin, and Viemeister 1977; Pisoni and Tash 1974; Samuel 1977).

Pattern Extraction and the Interpretive Scheme

Because the analytic processes are constantly monitoring the input, any descriptions that they provide are necessarily short-lived. Thus, some pattern extraction process would be required to retain a representation of the information present in the signal at a given moment.2 In principle, a number of different descriptive patterns could be extracted from the same input. Which of the patterns is selected for the representation that undergoes further processing depends upon attentional factors. During the first months of life, the patterns extracted are very general and apply to speech and nonspeech input alike. These patterns would provide the infant with a preliminary categorization of the input. The role of experience is to refine the description-selection process in ways that will yield a classification of the input that corresponds best to the structure of the native language. The application of descriptions to the input constitutes a critical stage in the perception and representation of the speech signal. In fact, I would argue that the speech mode of perception is simply the application of certain kinds of descriptive patterns to the output of the analytic processes. Just what is involved in the descriptive pattern selection process? Elsewhere I have suggested that the key is in learning an interpretive scheme that weights the information returned by the analytic processes so as to give prominence to those dimensions that are critical in signaling meaningful distinctions in the language (Jusczyk 1981, 1985). Consequently, the interpretive scheme acts to constrain the possible patterns that could be used to describe the input. One interesting implication of the attentional study conducted by Jusczyk et al. (1990) is that there may be a passive component to the way that the interpretive scheme is derived during language acquisition. Distributional properties of the input could play some role in determining which of the analytic processes receive greater weighting. However, another critical point is reached when the infant starts to attach linguistic significance to speech.
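Before turning to that point, the passive, distributional component can be made concrete. One crude heuristic, sketched below with simulated data, weights a dimension according to how bimodal its input distribution looks, since a contrast that a language actually uses should produce two clusters of tokens along the relevant dimension. The bimodality coefficient and the simulated distributions are illustrative assumptions, not part of the proposal itself.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def bimodality_coefficient(samples):
    """BC = (skewness**2 + 1) / kurtosis, with kurtosis in its non-excess
    form; values well above ~0.55 suggest a bimodal distribution."""
    return (skew(samples) ** 2 + 1) / kurtosis(samples, fisher=False)

rng = np.random.default_rng(0)
# A dimension the language uses contrastively: tokens cluster around
# two values (e.g., short versus long voice onset time).
contrastive = np.concatenate([rng.normal(0.2, 0.05, 500),
                              rng.normal(0.8, 0.05, 500)])
# A dimension the language does not use: one broad cluster of tokens.
noncontrastive = rng.normal(0.5, 0.15, 1000)

print(bimodality_coefficient(contrastive))     # high: weight this dimension
print(bimodality_coefficient(noncontrastive))  # low: weight it less
```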
It is the operation of trying to assign a meaningful interpretation to a given set of utterance tokens that encourages the child to attend to the similarities and differences that exist in the acoustic attributes of these tokens. Feedback in the form of responses to misperceptions could play a role in adjusting the weighting scheme to provide a more accurate decoding of speech into the correct lexical items. As development proceeds, the infant begins to pick up which properties furnished by the analytic processes are most relevant to signaling meaningful differences in the native language. Hence, learning the interpretive scheme is a matter of becoming most sensitive to these properties. What the interpretive scheme amounts to is a formula, or automatic way, to set the focus of attention on these properties rather than others. The role that I have attributed to selective attention in this process is similar to one recently proposed by Nosofsky (1986, 1987; Nosofsky, Clark, and Shin 1989). Nosofsky's Generalized Context Model for category representation can account for a number of phenomena associated with classification behavior, including rule-based categorizations and recognition judgments. In the model, selective attention is represented in terms of the stretching and shrinking of distances in psychological space. Selectively attending to a particular dimension tends to distort the overall psychological space in certain ways. Distances between points along an attended dimension are "stretched," making them more discriminable, whereas distances along unattended dimensions are "shrunk," making them less discriminable. These notions of stretching and shrinking are equivalent to what I have in mind for WRAPSA when I refer to weighting the information returned by the analytic processes.
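Nosofsky's treatment of selective attention can be stated compactly. The sketch below follows the Generalized Context Model's standard form, in which similarity decays exponentially with an attention-weighted distance; the dimension values, weights, and the choice of a Euclidean metric are illustrative assumptions.

```python
import math

def weighted_distance(x, y, attention_weights):
    """Euclidean distance with attentional weighting of each dimension."""
    return math.sqrt(sum(w * (a - b) ** 2
                         for w, a, b in zip(attention_weights, x, y)))

def similarity(x, y, attention_weights, sensitivity=1.0):
    """GCM similarity decays exponentially with weighted distance."""
    return math.exp(-sensitivity * weighted_distance(x, y, attention_weights))

# Weighting dimension 0 at 0.9 "stretches" differences along it;
# weighting dimension 1 at 0.1 "shrinks" differences along it.
a, b = (0.2, 0.8), (0.7, 0.8)  # differ only on the attended dimension
c, d = (0.2, 0.8), (0.2, 0.3)  # differ only on the unattended dimension
print(similarity(a, b, (0.9, 0.1)))  # ~0.62: dissimilar, easy to discriminate
print(similarity(c, d, (0.9, 0.1)))  # ~0.85: similar, hard to discriminate
```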
The purpose of the interpretive scheme is to focus attention on the outputs of those processes which are most relevant to meaningful distinctions in a given language and to provide a description that offers the best fit with respect to them. One caveat needs to be added to the picture presented thus far. This concerns what features go into the description that is sent on for further processing. Because my focus here has been on word recognition, I have stressed attending to those features that serve to distinguish words in the language. Clearly, it is important that these be encoded in the perceptual representation in order for recognition to occur. However, as will be discussed in the next section, there is some reason to believe that the encoding also includes properties other than those necessary to discriminate lexical items. For example, characteristics relevant to distinguishing among talkers, their emotional states, and the acoustic surrounds may also be included. Therefore, I assume that, in addition to the critical properties for distinguishing among words in the language, the description will contain some random selection of other properties available from the analytic processes (see also Nusbaum and Goodman, this volume, for a similar view of the way that attentional processes must be included in a theory of speech perception).

I believe that there is good reason to take seriously the notion that the attainment of fluent word recognition involves the development of an interpretive scheme. First, the infant data suggest that early speech-sound categories are universal, in that they do not appear to vary from one language background to another. Moreover, there are a number of parallels that appear in the way in which young infants process speech and complex nonspeech sounds, suggesting that general auditory mechanisms underlie speech perception at these early stages (Jusczyk et al. 1980, 1983, 1989). Yet, toward the end of the first year of life, there are indications that sensitivity to some speech contrasts which do not appear in the native language begins to decline (Werker and Lalonde 1988; Werker and Tees 1984a). Nevertheless, the decline is not a permanent one, because listeners can be retrained to perceive such distinctions (Logan, Lively, and Pisoni 1989; McClaskey, Pisoni, and Carrell 1980; Pisoni et al. 1982; Werker and Tees 1984b). Consequently, one interpretation of these results is that the declines observed are attributable to shifts in attention away from the dimensions that distinguish the foreign-language contrasts and to the concomitant reorganization that occurs in the psychological spacing of the sounds.

Additional evidence suggesting that an interpretive scheme is involved in fluent speech perception comes from studies demonstrating that the same acoustic information can be perceived in more than one way. For example, Carden et al. (1981) demonstrated that merely changing the instruction set for listeners (telling them to label sounds as either stops or fricatives) was sufficient to induce shifts in the location of a perceptual category boundary.
These shifts, which occurred in both identification and discrimination tests, were comparable to those observed when frication noises were actually added to or subtracted from the stimuli. Similarly, studies using sine-wave analogues of speech syllables have shown significant shifts in perceptual category boundary locations, depending on whether subjects are instructed to hear the sounds as speech or as nonspeech (Bailey, Summerfield, and Dorman 1977). Finally, there is some evidence that bilingual speakers will classify the same speech token differently depending on the language used in an accompanying carrier phrase. Thus, Elman, Diehl, and Buchwald (1977) reported that Spanish-English bilinguals changed the voicing category associated with a particular token according to whether the carrier phrase was in Spanish or in English. In all of the situations just considered, the physical properties of the stimuli remain the same across the different instruction sets. Consequently, the only reasonable explanation for the results observed is that subjects are weighting the available acoustic information differently in the various situations.

A natural question to ask at this point is what determines how the infant arrives at the correct interpretive scheme. Elsewhere we have argued that the development of speech perception should be viewed as an innately guided learning process (Jusczyk and Bertoncini 1988). This notion suggests that the infant is primed in certain ways to seek out some types of signals in the environment as opposed to others. Such signals have an inherent importance for the infant and are apt to have a high priority for further processing and memory storage. The innate prewiring underlying the infant's speech perception abilities allows for development to occur in one of several directions. The nature of the input helps to select from among these possibilities the direction that development will take. Thus, learning the sound properties of the native language takes place rapidly because the system is innately structured to be sensitive to correlations of certain distributional properties but not others.
Recognizing and Storing the Representation

Once a description has been assigned to the input, it can be compared to previously stored representations so that its meaning may be accessed. At this juncture, it is worth considering the nature of the representations to which the infant is comparing the processed input. In my previous model, I suggested that these representations would take the form of prototypes (Jusczyk 1985, 1986). The notion was that the initial representations that infants develop for words are rather global. The infant simply stored a sufficient number of highly salient features from the input to distinguish each word from the other words currently in the infant's recognition lexicon. Hence, developing a lexical entry involved selecting from the information that is available through the application of the analytic processes. The prototype formed as a result of this process underwent continual refinement as more and more entries were added to the lexicon.3 The refinement was conceived to be a process of developing a more detailed representation which would be sufficient to distinguish among different lexical items. The interpretive weighting scheme was hypothesized to develop in conjunction with this process of forming and refining the prototypes. This view of the development of representations implicitly assumes the existence of some sort of generic memory system wherein representations of categories are stored (Tulving 1983). Thus, the description of the sound structure of the lexical item in memory is a general one, as opposed to a particular one that corresponds to an utterance that the child has actually encountered.

In most cognitive models, it is assumed that some sort of unitary, abstract representations of categories stored in memory are matched to incoming signals during recognition. However, recently there has been considerable discussion of an alternative conceptualization of this process. It has been suggested that accounts based on the storage of individual exemplars may provide a more accurate view of the way that category information is recognized and remembered (Hintzman 1986, 1988; Jacoby and Brooks 1984; Nosofsky 1988). The notion behind such models is that "only traces of individual episodes are stored and that aggregates of traces acting in concert at the time of retrieval represent the category as a whole" (Hintzman 1986, 411). Jacoby and Brooks (1984) argue that, if the available representations are of specific episodes, then generality would come from treating similar situations analogously. What they have called "nonanalytic generalization" could be used to explain word recognition: "A word could be identified to a previous occurrence of a word in a similar context, from a similar source and in a familiar format, rather than by reference to a generalized representation of the word . . ." (Jacoby and Brooks 1984, 3). Hintzman's Minerva 2 model (1986), which stores only descriptions of individual episodes, demonstrates that computing a local average from stored instances can serve as an effective recognition routine.
Multiple-trace models of memory have much to recommend them. They are able to account for important findings in the literature that have been associated with prototypes (e.g., differential forgetting of prototypes and old instances, typicality, and category-size effects). In addition, these models are able to explain context-dependent effects that have been reported in the memory and concept-learning literature (Craik and Kirsner 1974; Jacoby 1983; Osgood and Hoosain 1974; Potter and Faulconer 1979; Roth and Shoben 1983). Recently, Logan (1988) has demonstrated that an instance-based model can provide an account of automaticity in skill learning. This last demonstration may have important ramifications for theorizing about fluent word recognition because, as Logan notes, automatic processing is defined as fast, effortless, autonomous, and unavailable to conscious awareness. These same properties are characteristic of fluent speech recognition. Hence, the demonstration that such automaticity could be derived from a system that stores only individual instances makes it reasonable to consider an instance-based account of word recognition.

The notion that the matching process during word recognition involves representations of previous exemplars, as opposed to an abstract prototype, has certain advantages for the developmental model that we have been considering. It could provide a clearer account of the way that knowledge of the sound structure is modified by experience.
To see this experiential modification, consider first what happens when a memory trace is stored. The trace itself preserves the overall organization of the properties in the perceptual representation. However, not every utterance that an infant hears will be recorded as an episodic trace. Although some random storage of the sound structure of the processed input may occur, in general the storage of sound patterns requires that some extra effort be given to processing the sounds. This may involve rehearsing the perceptual representation, an effort to associate it with a meaning or a context, or some other such process. The role of experience is not to modify previously stored traces. Rather, new traces are added to those already in secondary (i.e., long-term) memory. The addition of a new trace modifies the way that the whole memory system behaves. The more the new trace differs from preceding ones, the greater is the change in the behavior of the memory system during subsequent efforts at identification of new items. Recognition in a system like this occurs when a new input, or probe, is broadcast simultaneously to all traces in secondary memory. Each trace is activated according to its similarity to the probe, with traces having the greatest overlap being the most strongly activated. The reply that is returned from secondary memory to primary memory has been described as an "echo" (Hintzman 1986). All the traces in secondary memory contribute to this echo. If several traces are strongly activated, then the content of the echo will primarily reflect their common properties. Traces that are most similar to the probe are held to produce a more intense response and, thus, to contribute more to the echo. This ensures that characteristics that distinguish one trace from the others will tend to be masked in the echo. In comparing the echo to the probe, the echo might be used to fill in gaps that exist in the information supplied by the probe. Phenomena like phonemic restoration effects (Samuel 1981a, 1981b, 1986; Warren 1970) might be explained in this way. In any case, should the particular instance be stored in memory, then the trace will take the place of the processed probe.
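Hintzman's retrieval scheme is simple enough to sketch directly. The code below follows the published Minerva 2 equations (similarity computed over the features where either vector is nonzero, cubed to give an activation, with the echo as an activation-weighted sum of all traces), but the feature vectors themselves are invented for illustration.

```python
def echo(probe, traces):
    """Broadcast the probe to every stored trace; return the echo."""
    def activation(trace):
        # Similarity: mean agreement over features where either vector
        # is nonzero; cubing it lets the closest traces dominate.
        relevant = [j for j in range(len(probe)) if probe[j] or trace[j]]
        similarity = sum(probe[j] * trace[j] for j in relevant) / len(relevant)
        return similarity ** 3

    acts = [activation(t) for t in traces]
    intensity = sum(acts)  # overall familiarity of the probe
    # Echo content: the activation-weighted sum of all traces. Features
    # shared by strongly activated traces dominate, idiosyncratic ones
    # wash out, and gaps in the probe can be filled in.
    content = [sum(a * t[j] for a, t in zip(acts, traces))
               for j in range(len(probe))]
    return intensity, content

# Feature vectors use entries of +1, 0, and -1, as in the published model.
traces = [[1, 1, -1, 1], [1, 1, -1, 1], [-1, 1, 1, -1]]
intensity, content = echo([1, 1, -1, 0], traces)  # last feature unknown
print(content[3] > 0)  # True: the echo restores the missing feature
```

The last line illustrates the gap-filling behavior that the text invokes for phonemic restoration: the probe is silent about its final feature, but the two strongly activated traces agree on it, so the echo supplies it.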
The structure of the probe in a system like this is a critical determinant of which combination of traces contributes most heavily to the echo. Depending upon how specific it is, the set of traces that are most strongly activated may be large or small. The interpretive scheme discussed earlier helps constrain the perceptual representation which serves as the probe. Thus, prior to the point at which the infant engages in any meaningful encoding of speech sounds, one might expect to find that the configuration of properties selected for speech probes varies more or less randomly. When the infant begins to establish an interpretive scheme, the configuration of properties that appears in the probe for a particular word should show considerably more stability. What variance there is in these properties should primarily reflect the encoding of characteristics relating to the particular utterance context.

There is good reason to suppose that the perceptual representations that serve as probes are structured in terms of syllable-like units. As noted earlier, many of the most important prosodic features of language, including intonation, stress patterns, and tone, are organized in syllable-sized units (Goldsmith 1976; Halle and Vergnaud 1987; Hayes 1982; Liberman and Prince 1977; Selkirk 1984). Moreover, there is considerable evidence that infants are attentive to prosodic features of the native language at an early age (DeCasper and Fifer 1980; Fernald 1985; Mehler et al. 1988). Thus, attention to the prosody of speech could help to segment the input into syllable-sized chunks. This would then serve as a basis for structuring the perceptual representation. Organizing the representations in terms of syllable-sized units could also have certain payoffs for achieving a correct segmentation of speech into words. For languages with regular stress patterns, such as Czech or Polish (or perhaps even English, as Cutler notes in her chapter in this volume), locating the main stress can provide information about the location of word boundaries in the input. Furthermore, as knowledge of the syllable forms permitted by the language grows, this, too, can aid in segmenting fluent speech into words (Church 1987).
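As a toy illustration of the stress cue, consider a language with fixed word-initial stress (Czech is such a case; Polish stress is penultimate and would require the mirror-image rule). The coding of the input as (syllable, stressed) pairs is an assumption of the sketch, not of the model.

```python
def words_from_stress(syllables):
    """Cut the syllable stream before every stressed syllable, on the
    assumption that main stress falls on the first syllable of a word."""
    words, current = [], []
    for syllable, stressed in syllables:
        if stressed and current:
            words.append(current)
            current = []
        current.append(syllable)
    if current:
        words.append(current)
    return words

# A three-syllable stretch of fluent "speech" with stress on 1 and 3:
stream = [("do", True), ("bry", False), ("den", True)]
print(words_from_stress(stream))  # [['do', 'bry'], ['den']]
```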
For this reason, WRAPSA assumes that among the global properties included in both the probe and the representations encoded in secondary memory would be an indication of the number of syllable-sized units present in the word.

How WRAPSA Handles Two Developments in Speech Perception Capacities

The Decline in Sensitivity to Foreign Language Contrasts

One of the most remarkable findings concerning the development of speech perception capacities has to do with infants' responsiveness to contrasts that occur outside of the native language spoken in their environment. During the first 6 months of life, infants are apparently quite proficient in their ability to discriminate foreign-language contrasts (Aslin et al. 1981; Best, McRoberts, and Sithole 1988; Lasky, Syrdal-Lasky, and Klein 1975; Streeter 1976; Trehub 1976; Werker et al. 1981). However, shortly thereafter, they appear to show decreased sensitivity to phonetic contrasts that do not appear in their native language. Werker and her colleagues (Werker and Lalonde 1988; Werker and Tees 1984) reported evidence that as early as 8 to 10 months of age, some Canadian infants already showed significant declines in their ability to detect certain Hindi and Salish contrasts. By 10 to 12 months of age, the infants were performing as poorly as monolingual English-speaking adults on these contrasts. At the same time, the infants' ability to discriminate their native-language English contrasts remained high. One implication of these results is that, by 12 months of age, infants have already taken steps toward forming phonological categories in the native language, such that contrasts that fall within the same category are no longer discriminated. Nevertheless, there is other evidence indicating that any decline in sensitivity to foreign-language contrasts is less than complete for 12-month-olds.
Once the speech mode of perception begins to develop, its interpretive scheme takes precedence over any general auditory processing schemes for speech sounds. One implication of this position is that, when foreign language contrasts drop out, it is because they are distinguished along an unattended dimension. Because distances along such dimensions are shrunk, two previously discriminable points can no longer be resolved, and therefore, the distinction between them is not detected. If, at some point, the infant begins to attend once again to this dimension, such as when learning a foreign language that employs this distinction, then the distances will be stretched again allowing it to be detected. In this way, WRAPSA can handle those cases in which sensitivity to foreign contrasts declines. How does the model account for the Zulu click case? The argument here is that such sounds fall outside the range of ones that American infants frequently hear in the context of conversations. Hence, these sounds are treated, more or less, like nonspeech sounds. To the extent that sufficient information is available from the analytic processes to distinguish them, the infants will be able to discriminate them. page_255
Page 256 Where Phonological Categories Come From WRAPSA assumes that, in most cases, a description of the input in terms of phonetic segments is not necessary in order for fluent speech recognition to occur. Instead, the description that is matched against representations stored in secondary memory provides an indication of acoustic properties present in syllable-sized chunks. Nevertheless, even though the recovery of a phonetic description might be avoided in most instances of fluent speech recognition, the existence of representations in terms of phonetic segments is believed to be important to explain many other phenomena connected with the sound structure of language. For example, accounts of errors in speech production, such as slips of the tongue, rely on the notion that the speaker has available a phonetic description of an utterance (Fromkin 1973; Shattuck-Hufnagel and Klatt 1979). In addition, many linguistic accounts of the native speaker's understanding of the phonological features of language, including the ability to produce the correct phonetic realization of certain inflexions, assume the availability of a phonetic description of the input (Baudouin de Courtenay 1972; Bromberger and Halle 1986; Chomsky and Halle 1968; Jakobson 1971; Trubetzkoy 1939). Finally, the ability to recover some sort of phonetic description is clearly necessary to explain how listeners could ever identify unfamiliar words or nonwords in the speech stream (Klatt 1979). Therefore, at some point in development, the listener must be able to access a phonetic representation. However, whether the system underlying fluent speech recognition typically requires an intermediary stage of phonetic segment recovery is another matter, as is the issue of whether fluent-speech recognition can be learned only if infants first accomplish an analysis of utterances into sequences of phonetic segments. One argument that can be raised concerning the possibility that infants derive a phonetic representation of the input very early on in development relates to findings mentioned earlier. For instance, 9-month-old infants are apparently able to distinguish words in the native language from ones in a foreign language (Jusczyk et al. 1993). Furthermore, their ability to make such distinctions is mediated by some information other than that purely available in prosody. Does this mean that, by 9 months, infants are performing a phonetic analysis of the input? Similarly, does the fact that, by 12 months of age, infants' perceptual categorization of the input appears to correspond to the phonemic categories of the native language indicate that they have phonetic representations of speech (Werker and Lalonde 1988)? page_256 Page 257 As we noted earlier, there is little evidence that, during the first few months of life that the infant's perceptual representations of speech are analyzed into phonetic segment-sized units (Bertoncini et al. 1988; Jusczyk and Derrah 1987). Nor is such a representation required to explain the nature of speech production during this phase of development. Is there a way of providing the necessary information for distinguishing native language words from foreign ones without invoking a phonetically segmented representation? In fact, the relevant information would also be available to the infant who simply takes account of what syllables appear in the input. The syllables contain the necessary information both about the phones used in the language and the phonotactic constraints. 
Familiarity with the inventory of syllables used by speakers of the language would allow the learner to detect ones that violate the syllable structure that the language permits either because of the appearance of certain phonetic segments or of illegal sequences of segments. Once the child has gained access to syllable units, he or she is in a position to begin analyzing their internal structure. As Clements and Keyser have noted, . . . syllable structure provides an organizational principle which permits a significant degree of simplification in the task of the language learner. Syllable structure is a readily discoverable principle of classification which allows the formulation of phonological rules in terms of language particular classes of sounds such as "syllablefinal consonant," "syllabic nucleus," "extrasyllabic consonant" and the like. Once the principles of syllabic organization have been worked out for a given language, many formally arbitrary rule statements can be replaced by internally motivated statements involving syllable-based categories such as these. (Clements and Keyser 1983, 116) At some point the language learner does become capable of deriving a phonetic representation of utterances. Certainly, some such representation underlies the ability to read (Liberman et al. 1974; Morais et al. 1979; Rozin and Gleitman 1977; Treiman 1987; Walley, in press; Walley, Smith, and Jusczyk 1986). Given the kind of system that we have been
considering, how does phonetic representation come about? The nature of the word recognition process may play a role in this development. Specifically, there is an initial-to-final bias in speech processing in that speech is organized temporally. Systems, such as LAFS, take advantage of this in organizing the word recognition network so as to begin processing with acoustic onset properties. Thus, words that share common initial segments tend to be grouped together in that they are similar in their acoustic characteristics at onset. Common characteristics shared by page_257 Page 258 a large number of nearby items, although not shared by more distant ones, could serve as an initial basis for developing representations corresponding to phonological categories. Moreover, this process could occur with an instance-based system as Hintzman (1986) has demonstrated. If such an abstraction were made the object of conscious reflection, then a trace of the phonological category would be stored in secondary memory. The system would converge on categories consistent with those in the native language simply because the frequency of utterances to which the learner is exposed conform to these categories a much greater percentage of the time than they do not.4 One limitation of the account provided here is that it only offers a solution for part of the problem. It explains how allophones that are in free variation with one another (i.e., can be substituted for one another in the same syllable position) could come to be linked to the same phonological category. In such a case, one can appeal to the fact that the meaning of the utterance in the language does not change when one element is substituted for the other. What is not explained, however, is how elements occupying different positions in syllables (i.e., ones that are in complementary distribution with one another) could ever become linked to the same phonological category. How does the infant learn that a phonetic segment in one situation is to be treated as a variant of another phone in an entirely different context? Appeals to perceptual similarity apparently will not suffice because which allophones get assigned to the same phoneme vary from language to language (Fodor, Bever, and Garrett 1974). The response that I have to offer here is that, at the present time, there are no data that demonstrate that infants actually do perceive similarities between segments occurring in different positions. Thus far, what limited data there are showing that different phonetic segments are categorized together involve cases occupying the syllable-initial position (Werker and Lalonde 1988). Hence, there is no reason to assume that infants under a year of age do perceive similarities between segments in complementary distribution. In fact, as I have suggested previously (Jusczyk 1985), it may not be the case that children make such associations much before they learn to read. As to what could provide the eventual support for recognizing such similarities, I can only speculate that this may be an instance in which reference to articulatory gestures may provide the necessary link. Although phonetic representations might arise in the way described here, it does not necessarily follow that the wordrecognition system is then reorganized to include phonetic representations in online speech rec page_258 Page 259 ognition. 
Instead, a process involving a detailed phonetic representation of the input could evolve as a separate process employed in special circumstances, such as when a nonword or unfamiliar word is encountered. Conclusion I have tried to put forth a model of the way in which infant speech-perception capacities may develop into a system that underlies word recognition in fluent speech. Although the WRAPSA model focuses on the way that word-recognition processes develop during infancy, it also attempts to deal with a number of issues relating to the infant's knowledge of the sound structure of the native language. As with my previous model, I want to stress the preliminary nature of the WRAPSA model as an explanation of the development of speech perception. Although more details have been incorporated in the model and more progress has been made in understanding the development of speech perception, I think that we still have a considerable way to go before we can provide definitive answers to all the important questions. In the meantime, I hope that the model serves as a useful framework for viewing the changes that take place in the
infant's processing of speech. Acknowledgments This chapter was written in 1989. It is dedicated to the memory of Dennis Klatt, whose work stimulated my own thinking about the way that word recognition processes develop. Dennis's invention and distribution of his software synthesizer made possible much of my own research. In addition, I learned much from talking with him over t he years. Preparation of this paper was supported by a grant from NICHD (HD-15795). I would like to thank Douglas Hintzman, Asher Cohen, David Pisoni, Charles Ferguson, and Ann Marie Jusczyk, as well as the participants of the workshop, for comments they made on earlier versions of the manuscript. Notes 1. For purposes of argument, we are overstating the case a bit. There are reports in the adult literature suggesting that insensitivity to within-category depends greatly on the nature of the task used (Carney, Widin, and Veimeister 1977; Pisoni and Tash 1974; Samuel 1977). In addition, there have been some reports that infants are sensitive to withincategory differences for certain types of contrasts (Eimas and Miller 1980). page_259 Page 260 2. This sort of assumptionthat some sort of recoding into a more efficient form is necessary in order to retain the information about the inputis at the heart of two-stage models of categorical perception (Fujisaki and Kawahima 1969; 1970; Pisoni 1971, 1973). 3. The notion that global representations could suffice for word recognition during early stages of development with more detailed representations added as vocabulary grows receives some empirical support from a recent study by Charles-Luce and Luce (1990). They calculated the number of near neighbors (perceptually similar words) for words in children's lexicons at various ages. They found that a vocabulary typical for a 7-year-old contains a much lower proportion of highly confusable items than does that for an average adult. 4. I acknowledge MacKain's (1982) point that the input is likely to contain instances that fall outside the bounds of phonemic categories as well (e.g., the inclusion of prevoiced /b/ along with voiced /b/ in English). However, it seems likely that these will occur much less frequently than ones that fall within the range for the category. References Aslin, R. N. (1987). Visual and auditory development in infancy. J. D. Osofsky (ed.), Handbook of infant development (2nd ed., pp. 5-97). New York: Wiley. Aslin, R. N., Pisoni, D. B., Hennessy, B. L., and Perey, A. J. (1981). Discrimination of voice onset time by human infants: New findings and implications for the effects of early experience. Child Development, 52, 1135-1145. Aslin, R. N., Pisoni, D. B., and Jusczyk, P. W. (1983). Auditory development and speech perception in infancy. In M. Haith and J. Campos (eds.), Handbook of child psychology, vol. 2: Infancy and developmental psychobiology. New York: Wiley. Bailey, P. J., Summerfield, A. Q., and Dorman, M. F. (1977). On the identification of sine-wave analogs of certain speech sounds. Status Report on Speech Research (SR-51-52). New Haven, Conn.: Haskins Laboratories. Baudouin de Courtenay, J. (1972). Selected writings of Baudouin de Courtenay (E. Stankiewicz, ed.). Bloomington, Ind.: Indiana University Press. Berman, R. (1986). A cross-linguistic perspective: Morphology and syntax. In P. Fletcher and M. Garman (eds.), Language acquisition (2nd ed., pp. 429-447). Cambridge: Cambridge University Press. Bertoncini, J. and Mehler, J. (1981). Syllables as units in infant speech perception. 
Infant Behavior and Development, 4, 247-260.
Bertoncini, J., Bijeljac-Babic, R., Blumstein, S. E., and Mehler, J. (1987). Discrimination in neonates of very short CV's. Journal of the Acoustical Society of America, 82, 31-37.
Bertoncini, J., Bijeljac-Babic, R., Jusczyk, P. W., Kennedy, L. J., and Mehler, J. (1988). An investigation of young infants' perceptual representations of speech sounds. Journal of Experimental Psychology: General, 117, 21-33.
Bertoncini, J., Morais, J., Bijeljac-Babic, R., Mehler, J., McAdams, S., and Peretz, I. (1989). Dichotic perception and laterality in neonates. Brain and Language, 37, 591-605.
Best, C. T., McRoberts, G. W., and Sithole, N. M. (1988). Examination of the perceptual re-organization for speech contrasts: Zulu click discrimination by English-speaking adults and infants. Journal of Experimental Psychology: Human Perception and Performance, 14, 345-360.
Bond, Z. S. and Garnes, S. (1980). Misperceptions of fluent speech. In R. A. Cole (ed.), Perception and production of fluent speech (pp. 115-132). Hillsdale, N.J.: Erlbaum.
Broen, P. (1972). The verbal environment of the language learning child. ASHA Monographs, 17.
Bromberger, S. and Halle, M. (1986). On the relationship between phonology and phonetics. In J. Perkell and D. H. Klatt (eds.), Invariance and variability in speech processes (pp. 510-520). Hillsdale, N.J.: Erlbaum.
Brown, R. and Fraser, C. (1964). The acquisition of syntax. Monographs of the Society for Research in Child Development, 29, 9-34.
Carden, G. C., Levitt, A., Jusczyk, P. W., and Walley, A. C. (1981). Evidence for phonetic processing of cues to place of articulation: Perceived manner affects perceived place. Perception and Psychophysics, 29, 26-36.
Carney, A. E., Widin, G. P., and Viemeister, N. F. (1977). Noncategorical perception of stop consonants differing in VOT. Journal of the Acoustical Society of America, 62, 961-970.
Changeux, J. P., Heidmann, T., and Patte, P. (1984). Learning by selection. In P. Marler and H. S. Terrace (eds.), The biology of learning (pp. 115-133). Berlin: Springer-Verlag.
Charles-Luce, J. and Luce, P. A. (1990). Similarity neighborhoods of words in young children's lexicons. Journal of Child Language, 17, 205-215.
Chomsky, N. and Halle, M. (1968). The sound pattern of English. New York: Harper and Row.
Church, K. (1987). Phonological parsing and lexical retrieval. Cognition, 25, 53-69.
Clements, G. N. and Keyser, S. J. (1983). CV Phonology. Cambridge, Mass.: MIT Press.
Cole, R. A. and Jakimik, J. (1980). A model of speech perception. In R. A. Cole (ed.), Perception and production of fluent speech (pp. 133-163). Hillsdale, N.J.: Erlbaum.
Colombo, J. and Bundy, R. S. (1981). A method for the measurement of infant auditory selectivity. Infant Behavior and Development, 4, 219-223.
Cooper, R. P. and Aslin, R. N. (1989). The language environment of the young infant: Implications for early perceptual development. Canadian Journal of Psychology, 43, 247-265.
Cooper, W. E. and Paccia-Cooper, J. (1980). Syntax and speech. Cambridge, Mass.: Harvard University Press.
Craik, F. I. M. and Kirsner, K. (1974). The effect of speaker's voice on word recognition. Quarterly Journal of Experimental Psychology, 26, 274-284.
Cutler, A. and Carter, D. M. (1987). The predominance of strong initial syllables in the English vocabulary. Computer Speech and Language, 2, 133-142.
Cutler, A. and Norris, D. G. (1988). The role of strong syllables in segmentation for lexical access. Journal of Experimental Psychology: Human Perception and Performance, 14, 113-121.
DeCasper, A. J. and Fifer, W. P. (1980). Of human bonding: Newborns prefer their mothers' voices. Science, 208, 1174-1176.
Eimas, P. D. (1974). Auditory and linguistic units of processing of cues for place of articulation by infants. Perception and Psychophysics, 16, 513-521.
Eimas, P. D. (1975). Auditory and phonetic coding of the cues for speech: Discrimination of the [r-l] distinction by young infants. Perception and Psychophysics, 18, 341-347.
Eimas, P. D. and Miller, J. L. (1980a). Contextual effects in infant speech perception. Science, 209, 1140-1141.
Eimas, P. D. and Miller, J. L. (1980b). Discrimination of the information for manner of articulation. Infant Behavior and Development, 3, 367-375.
Eimas, P. D., Miller, J. L., and Jusczyk, P. W. (1987). On infant speech perception and the acquisition of language. In S. Harnad (ed.), Categorical perception (pp. 161-195). New York: Cambridge University Press.
Eimas, P. D., Siqueland, E. R., Jusczyk, P., and Vigorito, J. (1971). Speech perception in infants. Science, 171, 303-306.
Elman, J. L., Diehl, R. L., and Buchwald, S. E. (1977). Perceptual switching in bilinguals. Journal of the Acoustical Society of America, 62, 971-974.
Fernald, A. (1985). Four-month-old infants prefer to listen to motherese. Infant Behavior and Development, 8, 181-195.
Fernald, A. and Kuhl, P. K. (1987). Acoustic determinants of infant preference for motherese speech. Infant Behavior and Development, 10, 279-293.
Flege, J. E. (1991). Perception and production: The relevance of phonetic input to L2 phonological learning. In C. A. Ferguson and T. Huebner (eds.), Crosscurrents in second language acquisition and linguistic theories. Philadelphia: John Benjamins.
Fodor, J. A., Bever, T. G., and Garrett, M. F. (1974). The psychology of language. New York: McGraw-Hill.
Friedlander, B. Z. and Wisdom, S. S. (1971). Preverbal infants' selective operant responses for different levels of auditory complexity and language redundancy. Paper presented at the Annual General Meeting of the Eastern Psychological Association, New York.
Fromkin, V. A. (1973). Speech errors as linguistic evidence. The Hague: Mouton.
Fujisaki, H. and Kawashima, T. (1969). On the modes and mechanisms of speech perception. Annual Report of the Engineering Institute, 28. Tokyo: Faculty of Engineering, University of Tokyo.
Fujisaki, H. and Kawashima, T. (1970). Some experiments on speech perception and a model for the perceptual mechanism. Annual Report of the Engineering Institute, 29. Tokyo: Faculty of Engineering, University of Tokyo.
Garnica, O. K. (1977). Some prosodic and paralinguistic features of speech to young children. In C. Snow and C. A. Ferguson (eds.), Talking to children: Language input and acquisition (pp. 63-88). Cambridge: Cambridge University Press.
Gerken, L. A., Landau, B., and Remez, R. E. (1989). Function morphemes in young children's speech perception and production. Developmental Psychology, 25, 204-216.
Gleitman, L. and Wanner, E. (1982). The state of the state of the art. In E. Wanner and L. Gleitman (eds.), Language acquisition: The state of the art (pp. 3-48). Cambridge: Cambridge University Press.
Glenn, S. M., Cunningham, C. C., and Joyce, P. F. (1981). A study of auditory preferences in nonhandicapped infants and infants with Down's Syndrome. Child Development, 52, 1303-1307.
Goldsmith, J. (1976). An overview of autosegmental phonology. Linguistic Analysis, 2, 23-68.
Greenspan, S. L., Nusbaum, H. C., and Pisoni, D. B. (1988). Perceptual learning of synthetic speech produced by rule. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 421-433.
Grosjean, F. and Gee, J. P. (1987). Prosodic structure and spoken word recognition. Cognition, 25, 135-155.
Halle, M. and Vergnaud, J. R. (1987). An essay on stress. Cambridge, Mass.: MIT Press.
Hayes, B. (1982). Extrametricality and English stress. Linguistic Inquiry, 13, 227-276.
Hintzman, D. L. (1986). "Schema abstraction" in a multiple-trace memory model. Psychological Review, 93, 411-428.
Hintzman, D. L. (1988). Judgments of frequency and recognition memory in a multiple-trace memory model. Psychological Review, 95, 528-551.
Hirsh-Pasek, K., Kemler Nelson, D. G., Jusczyk, P. W., Wright Cassidy, K., Druss, B., and Kennedy, L. (1987). Clauses are perceptual units for young infants. Cognition, 26, 269-286.
Jacoby, L. L. (1983). Perceptual enhancement: Persistent effects of an experience. Journal of Experimental Psychology: Learning, Memory, and Cognition, 9, 21-38.
Jacoby, L. L. and Brooks, L. R. (1984). Nonanalytic cognition: Memory, perception, and concept learning. In G. H. Bower (ed.), The psychology of learning and motivation, vol. 18 (pp. 1-47). New York: Academic Press.
Jakobson, R. (1971). Selected writings. The Hague: Mouton.
Jusczyk, P. W. (1977). Perception of syllable-final stops by two-month-old infants. Perception and Psychophysics, 21, 450-454.
Jusczyk, P. W. (1981). The processing of speech and nonspeech sounds by infants: Some implications. In R. N. Aslin, J. R. Alberts, and M. R. Petersen (eds.), Development of perception: Psychobiological perspectives, vol. 1 (pp. 191-217). New York: Academic Press.
Jusczyk, P. W. (1985). On characterizing the development of speech perception. In J. Mehler and R. Fox (eds.), Neonate cognition: Beyond the blooming, buzzing confusion (pp. 199-229). Hillsdale, N.J.: Erlbaum.
Jusczyk, P. W. (1986). Towards a model for the development of speech perception. In J. Perkell and D. H. Klatt (eds.), Invariance and variability in speech processes (pp. 1-19). Hillsdale, N.J.: Erlbaum.
Jusczyk, P. W. (1989). Perception of cues to clausal units in native and non-native languages. Paper presented at the biennial meeting of the Society for Research in Child Development, Kansas City, Mo., April 1989.
Jusczyk, P. W. (in press). How talker variation affects young infants' perception and memory of speech. In J. Charles-Luce, P. A. Luce, and J. R. Sawusch (eds.), Theories of spoken language: Perception, production, and development. Norwood, N.J.: Ablex.
Jusczyk, P. W. and Bertoncini, J. (1988). Viewing the development of speech perception as an innately guided learning process. Language and Speech, 31, 217-238.
Jusczyk, P. W. and Derrah, C. (1987). Representation of speech sounds by young infants. Developmental Psychology, 23, 648-654.
Jusczyk, P. W. and Thompson, E. J. (1978). Perception of a phonetic contrast in multisyllabic utterances by 2-month-old infants. Perception and Psychophysics, 23, 105-109.
Jusczyk, P. W., Bertoncini, J., Bijeljac-Babic, R., Kennedy, L. J., and Mehler, J. (1990). The role of attention in speech perception by young infants. Cognitive Development, 5, 265-286.
Jusczyk, P. W., Copan, H., and Thompson, E. (1978). Perception by 2-month-old infants of glide contrasts in multisyllabic utterances. Perception and Psychophysics, 24, 515-520.
Jusczyk, P. W., Friederici, A. D., Wessels, J. M. I., Svenkerud, V. Y., and Jusczyk, A. M. (1993). Infants' sensitivity to the sound patterns of native language words. Journal of Memory and Language, 32, 402-420.
Jusczyk, P. W., Jusczyk, A. M., Schomberg, T., and Koenig, N. (in preparation). An investigation of the infant's representation of information in bisyllabic utterances.
Jusczyk, P. W., Hirsh-Pasek, K., Kemler Nelson, D. G., Kennedy, L. J., Woodward, A., and Piwoz, J. (1992). Perception of acoustic correlates of major phrasal units by young infants. Cognitive Psychology, 24, 252-293.
Jusczyk, P. W., Kennedy, L. J., and Jusczyk, A. M. (submitted). Young infants' retention of information about syllables.
Jusczyk, P. W., Pisoni, D. B., and Mullennix, J. W. (1992). Some consequences of stimulus variability on speech processing by 2-month-old infants. Cognition, 43, 253-291.
Jusczyk, P. W., Pisoni, D. B., Reed, M., Fernald, A., and Myers, M. (1983). Infants' discrimination of the duration of a rapid spectrum change in nonspeech signals. Science, 222, 175-177.
Jusczyk, P. W., Pisoni, D. B., Walley, A. C., and Murray, J. (1980). Discrimination of the relative onset of two-component tones by infants. Journal of the Acoustical Society of America, 67, 262-270.
Jusczyk, P. W., Rosner, B. S., Reed, M., and Kennedy, L. J. (1989). Could temporal order differences underlie 2-month-olds' discrimination of English voicing contrasts? Journal of the Acoustical Society of America, 85, 1741-1749.
Karzon, R. G. (1985). Discrimination of a polysyllabic sequence by one- to four-month-old infants. Journal of Experimental Child Psychology, 39, 326-342.
Kemler Nelson, D. G. (1984). The effect of intention on what concepts are acquired. Journal of Verbal Learning and Verbal Behavior, 23, 734-759.
Kemler Nelson, D. G. (1989). Developmental trends in infants' sensitivity to prosodic cues correlated with linguistic units. Paper presented at the biennial meeting of the Society for Research in Child Development, Kansas City, Mo., April 1989.
Kemler Nelson, D. G., Hirsh-Pasek, K., Jusczyk, P. W., and Wright Cassidy, K. (1989). How the prosodic cues in motherese might assist language learning. Journal of Child Language, 16, 53-68.
Klatt, D. H. (1974). Word verification in a speech understanding system. In R. Reddy (ed.), Speech recognition (pp. 321-344). New York: Academic Press.
Klatt, D. H. (1976). Linguistic uses of segment duration in English: Acoustic and perceptual evidence. Journal of the Acoustical Society of America, 59, 1208-1221.
Klatt, D. H. (1979). Speech perception: A model of acoustic-phonetic analysis and lexical access. Journal of Phonetics, 7, 279-312.
Klatt, D. H. (1988). Review of selected models of speech perception. In W. Marslen-Wilson (ed.), Lexical representation and process. Cambridge, Mass.: MIT Press.
Kuhl, P. K. (1979). Speech perception in early infancy: Perceptual constancy for spectrally dissimilar vowel categories. Journal of the Acoustical Society of America, 66, 1668-1679.
Kuhl, P. K. (1983). Perception of auditory equivalence classes for speech in early infancy. Infant Behavior and Development, 6, 263-285.
Kuhl, P. K. (1987). Perception of speech and sound in early infancy. In P. Salapatek and L. Cohen (eds.), Handbook of infant perception, vol. 2 (pp. 275-381). New York: Academic Press.
Kuhl, P. K. and Miller, J. D. (1982). Discrimination of auditory target dimensions in the presence or absence of variation in a second dimension by infants. Perception and Psychophysics, 31, 279-292.
Lasky, R. E., Syrdal-Lasky, A., and Klein, R. E. (1975). VOT discrimination by four to six and a half month old infants from Spanish environments. Journal of Experimental Child Psychology, 20, 215-225.
Liberman, A. M., Cooper, F. S., Shankweiler, D. P., and Studdert-Kennedy, M. (1967). Perception of the speech code. Psychological Review, 74, 431-461.
Liberman, I. Y., Shankweiler, D., Fischer, F. W., and Carter, B. (1974). Explicit syllable and phoneme segmentation in the young child. Journal of Experimental Child Psychology, 18, 201-212.
Liberman, M. and Prince, A. (1977). On stress and linguistic rhythm. Linguistic Inquiry, 8, 249-336.
Lindblom, B. (1986). On the origin and purpose of discreteness and invariance in sound patterns. In J. Perkell and D. H. Klatt (eds.), Invariance and variability in speech processes (pp. 493-510). Hillsdale, N.J.: Erlbaum.
Logan, G. D. (1988). Towards an instance theory of automatization. Psychological Review, 95, 492-527.
Logan, J. S., Lively, S. E., and Pisoni, D. B. (1989). Training Japanese listeners to identify /r/ and /l/. Journal of the Acoustical Society of America, 85, 137-138.
MacKain, K. S. (1982). Assessing the role of experience on infants' speech discrimination. Journal of Child Language, 9, 527-542.
McClaskey, C. L., Pisoni, D. B., and Carrell, T. D. (1980). Effects of transfer of training on identification of a new linguistic contrast in voicing. In Research on Speech Perception, Progress Report No. 6. Bloomington, Ind.: Indiana University.
Mehler, J., Bertoncini, J., Barriere, M., and Jassik-Gerschenfeld, D. (1978). Infant recognition of mother's voice. Perception, 7, 491-497.
Mehler, J., Jusczyk, P. W., Lambertz, G., Halsted, N., Bertoncini, J., and Amiel-Tison, C. (1988). A precursor of language acquisition in young infants. Cognition, 29, 143-178.
Miller, J. L. (1981). Effects of speaking rate on segmental distinctions. In P. D. Eimas and J. L. Miller (eds.), Perspectives on the study of speech (pp. 39-74). Hillsdale, N.J.: Erlbaum.
Miller, J. L. (1987). Mandatory processing in speech perception. In J. L. Garfield (ed.), Modularity in knowledge and natural language understanding. Cambridge, Mass.: MIT Press.
Miller, J. L. and Eimas, P. D. (1983). Studies on the categorization of speech by infants. Cognition, 13, 135-165.
Miller, J. L. and Liberman, A. M. (1979). Some effects of later-occurring information on the perception of stop consonant and semivowel. Perception and Psychophysics, 25, 457-465.
Mills, M. and Melhuish, E. (1974). Recognition of the mother's voice in early infancy. Nature, 252, 123-124.
Morais, J., Cary, L., Alegria, J., and Bertelson, P. (1979). Does awareness of speech as a sequence of phones arise spontaneously? Cognition, 7, 323-331.
Morgan, J. L. (1986). From simple input to complex grammar. Cambridge, Mass.: MIT Press.
Morse, P. A. (1972). The discrimination of speech and nonspeech stimuli in early infancy. Journal of Experimental Child Psychology, 13, 477-492.
Nakatani, L. H. and Dukes, K. D. (1977). Locus of segmental cues for "word" juncture. Journal of the Acoustical Society of America, 62, 714-719.
Nakatani, L. H. and Schaffer, J. A. (1978). Hearing words without words: Prosodic cues for word perception. Journal of the Acoustical Society of America, 63, 234-245.
Nosofsky, R. M. (1986). Attention, similarity, and the identification-categorization relationship. Journal of Experimental Psychology: General, 115, 39-57.
Nosofsky, R. M. (1987). Attention and learning processes in the identification and categorization of integral stimuli. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13, 87-113.
Nosofsky, R. M. (1988). Exemplar-based accounts of relations between classification, recognition and typicality. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 700-708.
Nosofsky, R. M., Clark, S. E., and Shin, H. J. (1989). Rules and exemplars in categorization, identification and recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15, 282-304.
Nozza, R. J., Rossman, R. N. F., Bond, L. C., and Miller, S. L. (1990). Infant speech sound discrimination in noise. Journal of the Acoustical Society of America, 87, 339-350.
Osgood, C. E. and Hoosain, R. (1974). Salience of the word as a unit in the perception of language. Perception and Psychophysics, 15, 168-192.
Pisoni, D. B. (1971). On the nature of categorization of speech sounds. Supplement to Status Report on Speech Research (SR-27). New Haven, Conn.: Haskins Laboratories.
Pisoni, D. B. (1973). Auditory and phonetic memory codes in the discrimination of consonants and vowels. Perception and Psychophysics, 13, 253-260.
Pisoni, D. B. and Tash, J. (1974). Reaction times to comparisons within and across phonetic categories. Perception and Psychophysics, 15, 285-290.
Pisoni, D. B., Aslin, R. N., Perey, A. J., and Hennessy, B. L. (1982). Some effects of laboratory training on identification and discrimination of voicing contrasts in stop consonants. Journal of Experimental Psychology: Human Perception and Performance, 8, 297-314.
Potter, M. C. and Faulconer, B. A. (1979). Understanding noun phrases. Journal of Verbal Learning and Verbal Behavior, 18, 509-521.
Reddy, D. R. (1974). Speech recognition. New York: Academic Press.
Reddy, D. R. (1976). Speech recognition by machine: A review. Proceedings of the IEEE, 64, 501-523.
Roth, E. M. and Shoben, E. J. (1983). The effect of context on the structure of categories. Cognitive Psychology, 15, 346-378.
Rozin, P. and Gleitman, L. R. (1977). The structure and acquisition of reading II: The reading process and the acquisition of the alphabetic principle. In A. S. Reber and D. L. Scarborough (eds.), Toward a psychology of reading (pp. 55-141). Hillsdale, N.J.: Erlbaum.
Samuel, A. G. (1977). The effect of discrimination training on speech perception: Noncategorical perception. Perception and Psychophysics, 22, 321-330.
Samuel, A. G. (1981a). Phonemic restoration: Insights from a new methodology. Journal of Experimental Psychology: General, 110, 474-494.
Samuel, A. G. (1981b). The role of bottom-up confirmation in the phonemic restoration illusion. Journal of Experimental Psychology: Human Perception and Performance, 7, 1124-1131.
Samuel, A. G. (1986). The role of the lexicon in speech perception. In E. C. Schwab and H. C. Nusbaum (eds.), Pattern recognition by humans and machines, vol. 1: Speech perception (pp. 89-111). New York: Academic Press.
Sawusch, J. R. (1986). Auditory and phonetic coding of speech. In E. C. Schwab and H. C. Nusbaum (eds.), Pattern recognition by humans and machines, vol. 1: Speech perception (pp. 51-88). New York: Academic Press.
Selkirk, E. O. (1984). Phonology and syntax: The relation between sound and structure. Cambridge, Mass.: MIT Press.
Shattuck-Hufnagel, S. and Klatt, D. H. (1979). The limited use of distinctive features and markedness in speech production: Evidence from speech error data. Journal of Verbal Learning and Verbal Behavior, 18, 41-55.
Snow, C. E. (1972). Mothers' speech to children learning language. Child Development, 43, 549-565.
Spring, D. R. and Dale, P. S. (1977). Discrimination of linguistic stress in early infancy. Journal of Speech and Hearing Research, 20, 224-232.
Streeter, L. A. (1976). Language perception of 2-month-old infants shows effects of both innate mechanisms and experience. Nature, 259, 39-41.
Studdert-Kennedy, M. (1986). Sources of variability in early speech development. In J. Perkell and D. H. Klatt (eds.), Invariance and variability in speech processes (pp. 58-76). Hillsdale, N.J.: Erlbaum.
Trehub, S. E. (1976). The discrimination of foreign speech contrasts by infants and adults. Child Development, 47, 466-472.
Trehub, S. E., Bull, D., and Schneider, B. A. (1981). Infants' speech and nonspeech perception: A review and reevaluation. In R. L. Schiefelbusch and D. B. Bricker (eds.), Early language: Acquisition and intervention (pp. 11-50). Baltimore: University Park Press.
Treiman, R. (1987). On the relationship between phonological awareness and literacy. European Bulletin of Cognitive Psychology, 7, 524-529.
Trubetzkoy, N. (1939). Principes de phonologie (J. Cantineau, trans., 1949). Paris: Klincksieck.
Tulving, E. (1983). Elements of episodic memory. New York: Oxford University Press.
Vaissière, J. (1981). Speech recognition programs as models of speech perception. In T. Myers, J. Laver, and J. Anderson (eds.), The cognitive representation of speech (pp. 443-457). Amsterdam: North-Holland.
Waibel, A. (1986). Suprasegmentals in very large vocabulary word recognition. In E. C. Schwab and H. C. Nusbaum (eds.), Pattern recognition by humans and machines, vol. 1: Speech perception (pp. 159-186). New York: Academic Press.
Walley, A. C. (in press). Spoken word recognition and vocabulary growth in early childhood. In J. Charles-Luce, P. A. Luce, and J. R. Sawusch (eds.), Theories of spoken language: Perception, production, and development. Norwood, N.J.: Ablex.
Walley, A. C., Smith, L. B., and Jusczyk, P. W. (1986). The role of phonemes and syllables in the perceived similarity of speech sounds for children. Memory and Cognition, 14, 220-229.
Warren, R. M. (1970). Phonemic restoration of missing speech sounds. Science, 167, 392-393.
Werker, J. F. and Lalonde, C. E. (1988). Cross-language speech perception: Initial capabilities and developmental change. Developmental Psychology, 24, 672-683.
Werker, J. F. and McLeod, P. J. (1989). Infant preference for both male and female infant-directed talk: A developmental study of attentional and affective responsiveness. Canadian Journal of Psychology, 43, 230-246.
Werker, J. F. and Tees, R. C. (1984a). Cross-language speech perception: Evidence for perceptual reorganization during the first year of life. Infant Behavior and Development, 7, 49-63.
Werker, J. F. and Tees, R. C. (1984b). Phonemic and phonetic factors in adult cross-language speech perception. Journal of the Acoustical Society of America, 75, 1866-1878.
Werker, J. F., Gilbert, J. H. V., Humphrey, K., and Tees, R. C. (1981). Developmental aspects of cross-language speech perception. Child Development, 52, 349-355.
Chapter 8
Sentential Processes in Early Child Language: Evidence from the Perception and Production of Function Morphemes
LouAnn Gerken
I Introduction
Language can be thought of as a multileveled system that translates between external signals and internal messages. Each level in the system (e.g., phonology, morphology, and syntax) has its own characteristic unit of representation (e.g., phonemes, morphemes, and sentences), and units at lower levels (those closer to the signal) are the building blocks of units at higher levels (those closer to the message). For example, words and morphemes are made up of phonemes, and sentences are made of words and morphemes. Given this hierarchical linguistic organization, a logical first guess about the nature of language acquisition is that learning occurs in stages, with each stage having as its focus a particular linguistic level and unit of representation. Thus, infants might begin the language acquisition process by learning to perceive and produce the repertoire of sound segments used in their language. Having acquired this repertoire, they would employ it consistently to recognize and to produce isolated words. Finally, they would acquire the principles that allow them to combine words into sentences. There has been a consensus among many child phonologists, as well as many of the authors in this book, that this view of language acquisition is at least partially incorrect. It has become increasingly clear in recent years that children's early
word learning does not follow directly from the auditory and articulatory repertoires that they demonstrate in infancy. Rather, it appears that much of children's knowledge of the abstract segmental character of their language grows out of early word learning (Jusczyk 1985; Macken 1979; Menn 1983). For example, a number of researchers have argued that children can distinguish among the set of words in their early vocabulary by using only overall acoustic or featural properties and without needing a phonological representation (Jusczyk 1985; Menn 1983). Only when their vocabulary reaches a certain size and there is substantial acoustic or featural overlap among words do children need to organize their mental lexicons in terms of phonological segments (see Charles-Luce and Luce 1990).1 Thus, it appears that the stage view of language acquisition is not supported by the developmental relation between phonemes and words. The stage view of language development has also been prevalent in examinations of children's multiword utterances. It leads to the contention that children learn referential content words through parental labeling and then use these words as the building blocks for their early sentences (Bates 1976; Bowerman 1973; Pinker 1984). In contrast with this position, I will suggest that, just as word learning might serve as a catalyst for some phonological organization, so might information contained at the sentence level serve as a catalyst for the discovery of smaller units, such as words. In my discussion of young children's sentence-level processes, I will focus on their perception and production of function morphemes, or functors, such as articles and verb inflections. Functors do not typically have real-world referents but instead are important aspects of both the prosody and syntax of sentences. Thus, the degree to which children demonstrate sensitivity to these elements is one measure of their knowledge of sentence-level processes. Children frequently fail to produce function morphemes in their earliest utterances, and this has been taken as the primary evidence for the view that children's early representation of language is based on referential content words. On this view, when children hear sentences, they listen for familiar content words that were learned in isolation and ignore function morphemes as unfamiliar noise. Contrary to this view, I will present evidence demonstrating that children do attend to function morphemes when listening to sentences despite their failure to produce them. I will argue that, rather than treating function morphemes as elements that are ignored in their search for familiar content words, children use function morphemes in sentence comprehension. I will also demonstrate that young children are sensitive to the canonical prosodic characteristics of their language. These characteristics, in which function morphemes play a major role, may help young children break into the linguistic system.
II Children's Sensitivity to Function Morphemes
Function morphemes have several properties that might make them useful for language learning. First, they have distinct phonological properties: they contain a small set of phonemes (in English /s, z, w, ð/), receive weak stress, and contain reduced vowels. All of these features may serve to identify functors as a phonological class (Gerken 1987a, 1987b; Gerken, Landau, and Remez 1990).
Furthermore, due to the weak stress given to function morphemes in English, they convey much of the rhythmic character of the language. Research with 6-month-old infants has demonstrated that these listeners respond to the rhythmic structure of their own language (Jusczyk 1989). It is possible that children could initially become aware of functors as part of canonical rhythmic patterns, and this early awareness could help them to discover the role of functors in syntactic patterns (Gerken 1987a). Function morphemes also tend to occur at the beginnings and ends of major phrasal categories, making them potentially useful in partitioning sentences into their component phrases (Clark and Clark 1977; Greenberg 1963; Kimball 1973). Finally, individual functors cooccur with specific classes of content words within specific phrase types, and these cooccurrence patterns could make functors useful as markers of syntactic category (Maratsos and Chalkley 1980;
Morgan, Meier, and Newport 1987; Valian and Coulson 1987). For example, the article the signals the presence of a noun phrase (NP), and the inflection -ed signals a verb phrase (VP). All of these properties might allow a child to employ function morphemes as a wedge into syntax (Gerken 1987a; Gerken, Landau, and Remez 1990). However, many language acquisition researchers have thought it unlikely that children use function morphemes in the earliest stages of language because these speakers typically fail to include functors in their utterances. For example, Roger Brown's (1973) Adam, Eve, and Sarah began consistently to produce articles and the third-person regular verb inflection only at an average age of 38 months. At least three explanations are possible for children's failure to produce function morphemes. One is that children's multiword utterances are built from content words that were previously learned via real-world reference. Nonreferential words are thus ignored during sentence comprehension and are not included in the child's early linguistic knowledge (Bates 1976; Bowerman 1973; Pinker 1984). Another explanation for children's function-morpheme omissions is that children pay particular attention to strongly stressed words and syllables and ignore weakly stressed items, again resulting in a lack of functors in early linguistic knowledge (Brown and Fraser 1964; Gleitman and Wanner 1982). Numerous observations suggest that children are omitting weak syllables, but not nonreferential elements per se, when they omit function morphemes. For example, children also omit weakly stressed syllables from multisyllabic words (Ingram 1974; Smith 1973). In Quiche Mayan, a language in which function morphemes receive stronger stress than content words, children tend to omit content words and preserve functors (Pye 1983). This question of stress is related to the third explanation because, although weak stress appears to be linked to children's omissions, it is not clear whether children's difficulties with weakly stressed syllables lie in perception or production. Jusczyk and Thompson (1978) demonstrated that infants were able to make phonetic contrasts equally well in weak and strong syllables, suggesting that weak stress is not necessarily a barrier to perception. Other studies have suggested that children can comprehend functors even though they do not produce them (Gelman and Taylor 1984; Katz, Baker, and MacNamara 1974; Petretic and Tweney 1977; Shipley, Smith, and Gleitman 1969). Thus, the third possible explanation for children's omissions is that these listeners attend to function morphemes during sentence comprehension and omit them only during production, due to constraints on the complexity of utterances that they can plan or produce. The notion that constraints on utterance complexity are responsible for functor omissions, as well as segmental errors, is supported by observations of young children's spontaneous and imitative speech. For example, Brown and Fraser (1964) found that children's imitations of adult utterances were approximately the same length in morphemes, regardless of the length of the adult utterances. Another way of saying this is that omissions increased as the target utterance became longer. Bloom (1970) found that children were more likely to omit sentential subjects from syntactic negatives than from the corresponding affirmatives.
This is consistent with the Brown and Fraser data and suggests that children have some limitation on the syntactic and/or morphological complexity of their utterances. Other researchers, Nelson and Bauer (1991) and Waterson (1978), have found that children's segmental accuracy decreases as the length of their utterances increases. To compare these three explanations of children's function-morpheme omissions, Gerken (1987a, 1987b; Gerken, Landau, and Remez 1990) performed three experiments in which young children were asked to imitate four-syllable strings that corresponded to the VP of an English sentence. The following are sample stimuli for experiment 1:
1. Pete pushes the dog.
2. Pete bazes the dep.
3. Pete pusho na dog.
4. Pete bazo na dep.
In each experiment, the syllables in content word positions were either familiar referential content words (a verb and an object) or comparably stressed unfamiliar items (nonsense syllables). Similarly, the syllables in functor positions were either English function morphemes (always the -es verb inflection and the article the) or weakly stressed nonsense syllables (the set varied across experiments; full materials in Gerken 1987a; Gerken, Landau, and Remez 1990). The measure of interest in each experiment was what children omitted in their imitations. Each of the three explanations for such omissions predicts different results. First, a view of children's early language that is based on referential content words predicts that, in sentences with familiar content words, as in stimuli 1 and 3 above, children will retain these words and omit the nonreferential items, regardless of whether the latter are English function morphemes or nonsense syllables. In sentences with no familiar referential words, as in 2 and 4 above, by contrast, children should have no basis for retaining one type of element over another. This is because the referential view in its strictest form relies solely on the referential status of a particular word for the child, not on any of the word's acoustic properties. Second, the view that children are less likely to perceive weakly stressed syllables predicts that they should retain strong syllables and omit weak ones, regardless of whether any of these syllables constitutes an English word or morpheme. Finally, the view that children attend to function morphemes in speech perception but omit them in production due to complexity constraints predicts that they should omit English function morphemes more frequently than weakly stressed nonsense syllables. This is because the presence of English functors should help children to analyze the strings in which these elements occur morphologically and syntactically. Therefore, strings with English function morphemes will be given a more complete morphosyntactic analysis than strings with weakly stressed nonsense syllables. The latter may be treated as multisyllabic words or simply memorized as unanalyzed syllable strings. The difference in the degree of linguistic analysis allowed by the two string types will result in strings with English function morphemes being more linguistically complex than strings containing weakly stressed nonsense syllables. Subjects in experiment 1 were sixteen children with a mean age of 26 months. Each child's mean length of utterance (MLU) was calculated from his or her spontaneous speech, with MLUs ranging from 1.30 to 5.02 morphemes per utterance (see Gerken, Landau, and Remez 1990 for a description of how MLU was calculated). Consistent with the literature on children's spontaneous speech, the children in this study frequently omitted weakly stressed syllables and virtually never omitted strongly stressed syllables, even when the latter were unfamiliar. Therefore, only weak syllable omissions were analyzed. An imitation was regarded as having one or two weak syllable omissions if both content words were imitated accurately (see the exact definition of accurate imitations in section IV) and if either one or both weak syllables were absent. Weak syllables that were produced as filler syllables, or schwa, were not treated as omissions.
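Since several analyses below turn on exactly this coding, a minimal sketch may help. The following is an illustration only, assuming a toy syllable-level format; the function names and data representations are hypothetical, not the scoring procedure actually used in these studies.

```python
# Hypothetical sketch of the weak-syllable omission coding described above.
# A trial enters the omission analysis only if both content words were
# imitated accurately; weak syllables produced as filler syllables (schwa)
# do not count as omissions.

SCHWA = "@"  # stand-in symbol for a filler syllable

def count_weak_omissions(target, imitation, content_ok):
    """target: list of (syllable, stress) pairs, stress "S" or "W";
    imitation: syllables the child produced; content_ok: True if both
    content words met the accuracy criterion (defined in section IV)."""
    if not content_ok:
        return None  # trial excluded from the omission analysis
    produced = list(imitation)
    omissions = 0
    for syllable, stress in target:
        if stress != "W":
            continue  # strong syllables were virtually never omitted
        if syllable in produced:
            produced.remove(syllable)   # weak syllable imitated in full
        elif SCHWA in produced:
            produced.remove(SCHWA)      # filler syllable, not an omission
        else:
            omissions += 1              # weak syllable omitted
    return omissions

# "Pete pushes the dog" (VP portion) imitated as "push dog": 2 omissions.
target = [("push", "S"), ("es", "W"), ("the", "W"), ("dog", "S")]
print(count_weak_omissions(target, ["push", "dog"], content_ok=True))  # -> 2
```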
For purposes of data analysis, children were divided equally into a lower MLU group (mean MLU = 1.73) and a higher MLU group (mean MLU = 3.91). Children who had lower MLUs and who, therefore, produced shorter spontaneous utterances also made significantly more functor omissions in their imitations than children who had higher MLUs (32% versus 11% omissions, respectively). Only the data for children with lower MLUs will be presented here (data for all subjects are presented in Gerken 1987a; Gerken, Landau, and Remez 1990). Figure 8.1 shows that the children omitted significantly more English function morphemes than weakly stressed nonsense syllables.2 The fact that children frequently omitted weak syllables and almost never omitted strong syllables, even when the latter were not familiar, casts serious doubt on the referential view of children's function-morpheme omissions. And the fact that children were able to distinguish between weakly stressed English functors and nonsense syllables suggests that they were not ignoring all weak elements during perception. Rather, the data are consistent with the view that children omit weak syllables during production and that increased weak syllable omission is associated with increased linguistic complexity. Because strings with English functors were linguistically more complex than strings with weakly stressed nonsense syllables, children omitted the weak syllables in the former strings more frequently than those in the latter.
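For readers unfamiliar with the measure, MLU is simply the mean number of morphemes per utterance in a child's spontaneous-speech sample, and the groups above are the lower and higher halves of that distribution. A toy version follows; the pre-segmented input format is an assumption for illustration, since the actual counting conventions are those described in Gerken, Landau, and Remez 1990.

```python
# Toy MLU calculation and median split into lower/higher MLU groups.
# Utterances are given pre-segmented, with morphemes joined by "+"
# (e.g., "push+es+the+dog" = 4 morphemes).

def mlu(utterances):
    """Mean length of utterance, in morphemes per utterance."""
    counts = [len(u.split("+")) for u in utterances]
    return sum(counts) / len(counts)

def split_by_mlu(children):
    """children: dict mapping child id -> list of utterances.
    Returns (lower_group, higher_group) ids, split at the median."""
    ranked = sorted(children, key=lambda c: mlu(children[c]))
    half = len(ranked) // 2
    return ranked[:half], ranked[half:]

sample = {
    "child_a": ["doggie", "push+es+the+dog", "mommy+go"],  # MLU = 2.33
    "child_b": ["dog", "go"],                              # MLU = 1.00
}
print(split_by_mlu(sample))  # -> (['child_b'], ['child_a'])
```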
Figure 8.1 Functor omissions in experiment 1
Since the strings in experiment 1 were read to children by the experimenter, it is possible that different stress was given to English and nonsense items. Experiment 2 controlled for this possibility by using strings that were similar to those used in experiment 1 but that were generated on a DECtalk text-to-speech synthesizer. Briefly, DECtalk applies a phrasal template to phonetically specified strings, augmenting pitch and stress on some syllables while decreasing them on others. Using the same template for all sentences allowed the acoustic realization of each syllable to be more tightly controlled. Subjects in this experiment were fifteen children with a mean age of 26 months. MLUs were calculated from spontaneous speech, and children were divided into lower (n = 7; mean MLU = 2.07) and higher (n = 8; mean MLU = 3.72) MLU groups. As in experiment 1, children in the lower MLU group omitted significantly more weak syllables than children in the higher MLU group (27% versus 10% omissions, respectively). And, as in experiment 1, only data from children with lower MLUs will be considered here. Figure 8.2 shows that children again omitted significantly more English functors than weakly stressed nonsense syllables. Although the interaction between functor and content word was significant by subjects but
not by items, the differences between English and nonsense functor omissions were significant for both familiar and unfamiliar content words.
Figure 8.2 Functor omissions in experiment 2
Thus, the results of this experiment replicated those from experiment 1, further supporting the idea that children attend to function morphemes during speech perception. In English, vowels in weak syllables tend to be reduced to schwa. If children are selectively attending to strong syllables, one possibility is that they are selectively attending to syllables with full (non-schwa) vowels. The English functors in experiments 1 and 2 contained reduced vowels, while the weakly stressed nonsense syllables contained full vowels. Therefore, children might have omitted English functors more frequently than weakly stressed nonsense syllables in those experiments, not because English functors were recognized as morphemes but rather because the reduced vowels in English functors caused them to be more frequently ignored. To examine this possibility, a new nonsense functor sequence was created that contained reduced vowels and consonants that normally do not occur in English functors (/g, l/). A sample sentence with the new sequence is "Pete pusheg le dog." If children continue to differentiate English and
nonsense functors in their omissions, then the notion that they ignore syllables with reduced vowels can be ruled out.
Figure 8.3 Functor omissions in experiment 3
Subjects for experiment 3 were sixteen children with a mean age of 27 months. As in the previous experiments, children were equally divided into lower MLU (mean MLU = 2.20) and higher MLU (mean MLU = 3.67) groups. Children with lower MLUs omitted significantly more functors in their imitations than children with higher MLUs (26% versus 5% omissions, respectively). Again, only the data from the lower MLU group will be presented. Half of the children heard tape-recorded natural speech, while the other half heard DECtalk. Because this manipulation did not interact with omissions, I will not discuss it further (see Gerken 1987a; Gerken, Landau, and Remez 1990). Figure 8.3 reveals that children in experiment 3 continued to omit English functors more frequently than weakly stressed nonsense syllables, even when both types of elements contained reduced vowels. Therefore, children do not fail to perceive or attend to syllables containing schwa. There was also a significant effect of content word, such that weak syllables in sentences with familiar content words were omitted more frequently than weak syllables in sentences with unfamiliar content words.
This account of children's function-morpheme omissions is consistent with research indicating that children's segmental accuracy decreases when the length of their utterance increases (Nelson and Bauer 1991; Waterson 1978). The second class of explanations postulates that children's function-morpheme omissions reflect pragmatic processes. According to this view, children respond to limits on their speech production abilities by omitting the least communicatively important elements. Because function morphemes serve only to modulate the meanings of the major syntactic page_280 Page 281 categories, children choose to omit them. Hence, children omit English functors more frequently than weakly stressed nonsense syllables because the former are familiar and thus known to be unimportant, while the latter are unfamiliar and
therefore potentially important. Note that, in the pragmatic view, functor omissions are governed by a different mechanism than weak syllable omissions from multisyllabic words. This difference exists because the latter cannot be seen as communicatively unimportant in the same way that function morphemes can. Because the pragmatic account treats function morphemes and weakly stressed syllables differently, it might have difficulty explaining why children omit weakly stressed nonsense syllables more frequently than content words. Perhaps, children occasionally treat weakly stressed nonsense syllables as English functors and thus as omittable elements. Or perhaps they use the fact that weak stress is correlated with unimportant sentential elements to omit weakly stressed nonsense syllables when they are not able to produce the entire target utterance. It is impossible to choose between phonological and pragmatic explanations for children's function-morpheme omissions based on experiments 1-3. However, later, I will provide evidence in favor of the phonological account when I show the similarity of word level and sentence-level omissions. III Children's Segmental Representation of Function Morphemes A listener is faced with two basic tasks when trying to comprehend an incoming utterance: partitioning the utterance into words and phrases and then labeling these parts with their lexical and phrasal categories (e.g., noun or NP). These tasks must be performed before higher-level syntactic and semantic analyses can be carried out. As discussed in the previous section, function morphemes are potentially useful for both partitioning and labeling. The fact that children attended to function morphemes in experiments 1-3 suggests that they could potentially use functors for sentence partitioning. However, children's ability to use function morphemes for syntactic labeling is more questionable. This is because children often begin producing functors as largely undifferentiated filler syllables, typically schwa (Bloom 1970; Peters 1983). If children represent function morphemes as an undifferentiated class, syntactic labeling would not be possible because individual functor classes, such as articles and verb inflections, would be given the same segmental representation. Alternatively, children might page_281 Page 282 have adultlike segmental representations of function morphemes but use filler syllables as a short form due to production constraints. If this is the case, children could potentially use function morphemes for syntactic labeling. In order to distinguish between these two possibilities, we must evaluate children's segmental representation of function morphemes. The data from experiment 3 suggest that children represent functors with some degree of segmental detail. This is evidenced by the fact that they were able to distinguish between English and nonsense functors when both contained schwa. However, it is possible that children represent enough information about functors to allow them to make such a distinction without the representation being completely adultlike. For example, children could represent many of the frequent English functors in an underspecified manner similar to example 5 (parentheses indicate optional consonants); adding the alternate example 6 would cover most of the remaining functors. This view is consistent with production data presented by Peters (1989). 
She found that young speakers often produced functors as schwa but that individual functors were sometimes produced with more segmental detail (e.g., nasalization on filler syllables that occurred in contexts that might require the prepositions in or on). 5. ( + fricative) schwa ( + fricative) 6. schwa (+ nasal) Experiment 4 was designed to test the degree of segmental specificity with which children represent function morphemes. Children were asked to imitate strings that contained English functors, -es and the; a sequence of two schwas; or weakly stressed nonsense syllables containing full vowels, o and ka. If children represent function morphemes in full segmental detail, then they should recognize English functors as familiar morphemes and omit them as they did in experiments 1-3. In contrast, they should fail to recognize as morphemes both schwa-schwa and weakly stressed nonsense syllables because neither is a functor sequence in English. This should result
in fewer omissions of both sequences than of the English sequence. However, if children represent functors as underspecified in the manner shown in examples 5 and 6, then they should treat both English functors and schwa-schwa as familiar morphemes and omit them. They should still fail to recognize weakly stressed nonsense syllables as morphemes and, therefore, omit them less frequently. Experiment 4 was conducted in the same manner as experiments 1-3, but with one exception: so that children could not develop a template for imitation, they were also asked to imitate filler sentences that were of a different syntactic and rhythmic form than the test sentences. Both test sentences and filler stimuli were created with DECtalk. Subjects in experiment 4 were fifteen children with an average age of 27 months and an average MLU of 2.98. There was no difference in omissions between subjects with higher and lower MLUs, and therefore, the data from the entire group of subjects will be discussed.3
Figure 8.4 Functor omissions in experiment 4
Figure 8.4 shows that children omitted English functors and the schwa-schwa sequence equally frequently and omitted both significantly more frequently than nonsense functors with full vowels. What does this pattern of data indicate about young children's mental representation of function morphemes? Without the results of experiment 3, an obvious interpretation of the data would be that children do not attend to or perceive syllables containing reduced vowels. However, children were able to distinguish English functors from other syllables with reduced vowels
in experiment 3, indicating that they did not ignore these items. Taken together, experiments 3 and 4 suggest that children might represent English function morphemes in an underspecified manner similar to examples 5 and 6 above.4 Such partial representations have also been proposed for children's content words by several child phonologists (Hawkins 1973; Ingram 1974; Macken 1979; Menn 1983; Waterson 1970; Wilbur 1980). According to this account, children rejected the weakly stressed nonsense syllables in experiment 3 as potential function morphemes because these syllables contained consonants that were neither nasal nor fricative. In contrast, they accepted the schwa-schwa sequence as a potential functor sequence because it did not contain any atypical consonants.
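This account can be made concrete with a matcher for the templates in examples 5 and 6: any weak syllable that fits either template would be accepted as a potential functor, and any other weak syllable rejected. The sketch below uses an ASCII transcription with "@" for schwa and a small, purely illustrative consonant feature table.

```python
# Toy matcher for the underspecified functor templates:
#   (5) (+ fricative) schwa (+ fricative)    (6) schwa (+ nasal)
FRICATIVES = set("szfvTD")  # T/D stand in for the two "th" sounds
NASALS = set("mn")

def potential_functor(syllable):
    """True if a weak syllable fits template (5) or (6)."""
    if "@" not in syllable:
        return False  # both templates require a reduced vowel
    onset, coda = syllable.split("@", 1)
    # Template (5): optional fricative onset and/or coda around schwa.
    if all(c in FRICATIVES for c in onset) and all(c in FRICATIVES for c in coda):
        return True
    # Template (6): bare schwa plus an optional nasal coda.
    return onset == "" and all(c in NASALS for c in coda)

for syl in ["D@", "@z", "@n", "@", "g@", "l@"]:
    print(syl, potential_functor(syl))
# "the" (D@), "-es" (@z), nasalized schwa (@n), and bare schwa all pass;
# the experiment 3 nonsense syllables with /g/ and /l/ are rejected.
```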
We are currently testing this hypothesis by contrasting children's imitations of English function morphemes and nonsense syllables that meet the specifications in examples 5 and 6. This view of children's function-morpheme representations offers an interesting developmental picture of how functors might be used in language acquisition. During late infancy (approximately six to twelve months), segmentally undifferentiated functors and weakly stressed syllables in multisyllabic words may be part of the salient alternating stress pattern of English. As functors acquire a partial phonological representation, they can be distinguished from other weak syllables. This allows function morphemes to become useful cues to sentence partitioning for children who are searching for linguistically relevant units in the speech stream. Interestingly, the underspecified quality of children's early functor representations might cause these elements to be especially salient, because the frequencies of all individual functors combine into a single, very frequent element. However, if children do not differentiate among the members of the set of function morphemes (that is, treat the differently from -es), they cannot yet use them for syntactic labeling. At a later stage, as the segmental representations of individual functors emerge from one or more underspecified representations, these elements can be used for syntactic labeling (Peters 1989).
IV Children's Use of Function Morphemes in Sentence Perception
If children are able to use function morphemes as cues to sentence partitioning, we might predict that they will be better able to encode sentences that contain familiar functors than sentences that do not. Such facilitated
Figure 8.5 Accurate content verbs in experiment I
Figure 8.6 Accurate content verbs in experiment 2
page_286 Page 285 encoding might be reflected in more accurate imitation of some aspects of sentences that contain familiar functionmorphemes. To examine this possibility, Gerken (1987a; Gerken, Landau, and Remez 1990) examined the phonetic accuracy of children's content-word imitations in experiments 1-4. A content word was considered to be imitated accurately if it was phonetically identical to the target or deviated from the target by a single phoneme (e.g., dog produced as dod). Because children typically reduce consonant clusters in their spontaneous speech (Smith 1973), a consonant cluster was regarded as accurate if the child imitated just one of its consonants. If children are able to use familiar function-morphemes to locate and identify content words during sentence perception, then we might expect their content-word imitations to be more accurate in sentences with familiar functors than in sentences with weakly stressed nonsense syllables. To measure the content-word accuracy , I will consider the data from children with both lower and higher MLUs. In experiment I (see figure 8.5); children with lower MLUs showed no effects for the content-word accuracy measure. However, children with higher MLUs imitated content words significantly more accurately when the cooccurring functors were English than when they were nonsense. In experiment 2 (see figure 8.6) children with lower MLUs produced content words more accurately for sentences with English functors. They also produced familiar content words more accurately than unfamiliar content words. Children with higher MLUs showed no effect as to whether functors were English or nonsense, but they did imitate English content words more accurately than nonsense content words. In experiment 3 (see figure 8.7), children with both lower and higher MLUs produced content words more accurately for sentences with English functors. And both groups produced familiar content words more accurately than unfamiliar content words. Finally, in experiment 4 (see figure 8.8), children produced content words more accurately in sentences in which functors were either English or schwa than in sentences with full-vowel functors. They also imitated English content words more accurately than nonsense content words.5 Thus, in all four experiments, at least one group of children produced content words more accurately in target sentences containing English functors than in sentences containing nonsense functors. How are we to explain this pattern of results? One possible account is that the presence of functors or functorlike elements, such as schwa, might help in sentence partitioning. One version of this view posits that page_285 Page 287
One version of this view posits that children might have been more likely to treat strings with English functors as sentences rather than as sequences of nonsense syllables. This might have led to better encoding or better memory for the string and thus to more accurate reproduction. Another version of the partitioning view postulates that familiar functors may have served as markers for the presence of content words, thereby allowing children to concentrate on the segmental details of the latter.6
Alternatively, it is possible that content-word accuracy is inversely related to the amount of phonetic material in the utterance. Because children were more likely to omit English functors, utterances from which these elements had been omitted contained less phonetic material to produce, resulting in more accurate content-word imitations. However, this view is not supported by further examination of the data. First, children who had higher MLUs and who, therefore, were unlikely to omit functors also produced content words more accurately in strings containing English functors. Second, imitations by children with lower MLUs in experiments 1-3 were examined to determine whether content words in strings from which functors were omitted were more accurate than those in strings from which functors were not omitted. The predicted trend was not found in any of the experiments, indicating that the amount of phonetic material children actually imitated did not influence the accuracy of their imitations. Thus, it appears that some version of the view that function morphemes aid in sentence partitioning is the best explanation for the data.
V Metrical Processes in Words and Sentences
Recall that in section II, I presented a model of speech production in which children's function-morpheme omissions reflect phonological processes triggered by complexity at higher levels (e.g., syntax and morphology). According to this account, the mechanism by which children omit weakly stressed function morphemes is the same mechanism by which they omit weakly stressed syllables from multisyllabic words. Any differences in omissions among weakly stressed elements result from differences in the linguistic complexity of the strings in which they occur. Conversely, the pragmatic account states that children omit functors because they deem them to be less communicatively important than content words. They omit weak syllables from multisyllabic words based on other, presumably phonological, principles.
In this section, I will provide evidence supporting the phonological account of children's omissions by demonstrating that function morphemes and weakly stressed nonmorphemic syllables are both subject to the same phonological processes. In particular, I will demonstrate that children's omissions reflect processes of metrical phonology, the level at which patterns of strong and weak syllables are represented. Several studies of children's early productions of multisyllabic words have demonstrated that they are more likely to omit weak syllables from iambic metrical feet (weak-strong) than from trochaic feet (strong-weak) (Allen and Hawkins 1980; Echols and Newport 1992; Smith 1973). For example, giRAFFE is more likely to be reduced to RAFFE than MONkey is to be reduced to MON.
This pattern also appears in children's production of sentences. If children divided the target strings in previous experiments into metrical feet prior to production, the result would be one trochaic foot followed by one iambic foot. For example, "pushes the dog" would be divided into PUSHes + the DOG (+ signifies division into metrical feet). If, as in their single-word productions, children are more likely to omit weak syllables from iambic feet than from trochaic feet, they should be more likely to omit the second functor than the first in the strings used in the previous experiments. To test this, Gerken (1987a; Gerken, Landau, and Remez 1990) examined those imitations in experiments 1-3 in which only one functor had been omitted and in which the remaining functor was imitated in full. Figures 8.9 and 8.10 show that children in experiment 1 and those who heard the natural voice stimuli in experiment 3 omitted weak syllables from iambic feet (second functor position) significantly more frequently than weak syllables in trochaic feet (first functor position). There was no difference in omissions in experiment 2, which used DECtalk, or for the DECtalk stimuli in experiment 3.7
Figure 8.9 Functor omissions in the first and second positions in experiment 1
Figure 8.10 Functor omissions in the first and second positions in experiment 3
The omission pattern was the same for both English functors and weakly stressed nonsense syllables, suggesting that it was not due to children's preference to omit one morpheme over another. Rather, it appears that children omit weakly stressed syllables in sentential contexts based on the same metrical principles that govern their omissions of weak syllables in multisyllabic words. Children's more frequent omissions from iambic feet than from trochaic feet might also account for omission patterns that have previously been given syntactic or pragmatic explanations. For example, children frequently omit sentential subjects while retaining objects.
One explanation offered for this phenomenon is that children are born with the default setting of the pro-drop parameter set to allow null subjects (Hyams 1986). Another explanation is closely related to the pragmatic account of children's functor omissions. Children omit those elements that they deem unimportant, and because subjects tend to contain given, as opposed to new, information, they are omitted. Alternatively, it is possible that children's representation of metrical structure, which causes their weak-syllable omissions from multisyllabic words, can also account for their subject omissions. Gerken (1990a, 1990b, 1991) contrasted these accounts of children's subjectless sentences by asking them to imitate sentences like the following, examples 7-15, in which subjects and objects were either pronouns, proper names, or common NPs. The sentences were spoken by the experimenter.
7. she KISSED him
8. she KISSED + PETE
9. she KISSED + the BEAR
10. JANE + KISSED him
11. JANE + KISSED + PETE
12. JANE + KISSED the + BEAR
13. the LAMB + KISSED him
14. the LAMB + KISSED + PETE
15. the LAMB + KISSED the + BEAR
According to the pro-drop account, children should omit pronoun subjects more frequently than pronoun objects. The pragmatic account likewise posits that children should omit pronoun subjects more frequently than pronoun objects. In addition, they might omit subject articles more frequently than object articles because material in the subject NP is more likely to be given information. According to the metrical view, children should first assign to the to-be-imitated utterances the metrical structures shown in examples 7-15. The principles used to assign these metrical analyses are as follows: (1) a metrical foot contains one, and only one, strong syllable, (2) whenever possible, syllables are taken in pairs, and (3) metrical feet are assigned left to right.
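These three principles amount to a simple left-to-right parsing procedure. The sketch below (in Python) is a minimal illustration of them, assuming syllables have already been coded by hand as strong ('S') or weak ('W'). It is not the analysis procedure used in these studies, and its treatment of a stranded weak syllable, attaching it to the preceding foot so that example 7 surfaces undivided, is an assumption made only for the illustration.

    # Minimal sketch of the three metrical principles described above;
    # an illustration, not the analysis procedure used in the experiments.
    def assign_feet(syllables):
        # syllables: list of (text, stress) pairs, stress being 'S' or 'W'.
        feet = []
        i = 0
        while i < len(syllables):
            cur = syllables[i]
            nxt = syllables[i + 1] if i + 1 < len(syllables) else None
            if nxt and (cur[1] == 'S') != (nxt[1] == 'S'):
                # Principles 1 and 2: take two syllables together when the
                # pair contains exactly one strong syllable.
                feet.append([cur, nxt])
                i += 2
            elif cur[1] == 'S':
                # A strong syllable that cannot pair legally stands alone.
                feet.append([cur])
                i += 1
            else:
                # Assumption: a stranded weak syllable joins the preceding
                # foot, which is why example 7 shows no internal division.
                if feet:
                    feet[-1].append(cur)
                else:
                    feet.append([cur])
                i += 1
        return feet

    # Example 12: "JANE KISSED the BEAR" -> JANE + KISSED the + BEAR
    sentence = [("JANE", "S"), ("KISSED", "S"), ("the", "W"), ("BEAR", "S")]
    print(" + ".join(" ".join(s for s, _ in f) for f in assign_feet(sentence)))

On such a parse, a weak syllable counts as iambic when it precedes the strong syllable of its foot and as trochaic when it follows it; this is the classification on which the predictions below rely.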
Based on these metrical structures, some weak syllables occur in iambic feet and should be omitted, while others occur in trochaic feet and should be retained. The metrical view predicts that children should omit pronoun subjects more frequently than pronoun objects because a pronoun subject is the weak syllable of an iambic foot, whereas a pronoun object is the weak syllable of a trochaic foot. They should also omit subject articles more frequently than object articles because the former is always the weak syllable of an iambic foot, while the latter can be in a trochaic foot with the verb (see examples 12 and 15). The unique prediction of the metrical account is that children should omit object articles in iambic feet (example 9) more frequently than object articles in trochaic feet (examples 12 and 15).
Subjects in the experiment were eighteen children with a mean age of 27 months and a mean MLU of 2.70. Figure 8.11 depicts the omissions for subject NPs, subject articles, object NPs, and object articles.
Figure 8.11 Functor omissions by syllable type in experiment 5
As predicted by all accounts, children omitted significantly more subject pronouns than object pronouns.
They also omitted subject articles more frequently than object articles. Most importantly, they omitted object articles from iambic feet (example 9) more frequently than object articles from trochaic feet (examples 12 and 15). This last effect cannot be accommodated by the hypothesis that children omit the least important sentential elements, and therefore, it provides the strongest evidence for a phonological account of children's weak-syllable omissions. This account is further supported by the fact that children omitted all weak syllables from iambic feet (subject pronouns, subject articles, and object articles from examples such as example 9) with nearly equal frequency. This suggests that the same mechanism is responsible for all omissions. These results confirm that young language learners are sensitive to the prosodic structure of their language and show that the same prosodic sensitivity exists for both words and sentences.
VI Conclusion
Recent research in the field of child phonology has called into question the notion that language acquisition proceeds from lower linguistic levels to higher ones. Rather, many child phonologists argue that children do not form a segmental representation until the size of their lexicon forces them to do so. I have applied this argument to a new area, showing
that isolated content words learned via real-world reference are not the sole building blocks of early language. I have used data on children's perception and production of function morphemes to suggest that some early sentence-level processes might serve as a catalyst for awareness of individual content words. In particular, I have demonstrated that young children who do not yet produce function morphemes are nevertheless sensitive to these elements. This is indicated by the fact that they omitted familiar English functors more frequently than weakly stressed nonsense syllables, probably because the former increased the linguistic complexity of the strings in which they occurred. Children also appear to use function morphemes in sentence perception, as indicated by the fact that they produced content words more accurately in strings with English functors than in strings with weakly stressed nonsense syllables. Both of these findings indicate that young children do not listen only for content words during sentence perception but are also sensitive to the form of sentences.
I have also suggested that children's sensitivity to function morphemes might grow out of their initial sensitivity to the prosodic structure of their language. This sensitivity to prosodic structure can be seen in the fact that children omit function morphemes and other weakly stressed syllables from some metrical patterns more frequently than from others. Children's apparent segmental underspecification of function morphemes is also consistent with the notion that their initial approach to these elements is prosodic rather than purely segmental. From a broad developmental viewpoint, it should not be surprising that some sentence-level processes may precede and facilitate some word-level processes. Indeed, Peters (1983) has demonstrated that a subset of children initially represent language in terms of units larger than content words. Further, research with infants has shown that these listeners are highly sensitive to the prosodic form of their own language (Fernald 1985; Jusczyk 1989). As language learners increasingly focus on assigning meaning to the utterances they hear, they may turn some of their attention to content words that are stably linked to real-world reference. However, they may also continue to analyze the intonational and rhythmic forms that were salient in infancy. Function morphemes potentially provide an important bridge between infants' early attention to prosodic aspects of language and their later attempts to assign meaning because these elements are part of the rhythmic character of a language and because they are also important cues to sentence partitioning and labeling.
Notes
1. Such an unordered, holistic representation of words will obviously not work for production. To produce a recognizable word, the speaker must send an ordered sequence of motor commands to the articulators. This raises the interesting possibility that the form of lexical representation that children use for perception is not the same as the representation they use for production. Perhaps a phonological or some other sort of segmental representation is the result of the combination of these two earlier forms.
2. With the exception of data from experiment 1, all reported data were subjected to analyses of variance with both subjects and items as random factors. Results reported to be significant were significant at the p < .05 level or below for both analyses.
No by-items analyses were performed on data from experiment 1 because there was a near-random assignment of subjects to conditions (see Gerken 1987a; Gerken, Landau, and Remez 1990).
3. The lack of an omission difference between high- and low-MLU children in experiment 4 stems from the fact that there were fewer utterances in which both content words were accurately produced in experiment 4 than in previous experiments. This was especially true of the lower-MLU children in experiment 4. Thus, these children produced fewer utterances that could be examined for omissions than did children with higher MLUs. The relative decrease in imitations containing two accurately produced content words is also responsible for the fact that there were fewer weak-syllable omissions in experiment 4 than in experiments 1-3. The reason for the children's less accurate productions in experiment 4 might be
the inclusion of the filler sentences, which increased the perceptual difficulty of the task.
4. Several conference participants pointed out that schwa is itself the English functor a. Therefore, children might have omitted schwa as often as English functors not because their English functor representation was underspecified but rather because they treated the schwa-schwa sequence as a-a. This is clearly a possible interpretation of the data. However, if children treat a-a as an English functor sequence, they would seem to have little knowledge of the syntactic rules involving functors, a notion that is not well supported in their early function-morpheme production (Valian 1986).
5. When children imitated natural speech, no differences in their content-word accuracy were found between familiar and unfamiliar items. However, for synthetic speech, familiar content words were imitated more accurately than unfamiliar content words. This is probably because, when the signal became more difficult to comprehend, children relied more strongly on familiarity. This, in turn, resulted in better encoding and imitation of familiar content words.
6. In the model of children's function-morpheme omissions outlined in the second section of this chapter, I argue that increased linguistic complexity at higher levels results in more frequent errors at the phonological level. This model might predict that children should produce content words less accurately rather than more accurately in strings containing English functors because these strings are more linguistically complex than strings containing weakly stressed nonsense syllables. It is likely that the presence of function morphemes would not only potentially facilitate encoding but also impair production by increasing complexity. The reason that the two effects did not cancel each other out probably reflects the fact that inaccuracy due to misencoding and inaccuracy due to production are not equally probable. If a child misencodes a word, the probability that she or he will produce it accurately is nearly zero. In contrast, if a child correctly encodes a word, the chances that she or he will accurately produce it are much higher than zero. Note that even children with lower MLUs omitted only approximately 30% of all function morphemes. Thus, there is a bias for encoding to be reflected more strongly than production in the content-word accuracy measure.
7. No iambic omission pattern was found when children imitated DECtalk stimuli. This might be because, for such stimuli, weak syllables were most often produced as filler syllables instead of as identifiable functors. In such utterances, it could not be determined which weak syllable had been omitted.
References
Allen, G. and Hawkins, S. (1980). Phonological rhythm: Definition and development. In G. H. Yeni-Komshian, J. F. Kavanagh, and G. A. Ferguson (eds.), Child phonology, vol. 2: Production. New York: Academic Press.
Bates, E. (1976). Language and context. New York: Academic Press.
Bloom, L. (1970). Language development: Form and function in emerging grammars. Cambridge, Mass.: MIT Press.
Bowerman, M. (1973). Early syntactic development: A cross-linguistic study with special reference to Finnish. Cambridge: Cambridge University Press.
Brown, R. (1973). A first language. Cambridge, Mass.: Harvard University Press.
Brown, R. and Fraser, C. (1964). The acquisition of syntax. Monographs of the Society for Research in Child Development, 29, 9-34.
Charles-Luce, J. and Luce, P. (1990).
Some structural properties of words in young children's lexicons. Journal of Child Language, 17, 355-371.
Clark, H. and Clark, E. V. (1977). Psychology of language: An introduction to psycholinguistics. New York: Harcourt Brace Jovanovich.
Echols, C. H. and Newport, E. L. (1992). The role of stress and position in determining first words. Language Acquisition, 2, 189-220.
Fernald, A. (1985). Four-month-old infants prefer to listen to motherese. Infant Behavior and Development, 8, 181-195.
Garrett, M. F. (1975). The analysis of sentence production. In G. Bower (ed.), Psychology of learning and motivation,
vol. 9 (pp. 133-177). New York: Academic Press.
Gelman, S. A. and Taylor, M. (1984). How two-year-old children interpret proper and common names for unfamiliar objects. Child Development, 55, 1535-1540.
Gerken, L. A. (1987a). Function morphemes in young children's speech perception and production. Unpublished doctoral dissertation, Columbia University.
Gerken, L. A. (1987b). Telegraphic speaking does not imply telegraphic listening. Papers and Reports on Child Language Development, 26, 48-55.
Gerken, L. A. (1990a). A metrical account of subjectless sentences. North East Linguistics Society, 20, 121-134.
Gerken, L. A. (1990b). Performance constraints in early child language: The case of subjectless sentences. Papers and Reports on Child Language Development, 29, 54-61.
Gerken, L. A. (1991). The metrical basis for children's subjectless sentences. Journal of Memory and Language, 30, 431-451.
Gerken, L. A., Landau, B., and Remez, R. E. (1990). Function morphemes in young children's speech perception and production. Developmental Psychology, 27, 204-216.
Gleitman, L. and Wanner, E. (1982). The state of the state of the art. In E. Wanner and L. Gleitman (eds.), Language acquisition: The state of the art. Cambridge: Cambridge University Press.
Greenberg, J. H. (1963). Some universals of grammar with particular reference to the order of meaningful elements. In J. H. Greenberg (ed.), Universals of language. Cambridge, Mass.: MIT Press.
Hawkins, S. (1973). Temporal coordination of consonants in speech of children: Preliminary data. Journal of Phonetics, 7, 235-267.
Hyams, N. (1986). Language acquisition and the theory of parameters. Dordrecht: Reidel.
Ingram, D. (1974). Phonological rules in young children. Journal of Child Language, 1, 49-64.
Jusczyk, P. (1985). On characterizing the development of speech perception. In J. Mehler and R. Fox (eds.), Neonate cognition: Beyond the blooming, buzzing confusion (pp. 199-229). Hillsdale, N.J.: Erlbaum.
Jusczyk, P. (1989). Developing phonological categories from the speech signal. Paper presented at the International Conference on Phonological Development, Stanford University, September 1989.
Jusczyk, P. W. and Thompson, E. (1978). Perception of a phonetic contrast in multisyllabic utterances by 2-month-old infants. Perception and Psychophysics, 23, 105-109.
Katz, B., Baker, G., and MacNamara, J. (1974). What's in a name? On the child's acquisition of proper and common nouns. Child Development, 45, 269-273.
Kimball, J. P. (1973). Seven principles of surface structure parsing in natural languages. Cognition, 2, 15-47.
Macken, M. A. (1979). Developmental reorganization in phonology: A hierarchy of basic units of acquisition. Lingua, 49, 11-49.
Maratsos, M. and Chalkley, M. (1980). The internal language of children's syntax. In K. Nelson (ed.), Children's language, vol. 2. New York: Gardner Press.
Menn, L. (1983). Development of articulatory, phonetic, and phonological capabilities. In B. Butterworth (ed.), Language production, vol. 2. London: Academic Press.
Morgan, J. L., Meier, R. P., and Newport, E. L. (1987). Structural packaging in the input to language learning. Cognitive Psychology, 22, 498-550.
Nelson, L. and Bauer, H. (1991). Speech and language production at age 2: Evidence for tradeoffs between linguistic and phonetic processing. Journal of Speech and Hearing Research, 34, 879-892.
Peters, A. (1983). The units of language acquisition. Cambridge: Cambridge University Press.
Peters, A. (1989). From schwa to grammar: The emergence of function morphemes. Paper presented at the Boston University Conference on Language Development, October 1989.
Petretic, P. A. and Tweney, R. D. (1977). Does comprehension precede production? The development of children's responses to telegraphic sentences of varying grammatical adequacy. Journal of Child Language, 4, 201-209.
Pinker, S. (1984). Language learnability and language development. Cambridge, Mass.: Harvard University Press.
Pye, C. (1983). Mayan telegraphese. Language, 59, 583-604.
Shipley, E., Smith, C., and Gleitman, L. (1969). A study in the acquisition of language. Language, 45, 322-342.
Smith, N. V. (1973). The acquisition of phonology. Cambridge: Cambridge University Press.
Valian, V. (1986). Syntactic categories in the speech of young children. Developmental Psychology, 22, 562-579.
Valian, V. and Coulson, S. (1988). Anchor points in language learning: The role of marker frequency. Journal of Memory and Language, 27, 71-86.
Waterson, N. (1971). Child phonology: A prosodic view. Journal of Linguistics, 7, 179-211.
Waterson, N. (1978). Growth of complexity in phonological development. In N. Waterson and C. Snow (eds.), The development of communication. New York: Wiley.
Wilbur, R. B. (1980). Theoretical phonology and child phonology: Argumentation and implications. In D. Goyvaerts (ed.), Phonology in the 1980's. Ghent: Story-Scientia.
Chapter 9
Learning to Hear Speech as Spoken Language
Howard C. Nusbaum and Judith C. Goodman
In order to understand spoken language, listeners must perceive the sounds of speech, recognize words, and use sentence structure and conversational conventions to interpret the relations between words, among other skills. But listing each of these skills separately does not imply that they develop independently. Indeed, these skills evolved together and are used together, so it seems plausible that at some level of description they develop together. Nonetheless, when we consider the research that has been carried out on language acquisition, it appears to us that two basic, yet tacit, assumptions underlie the way in which scientific questions are framed and investigated. In essence, we suggest that these assumptions serve to isolate research on the development of speech perception from research on the development of language understanding and to separate mechanisms of perceptual and cognitive development from mature perceptual and cognitive processing. First, there is an assumption that the mechanisms mediating the processing of speech and the processing of language are discrete, independent, and separable. Second, there is an assumption that the mechanisms responsible for the acquisition and development of perceptual and cognitive abilities are discrete, independent, and separable from the mechanisms that mediate those abilities once they are fully developed.
In many respects, we could say that these tacit assumptions are a consequence of focusing on specific research questions. Since it is impossible to investigate all aspects of the acquisition of spoken language understanding at once, it is necessary to select tractable research questions. Clearly this approach to research has been successful and has yielded an amazing array of studies directed at investigating the acquisition of language and the processing of language by adults. However, in spite of the benefits of reducing research questions to isolatable issues, there is a cost. Perhaps it is a good time to reconsider this research strategy and the cost involved in adopting these underlying assumptions. We hope that there is much to be learned by considering the similarities in the processes that mediate speech perception and language comprehension and by trying to characterize adult processes of spoken language understanding and the developmental process of language acquisition in a theoretically coherent way. However, it is difficult to do this given the prevailing research strategies.
To illustrate this, consider developmental research on speech and language abilities. Research on the development of speech perception is carried out almost exclusively with infants under the age of one year (see Aslin, Pisoni, and Jusczyk 1983; Jusczyk 1982). These studies have used a wide range of experimental methods to investigate, in infants, the perception of phonological contrasts and phonetic features (e.g., Aslin, Pisoni, and Jusczyk 1983; Eimas et al. 1971; Jusczyk, Copan, and Thompson 1978; Jusczyk and Thompson 1978; Werker et al. 1981; Werker and Lalonde 1988), prosody (Fernald 1985; Hirsh-Pasek et al. 1987; Kemler Nelson et al. 1989), and talker characteristics (Jusczyk, in press; Mehler et al. 1978). By comparison, there is almost no comparable speech perception research carried out with older children who are in the process of actually using language. Burke (1976), Cole and Perfetti (1980), Elliott, Hammer, and Evan (1987), Goodman (1989, 1991, 1992), Walley (1987), Walley and Metsala (1990), and Walley, Smith, and Jusczyk (1986) are a few notable exceptions, but their research is primarily focused on issues surrounding word perception. More commonly, however, starting with subjects around the age of one, research focuses more on lexical and sentential abilities (Au 1990; Au and Markman 1987; Bates, Bretherton, and Snyder 1988; Bloom 1970; Brown 1973; Huttenlocher and Smiley 1987; Soja, Carey, and Spelke 1990). One could interpret this shift in focus as suggesting, without explicitly stating, that psycholinguistic processing develops independently of and subsequent to speech perception. Even if many would disagree with this strong interpretation, the shift in research emphasis from phoneme perception to higher-order linguistic units implies a reduced interest in understanding the perceptual development of the phonological system after the first year of life, relative to understanding the processing of higher-order linguistic units. Indeed, the research questions posed in studies on language development contrast sharply with those that are addressed in infant speech research. Studies using young children examine how they acquire new lexical knowledge (Au 1990; Au and Glusman 1990; Au and Laframboise 1990; Carey and Bartlett 1978; Clark
1989; Bloom 1970). One implication of this view of language development is that the development of the speech recognition abilities must be complete in order to serve as the perceptual foundation for the development of higher, more abstract levels of language (e.g., Morgan 1986). Although this is an appealing general explanation of the development of psycholinguistic abilities, it may oversimplify what could be very interrelated and perhaps functionally interdependent processes and knowledge. Moreover, it is important to examine this theoretical perspective because, in treating the development of speech perception as separate from the development of language, researchers may have overlooked critical generalizations that would be informative about both developmental processes. These generalizations could also serve to constrain theories of adult speech perception and language understanding if they lead to new ways of thinking about adult spoken language processing. The Separation of Speech and Language
We turn to the first tacit assumption that we believe has shaped a great deal of research on language understanding and language acquisition and emphasize that there are good, pragmatic reasons for this assumption. Our view, however, is that it is important to consider whether taking a different approach may provide new insights into these problems. We are well aware that we are not the first to question the assumption that speech and language are mediated by separate systems. On the other hand, it is important to continually re-examine our basic assumptions to see if there is something new to be gained by changing them.
The assumption that speech perception and language comprehension are mediated by different mechanisms is as old as the distinction between perception and cognition and is probably motivated by the same general concerns. Indeed, the primary motivation behind searching for different mechanisms to mediate speech perception and "higher-order" psycholinguistic functioning is derived from the notion that speech perception serves as a modular, knowledge-encapsulated "front end" to language: a simple pattern-recognition module that turns continuous variation in sound into discrete linguistic categories (Fodor 1983). This modular separation of the mechanisms of speech perception (and their development) from the mechanisms of language understanding (and their development) does not have its origins in language acquisition research, although it appears to have been accepted there. This separation seems to reflect the distinction in linguistics as a discipline between research in phonetics, which is associated with speech research in psychology, and research in all other branches of linguistics, which is more closely associated with higher-level psycholinguistic research. The research methods of acoustic phonetics and speech perception are grounded in instrumental studies of the physical properties of the speech signal, combining elements of psychophysics with linguistic description, but, from the modal view of linguistics, they are mired irretrievably in the performance side of the competence/performance distinction (e.g., Chomsky 1968). By contrast, phonology, morphology, syntax, semantics, and pragmatics are much more conceptually abstract and, in some ways, are described using more formal methods (e.g., Chomsky 1957; Langendoen and Postal 1984; Wexler and Culicover 1980). These more formal theories generally seek to characterize the linguistic competence of the language user as knowledge uncontaminated by the performance limitations of the perceptual and cognitive systems (although see Lakoff 1987; Langacker 1986; Ohala and Jaeger 1986 for different views from the modal linguistic perspective). This modular separation is realized in psychological research by the assumption that the processes that mediate speech perception serve to render the messy, highly variable acoustic waveform of speech into the pristine, discrete symbolic form that can be dealt with by higher-order psycholinguistic processes. This divides spoken language processing into a purely bottom-up, sensory transducing pattern recognizer and a top-down, conceptually driven comprehension system.
In some sense, this division parallels early work on computer vision (e.g., Guzman 1968; Minsky 1963; Waltz 1975) in which it was assumed that the early sensory functions of the visual system serve to render the continuous variation in reflected light values into discrete features or symbols that are the atomic units of pattern recognition. By recoding continuous sensory variation in this way, models of vision converted the problems of segmenting scenes into objects and recognizing those objects into more simplified tasks of comparing propositions. However, in doing so, these models essentially avoided the most difficult and fundamental theoretical issues concerning the way sensory information makes contact with mental representations and probably, as a result, led to theories with all the wrong assumptions (see Weisstein 1969).
Similarly, separating the theoretical issues surrounding speech perception and its development from those surrounding language understanding and its development may lead to basic assumptions for theories of both processes that misrepresent or seriously distort their nature and relationship. Indeed, there is a growing body of evidence suggesting that the mechanisms of speech perception are coextensive with, in service of, and dependent upon the mechanisms of language understanding in fairly complex ways. For example, studies of phoneme perception in adult listeners have demonstrated that the distribution of attention in phoneme perception is affected by factors relevant to lexical segmentation (Nusbaum and DeGroot 1990), the identification of phonemes is influenced by lexical status (e.g.,
Ganong 1980; Fox 1984; Miller and Dexter 1988), phonemic restoration is influenced by lexical knowledge (Samuel 1986), and the phonetic interpretation of acoustic cues may be affected by sentence-level information (Connine 1987; Connine and Clifton 1987; Dechovitz 1979). Although there may be some question about the specific mechanisms by which higher-order linguistic knowledge influences speech perception (e.g., postperceptual response bias versus direct modification of perceptual sensitivity), in order to explain the processes of speech perception and language understanding it will be important to investigate the interactions that occur between the two types of processing.
Moreover, regardless of whether speech perception and language comprehension processes interact as separate systems or operate as one system, similar computational principles may govern both. For example, both may operate as dynamically adaptive systems sensitive to the distributional characteristics of information (e.g., compare claims about learning syntax by Billman 1989 and learning phonetic structure by Greenspan, Nusbaum, and Pisoni 1988 and Nusbaum and Lee 1992). A systematic consideration of the similarities in processing between them may provide insights into a common set of psychological principles (see Nusbaum and Henly 1992). To the extent that speech perception and language comprehension must deal with similar computational problems, given that both evolved simultaneously within the same neural system, it seems plausible that these abilities may exploit a common set of computational or psychological principles. Note that we are not saying that both processes are carried out by a common system, although that is also a possibility; rather, we are claiming that the processes used by each level may operate in similar ways. Unfortunately, this kind of analysis is seldom carried out, and these considerations play little, if any, role in current theories. Indeed, the vast majority of theories about speech perception have been directed at explaining phoneme perception without any consideration of the processing that might be carried out by any higher-level linguistic mechanisms (e.g., Abbs and Sussman 1971; Liberman et al. 1962; Liberman and Mattingly 1985; Stevens and Halle 1967). Other theories, albeit a smaller number, have been directed at explaining word perception without explaining either phoneme perception or higher-level processes (e.g., Grosjean and Gee 1987; Marslen-Wilson 1989). Finally, only a very few have attempted to account for both phoneme and word perception, and these do not go beyond the level of words (e.g., Klatt 1979; McClelland and Elman 1986). Even approaches that advocate processing interactions among levels do not try to explain speech perception or language understanding beyond the levels they directly address. Instead they simply acknowledge the existence of other levels of processing and make use of their outputs. For example, cohort theory (Marslen-Wilson 1989; Marslen-Wilson and Welsh 1978) attempts to explain word recognition as a consequence of interactions between bottom-up phonetic recognition and top-down semantic analysis, but it does not address the theoretical issues of phoneme perception or sentence comprehension.
In short, these theories often acknowledge that processing goes on at other levels, and they explain how it might be used at the level they are addressing, but these theories seldom, if ever, specify how this processing is carried out, making even the most interactionist approach functionally modularist (see Fodor 1983 for a different interpretation of the data that gave rise to cohort theory). Of course, piecing together theories of processing at each level is a tall order, particularly when each theory rests on an independent set of processing assumptions. If we reconsider the tacit assumption of completely different mechanisms underlying each linguistic level, however, then we might rethink how these levels of language processing would interact. To the extent that different systems are governed by common principles, it may be much easier to explain the interactions among these systems. A broader perspective on the problem of speech perception than is currently taken by extant theories might lead to broader theoretical principles. Rather than treating phoneme perception as a different kind of phenomenon from word perception, prosody perception, utterance interpretation, and so on, it might be fruitful to examine these phenomena as if they share some common processing considerations. Again, it is important for us to emphasize that in suggesting a consideration of disparate phenomena as being related, we are not advocating a single, global
processing subsystems that are constrained by the same kinds of neurophysiological/computational resources, there may be important similarities in their operating characteristics. Thus, by considering these phenomena to be related, it may be possible to identify theoretical principles that are sufficiently general to be applied across levels of linguistic analysis and are still sufficiently well specified to make testable predictions (Nusbaum and Henly, in press). Let us consider an explicit example of the way that a broader perspective on a theoretical problem could lead to a different understanding of that problem. The lack of invariance in the relationship between acoustic properties and the phonetic interpretation of those properties has posed one such problem for theories of speech perception (see Liberman et al. 1967). One acoustic property, such as a particular consonant-release burst or a specific pattern of formant frequencies, may have more than one phonetic interpretation, depending on factors such as the phonetic context for the property (Liberman et al. 1967) or the vocal characteristics of the talker (Peterson and Barney 1952). Furthermore, two different acoustic properties may have the same phonetic interpretation depending on the context (Liberman et al. 1967) or the talker (Peterson and Barney 1952). Thus there is a many-to-many relationship between acoustic properties and phonetic categories that listeners successfully resolve in order to identify correctly phonemes across variations in context and talkers. It is interesting that in spite of the clear similarities across these different types of lack of invariance in mapping acoustic cues to phonetic segments, theories of speech perception have typically addressed each type as if it were a separate and independent theoretical problem. Even though a very few models have referred to more general principles, such as making use of sources of lawful variability (e.g., Elman and McClelland 1986), the actual mechanisms proposed by these models are clearly specific to a particular type of variability, and there is no clear way to generalize the mechanisms. Consider the mechanism by which TRACE (McClelland and Elman 1986) implements coarticulatory influences between adjacent segments in order to resolve the problem of context-conditioned variability in acoustic-phonetic mappings. The weight of acoustic evidence for a particular phoneme changes, depending on the phonemes that have been recognized in the surrounding context. While this handles the problem of lack of invariance to changing phonetic context, it cannot be generalized even to address the problem of changing vocal characteristics for talker normalization or for speech rate normalization (Klatt 1986). Both of these page_305 Page 306 are further cases of the lack of acoustic-phonetic invariance in speech (see Liberman et al. 1967), but in TRACE they would have to be handled by other specific mechanisms than those currently implemented, and there is no simple extension of the current principles that can constrain those mechanisms. Thus, since neither a talker's vocal characteristics nor speaking rate are represented (and probably should not be represented) in a listener in the same way as phonemes, TRACE cannot be extended to deal with these problems using its current computational principles. 
Similarly, mechanisms proposed to address the problem of talker variability (e.g., Gerstman 1968; Syrdal and Gopal 1986) cannot be easily transferred to the problem of variability due to changes in phonetic context or speaking rate. Although the structural similarity across forms of lack of invariance have been noted (e.g., Liberman et al. 1967), there has been no attempt to propose a set of principles that could address all forms and thereby constrain theories of speech perception. This does not mean however, that it would be impossible to develop a single set of principles that could constrain theories of speech perception and possibly provide a general description of the mechanisms needed. But in order to do so, it is important to analyze the similarities among these phenomena Extending this approach more broadly to issues in spoken language understanding, Nusbaum and Henly (in press) point out that this of lack of invariance is not unique to phoneme perception. In word recognition, the phonemes that form words (not the acoustic patterns of the phonemes but the actual sequence of phonological segments) varies also with context (e.g., "Did you" ? /dIj^/ versus "Did she" ? /dIdSi/) and with talker characteristics (e.g., pin can refer to either a writing instrument or to a small, sharp object in one dialect, whereas the names of these objects would be pronounced pen and pin in another). In prosody perception, there is a many-to-many relationship between intonation contours and their interpretation that depends on talker and context (cf. Bollinger 1989). In sentence understanding, of course, the same idea can be expressed with different orderings of words, and one ordering of words can express different ideas depending on context. Thus, at many levels of language understanding, there is a lack of invariance between the pattern properties that express linguistic structure and the linguistic interpretation of those properties. Of course, the fact that there are similarities in the description of this problem does not mean we are proposing a single, global mechanism for all of language processing.
Still, it is possible that the same type of computational or psychological principles can describe the set of mechanisms that could resolve this lack of invariance problem regardless of the level of linguistic description. On one hand, the specific implementations of mechanisms dealing with different forms of lack of invariance could differ across levels of linguistic representation; on the other hand, the computational theory describing the operation of these mechanisms may be very similar (Marr 1982). By considering more general solutions to this problem, we might gain insight into the psychological and computational principles that are important to both speech perception and language understanding.
Certainly the theories and mechanisms that have been previously proposed to address this problem in the domain of speech perception cannot be generalized to other domains in any straightforward manner. Indeed, it is extremely unlikely that the kinds of very specific approaches based on articulatory or acoustic phonetics that have been proposed to address the problem of lack of invariance in phoneme perception are extensible to these other levels of analysis. For example, if TRACE (McClelland and Elman 1986) can address only one of three different and well-known forms of lack of invariance in relationships between acoustic properties and phonetic categories, it seems improbable that this kind of domain-specific implementation could be extended to address this problem at other levels of linguistic analysis. Indeed, no simple adjustment could explain how the meanings of words in sentential context interact to facilitate word recognition. Within TRACE, for example, phoneme units fire in response to sufficient activation of their constituent acoustic features, which are defined within a fixed and narrow window of time. This definition of an acoustic feature is specific to the distribution of acoustic information at the segmental level and will not suffice to explain the recognition of intonation contours as indicating either a question or a declarative statement (Hadding-Koch and Studdert-Kennedy 1964). Intonation contours must be recognized over much broader durations of speech (i.e., whole sentences) and would therefore require a new class of acoustic feature detectors with different computational characteristics. Thus, while it is certainly possible to devise a new set of computational units for every perceptual unit in speech, the definition of these units would be idiosyncratic and ad hoc. Similarly, almost all models of speech perception, such as motor theory (Liberman et al. 1962; Liberman and Mattingly 1985), LAFS (Klatt 1979), and analysis-by-synthesis (Stevens and Halle 1967), as well as other theoretical approaches such as the search for invariant acoustic cues (e.g., Stevens and Blumstein 1981), are based on principles, mechanisms, or assumptions that are intrinsically tied to the representation of phonemes
Although a final theoretic specification may need to incorporate some aspects of domain-specific mechanisms or processes, these processes must operate in accordance with these principles or they will not be sufficient to account for the basic phenomena in their domain. Thus, if we reconsider the tacit assumption that the mechanisms underlying different levels of linguistic processing are independent, we might establish new computational principles that are sufficiently general to explain how listeners cope with pattern/interpretation variability across levels of perceptual and linguistic analysis. The Separation of Children and Adults Just as speech perception and language understanding have been viewed, theoretically at least, as separate and dissociable systems (Fodor 1983), there appears to be a parallel dissociation between explanations of spoken language perception and understanding in children and adults. On the surface, the reason for this seems straightforward: the linguistic performance of children and adults often appears to be qualitatively different, and it is therefore plausible to ascribe these differences to different kinds of processes. For example, linguists and psycholinguists have attributed different sorts of syntactic rules to children and adults, almost as if children and adults were speaking different languages (Brown 1973; but see Pinker 1984). Indeed, some researchers have described native English-speaking children as using a language that in some respects is structurally more similar to Italian (Hyams 1987) than it is to English. Although it is not
necessary to interpret this type of finding as evidence that children and adults process language differently (since it could reflect differences in linguistic knowledge rather than in processing), the sense one gets from these studies is that children and adults are fundamentally different kinds of language processors. Even if this is not the intended interpretation, the paucity of attempts to provide theories that connect different forms of linguistic behavior across levels of development implies this underlying view. This separation between children and adults is true at other linguistic levels as well. For example, in speech research, neonates are described as endowed with mechanisms designed to recognize a spoken language
Osherson, Stob, and Weinstein 1986)the computational structure of page_309 Page 310 some theories simply cannot be derived by a series of small modifications starting from a more simplified or primitive computational mechanism. There are few computational theories that can adequately account for systematic changes in a computational mechanism to achieve some computational goal or function. In fact, the most developed theories of computational change are the learning algorithms of connectionism. In these approaches, the processing mechanisms do not change with learning; rather, the representation of knowledge changes. In addition, the development of any mechanism proposed to account for adult processes should be biologically plausible. Although it seems reasonable to suppose that the neurophysiological changes that occur during development will have consequences for psychological processes, these changes must have some constraints. The kinds of biological changes that take place during maturation are unlikely to be sufficient to create a new mechanism of arbitrary complexity and power. Furthermore, in the interest of parsimony, a theory of adult processing mechanisms should be related to observed developmental changes in perceptuolinguistic abilities. Since we know that there are systematic changes in language ability with maturation and linguistic experience, we should restrict our theories to those that could have come into
existence in reasonable ways. The second issue concerns specifying the nature of transitions in development. Even if we accept different kinds of theories of speech perception to account for behavior in children and adults and assume that some particular adult theory is in principle learnable (i.e., computationally derivable or at least related to the child theory), we will still need to specify more precisely how learning or development changes infant ability into adult ability. In order to provide a complete explanation of speech perception, we need to explain more than just the starting point of this ability and its accomplished final form. We need to explain how this ability progresses throughout language acquisition. This may require an explicit theory of development to account for this progression. In other words, if it can be established that a particular theory could have developed, in some fashion, from the neonate's biological endowment and experiences, it is necessary to specify the precise way in which the theory developed. After all, different theories of perceptual development could, in principle, underlie any given adult perceptual theory, and these different developmental theories could make different predictions about adult performance. For example, a maturational theory might pre page_310 Page 311 dict that, following a biological critical period, processing in adults might be inflexible, whereas a learning theory might predict continued flexibility in processing. Few, if any, theories of speech perception, however, specify both an explicit perceptual theory and a theory of perceptual development (although see Jusczyk, this volume). Therefore, it is unclear how any particular theory of adult language processing could have come into existence. Unfortunately, this is a problem since recent research demonstrates that even adult theories should allow for (and therefore explain) development. For example, adult listeners can learn new phonological contrasts that are not part of their native phonological system (e.g., Pisoni, Lively, and Logan, this volume). Since learning is not within the domain of most adult theories of speech perception, these theories fail to explain this sort of ability. Thus, there are findings in adult speech perception research that theories treat as irrelevant because the underlying theoretical assumptions of these theories isolate them from developmental issues. These findings suggest, however, that research on and theories of the development of speech perception could provide important constraints on the shape of a theory of adult speech perception. Research investigating the way it develops may provide important clues about the knowledge and processes that mediate adult speech perception. When we discussed the separation of speech and language, we suggested that this dissociation is a consequence of accepting a modular view of spoken language processing. We suggest that an analogous assumption of modularityonly in diachronic termsunderlies the dichotomy between research on adult speech perception and the development of speech perception. In part, this form of modularity may result from characterizing development as a series of stages (Piaget 1975). A great deal of work in developmental psychology has questioned the notion of development as a progression of qualitatively distinct stages (cf. Gelman and Baillargeon 1983; Siegler 1986). 
Nonetheless, much of the research in language acquisition has viewed development as a sequence of qualitatively different knowledge structures rather than as a systematic progression that can be described as a continuous function in the mathematical sense. For example, classical U-shaped curves in development have typically been described as if there were entirely different kinds of mechanisms responsible for the different parts of the curve (Pinker and Prince 1988) rather than a complex interaction of different aspects of a single developmental process (but see Rumelhart and McClelland 1986; Plunkett and Marchman 1991). With the focus of interest in developmental psychology often on characterizing the processing carried out at different stages and how this processing differs from stage to stage, the specific transitions that occur between stages seem to have become of secondary importance. Given the assumption that stages differ qualitatively, it is challenging enough to explain the knowledge and processes that operate at each stage without trying to explain precisely how the knowledge and processing are changed from stage to stage. Of course, the dramatic differences between children of different ages need to be understood at some level before the transitions between them can be investigated and explained. Perhaps, however, a focus on the changes in knowledge and process would support coherent theories of language processing that
account for both the performance of infants and children and of adults. In essence, the existing bias is no different from the working assumption in cognitive psychology that theories of adult cognition and perception can be formulated and tested without reference to or accountability for the development of the adult cognitive system. We have come to understand a great deal about cognitive processes by this approach. However, just as every theory of adult cognitive processing could have arisen from any one of several different developmental theories, the qualitative transmutation of processing from one stage to another could take place by any one of several developmental mechanisms, and just as this is ignored by most cognitive psychologists, the mechanism of transition is all too often ignored by developmental psychologists (see Goldin-Meadow et al. 1993). If we instead view development as continuous change in several underlying mechanisms, an examination of the transitions might be informative about the way those mechanisms interact (see Thelen 1990, 1992). For example, the development of speech perception might be continuous and interconnected over time, with different processes, which overlap and coexist in time, given more or less weight at different points in development depending on factors such as functional relevance, available domain-specific knowledge, physiological maturation, and practice. Most research on the development of speech perception has served to catalogue the perceptual abilities of infants and compare these directly with adult perception (e.g., Aslin, Pisoni, and Jusczyk 1983), with little research directed at characterizing the developmental progression after the first year of life (although see Best, this volume; Tees and Werker 1984; Werker and Tees 1983; Werker, this volume). Few, if any, theories provide any systematic account of the developmental relation between the abilities of the infant to perceive speech and the abilities of the adult to understand spoken language (although see Jusczyk 1986, this volume; Pisoni, Lively, and Logan, this volume). Most researchers do not treat development as consisting of completely abrupt and discrete stages. Nonetheless, the framework of stage theories is still reflected in our accounts of data: there is a tendency to treat developmental change as a sequence of changing skills or knowledge. Consider the influence of stage approaches on recent research on the role of prosody in segmentation of linguistic units during infancy. The findings suggest a sequence of stages of development in which infants are first able to segment speech into clauses, then into phrases, and, finally, into words (Hirsh-Pasek et al. 1987; Jusczyk, this volume). Thus, during the second half of the first year of life, infants are viewed as successively refining the process of segmenting fluent speech into linguistic units, starting with larger units and, after developing some competence segmenting those units, working toward segmenting speech into smaller units until they are processing separate lexical items. Infants may engage in this stagelike progression of development of the units in segmenting speech, but other interpretations could be considered as well. It may be possible to explain the same data based on changes in the functional relevance of the different units of analysis.
This is because the methodology used by Jusczyk and his colleagues tests for infants' preferences for an utterance with a pause at a unit boundary as compared to an utterance with a pause in the middle of a unit. Any explanation that posits sensitivity to the disruption of a unit by a pause is also viable. Suppose that infants, even as neonates, are capable of segmenting speech into words, phrases, and clauses. In other words, suppose that from their very first exposure to speech, infants perceive words, phrases, and clauses as discrete and separable units. We would then have to explain the apparent development of segmentation in these infants by some other progressive change such as changes in the relative functional significance of these units to the infant. For example, at around four months of age, the affective tone expressed in the prosody of a sentence may be the most communicative and important for infants (Fernald 1984, 1985). As a result, infants may initially be most attentive to a linguistic unit that provides the greatest affective information. Stated differently, even though the words, phrases, and clauses could all be available to the infant listener as separable units, it is the information distributed over the largest of these that is most important for the youngest infants. As infants begin to understand, through interactions with adults, that parts of these units actually have a different kind of meaning, they may focus attention on these smaller, phrasal units. As they begin to understand that particular sound patterns can be associated with particular objects or
events, they may become more interested in words and may direct attention to those units. It is important to note that this interpretation does not require successive stages of development with each predicated on and replacing its antecedent. Even adults will continue to pick up the affective tone expressed over an entire utterance, and there is no reason to posit that this is carried out differently for children and adults. All this explanation requires is a change in the relative functional importance of different units of analysis as the infant's knowledge about the meaning of those units changes with experience. This type of explanation does not require a qualitative change in processing but instead a change in the child's listening goals. This example illustrates that perceptual data from one paradigm do not constrain theories sufficiently to support stagelike interpretations unequivocally, even though this kind of interpretation is made frequently. Although we know that there is a systematic progression, with age, in the perceptual sensitivity to disruption of different types of linguistic units, the explanation of this progression is not manifest in the data. This finding may provide the basis for a hypothesis, but converging operations from a wide range of paradigms may be necessary to constrain the possible explanations. In general, the description of development as stages that are associated with increasingly sophisticated, hierarchically structured linguistic or perceptual abilities may be difficult to substantiate directly. Our concern, however, is that this type of description is typically assumed rather than taken as one of several possible explanatory frameworks. It is possible that qualitative changes in behavior result from continuous change in different parts of extant underlying processes (Devaney 1986) rather than from a succession of qualitatively different processes. The notion of development as a series of stages suggests the idea that within a stage only one kind of ability can be found (e.g., clause segmentation, pivot grammar constructions) rather than a complex mix of processes. According to the modal view, the progression from one stage to the next is conceived of as a quantal jump rather than a smooth progression. The view that development is more continuous suggests more fluidity and overlap between these stages. Apparent jumps in behavior from one stage to another could be due instead to incremental changes of differing scale across several different processes, all operating simultaneously, or different mixtures of the extant processes at different points in time. Although it is true that very few researchers actually treat developmental stages as completely discrete and isolated points in development, it is also true that there is very little research that directly examines the transitions that bridge changes in performance (Goldin-Meadow et al. 1993). In sum, if we are to understand development, it may be problematic to describe or explain cognitive and psycholinguistic processes operating at one point in time as if they could be isolated from other points in time. The appearance of qualitative differences in patterns of performance across ages does not, by itself, constitute evidence for qualitatively different processes. This may be generally well understood and accepted by researchers; however, it is too often overlooked.
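This alternative can be made concrete with a toy simulation. The sketch below is our own illustration, not a model drawn from any of the studies cited here; the function names, onsets, gains, and time units are all arbitrary assumptions. Two processes, a clause-level and a word-level analysis, overlap and coexist at every age, and only their relative weights change, smoothly and continuously; yet the observable "preference" flips abruptly, in just the way a stage account would seem to require.

import math

def weight(t, onset, gain=1.0):
    # Logistic growth of a process's functional weight with experience t.
    return gain / (1.0 + math.exp(-(t - onset)))

def dominant_unit(t):
    # Both processes are present at every point in development; behavior
    # reflects whichever currently carries more functional weight.
    clause_level = weight(t, onset=1.0, gain=1.0)  # affectively salient early
    word_level = weight(t, onset=7.0, gain=2.0)    # gains relevance with word meaning
    return "clause" if clause_level >= word_level else "word"

for month in range(13):
    print(month, dominant_unit(month))

The printed preference jumps from "clause" to "word" near the crossover point even though nothing in the underlying mechanisms changes discretely, which is precisely the kind of continuous account that the stage framework obscures.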
Moreover, understanding processing at one age may necessitate understanding the differences that precede and follow it. The ultimate explanation of these differences may rest on a complex mix of processes that changes in relative emphasis over time or processes for which the knowledge or inputs change in different ways. Theories of speech perception might have a radically different form if theoretical questions about how speech is perceived encompassed the lifespan of the listener from a continuous developmental perspective rather than from a punctate view. Therefore, we suggest that a complete theory of speech perception ultimately must account for the way speech perception develops as well as explain perceptual abilities during the seemingly steady states of infancy and adulthood.

Speech Perception as Cognitive Psychology

We have argued that, from our perspective, most theories of speech perception and language comprehension by adults are not intrinsically constructed to account for developmental data, either in terms of maturational plausibility or "learnability," because this has not been seen as part of their "domain of accountability." These theories were designed only to account for the perceptual and linguistic performance and abilities of adults, so they generally do not attempt to account for either the performance of children at different ages or the mechanisms by which this performance changes with age and experience. Furthermore, even when current theories take an "interactionist" approach by combining multiple levels of analysis to explain speech perception or language comprehension, they remain functional modularists. While they stress the importance of input from multiple levels of analysis in processing linguistic units at one level, they do not explain the operation of the processes at those other levels. This is certainly reasonable given the goals of these theories. However, we have discussed how the goals of these theories have been shaped by tacit assumptions about the
structure and development of language understanding, and this may limit the viability and explanatory power of these theories. Since these assumptions have constrained the kind of explanatory principles applied to problems in speech perception, there is no simple or straightforward way to extend them beyond their current domains of explanation. Rather, the fundamental nature of these theories must be altered to incorporate mechanisms that can explain development, and this means researchers need to start with a different set of governing principles. This requirement can be seen clearly in two specific examples that span the entire range of current theories of speech perception: the TRACE model (McClelland and Elman 1986) and motor theory (Liberman and Mattingly 1985). These theories differ from each other in almost every way possible, and yet they both typify the inherent limitations of the current zeitgeist. At the heart of TRACE is a computational network of simple feature detectors cross-connected to encode acoustic-phonetic and lexical knowledge. This means that to extend TRACE beyond its current linguistic limitations requires encoding linguistic knowledge such that there is a one-to-one correspondence between linguistic units and feature detectors and that all relationships among linguistic units must be defined a priori through a set of hardwired connections in the network. Clearly this particular architecture will limit the kinds of linguistic processing that can be carried out as it will rule out anything that requires computation with formal argument structure or structure-sensitive operations, such as syntactic processing or morphological decomposition (see Fodor and Pylyshyn 1988). Even more important is the limitation that is imposed by the static nature of the computational mechanisms employed. In order for processing to operate successfully, the network must be stable because the intensional semantics of the model are directly encoded into this structure. This kind of local representation network cannot incorporate the kind of learning that is employed in other connectionist models that use distributed representations. To modify a local representation network and to integrate correctly new knowledge into the network, any learning algorithm would have to understand the extant intensional semantics of the network, which would require a homunculus. Of course this would violate the basic computational principles of the model. Similarly, motor theory (Liberman and Mattingly 1985) is directed at explaining phoneme perception through direct recognition of the articulatory configurations that produced the acoustic patterns of segments. Thus, any pattern that does not have an intrinsic articulatory/acoustic relationship cannot be processed according to this principle. This means that while the theory could possibly be extended to account for perception of prosody in utterances, it could never address any aspect of psycholinguistic processing, such as word recognition from a pattern of phonemes, or sentence understanding from a pattern of words. Furthermore, this theory is based on the precept that there is an invariant relationship between speech production and acoustic patterns that is directly detectable by the listener. Since this relationship is an intrinsic property of speech rather than knowledge that the listener has, it is not learned.
The theory need not incorporate any learning because it starts from the assumption that perception is afforded by the stimulus rather than computed by the listener (cf. Gibson 1979). For both these examples, as for almost all theories of speech perception, the limitations on these theories are inherent in their underlying principles. In order for theories to transcend these limitations, they must be constructed according to principles that are sufficiently general to permit application to almost any linguistic level of analysis and are specified to be flexible and adaptive. However, even a cursory glance at current theories of speech perception indicates that the principles that govern their operation are domain specific and static. At their core, these theories often make use of simple pattern matching, using feature detectors (e.g., Abbs and Sussman 1971; McClelland and Elman 1986), template matching (e.g., Klatt 1979), or prototype matching (Massaro 1987). These are all static procedures that compare input information to stored representations following different algorithms. Even the way these theories account for context sensitivity in pattern matching is fixed and immutable, including the use of fuzzy logic (e.g., Massaro 1987), probabilistic networks (Klatt 1979), and local reweighting of connections (McClelland and Elman 1986). There is nothing in the specification of any of these principles that is adaptive; in fact, each of these is a passive, open-loop computational process that implies invariance in processing (see Nusbaum and Schwab 1986). As a consequence, any theory built from
these principles alone may therefore be unable to account for development and may require additional assumptions to provide any flexibility of processing. As Nusbaum and Henly (1992) have pointed out, there is a need for greater flexibility in both the processes and representations that are posited by theories of speech perception. For example, most theories of speech perception assume (although this assumption is seldom explicitly stated) that there is a fixed "window of analysis" that is applied to all utterances in order to recognize linguistic units. This means that most theories assume that recognition of linguistic units is based on analyzing segments of waveform that are fixed in duration. Although the size of this window might vary between levels of analysis (e.g., phonemes have one window and words another), these windows should be fixed across any particular level. Thus, phoneme decisions should all be based on one fixed amount of acoustic information. Similarly, the amount of signal needed to decide that a particular word has been spoken should be based on an analysis of a fixed amount of speech signal (see Frauenfelder 1992). In contrast with this view, based on several recent studies of speech perception, Nusbaum and Henly argued that listeners appear much more strategic and flexible in processing and appear to be adopting a dynamic "window of analysis" that changes with the nature of the decision and context. They argued that what constitutes evidence for recognizing a particular phoneme or word in one context could be radically different in a different context, and listeners strategically exploit those differences. For example, Nusbaum and DeGroot (1990) demonstrated that, in recognizing phoneme targets in nonword utterances, listeners attend to phonetic information on both sides of a syllable boundary. However, when recognizing phoneme targets in words, listeners attend more to phonetic information in the same word as the target than to phonetic information across a syllable boundary in an adjacent word. Nusbaum and Henly argued that listeners seem to vary strategically the use of pattern information in an utterance depending on the availability of other sources of information (e.g., lexical status). From this and other examples of strategic flexibility in speech perception, Nusbaum and Henly claimed that it would be a mistake to construct a theory of speech perception based on a fixed mechanism for both pattern recognition and for instantiating context sensitivity and a fixed form of mental representation for linguistic units. Instead, theories of speech perception need to be based on principles of processing and representation that can dynamically adapt to the specific sources of information available in any particular utterance, context, or situation. The problem is, of course, to find a set of principles that provide sufficient flexibility to account for this kind of result as well as the kind of adaptability needed to explain the development of speech perception. Clearly the kinds of principles that have been used in the past are not sufficient for this purpose. However, cognitive psychology is concerned with explaining precisely this kind of adaptability and flexibility in processing (cf. Langley and Simon 1981). Thus, perhaps, we should look to cognitive psychology as the source of principles to guide the construction of theories of spoken language processing (see Nusbaum and Henly, in press, for an extensive discussion).
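To make the contrast concrete, consider a minimal caricature, in Python, of the static, open-loop matching procedures just described. The feature dimensions, values, and category labels are invented for illustration; no cited model is implemented here.

PROTOTYPES = {           # hypothetical stored representations
    "b": (0.2, 0.9),     # e.g., (voice-onset time, burst energy), scaled 0-1
    "p": (0.8, 0.9),
    "m": (0.1, 0.2),
}

def classify(features):
    # Fixed comparison of input to stored prototypes: no learning, no
    # attention, no context; the same rule applies to every utterance.
    def distance(proto):
        return sum((f - p) ** 2 for f, p in zip(features, proto))
    return min(PROTOTYPES, key=lambda c: distance(PROTOTYPES[c]))

print(classify((0.25, 0.85)))  # -> 'b', by the same invariant computation every time

Whatever algorithm fills the role of distance here, whether fuzzy, probabilistic, or connectionist, the procedure and the representations are specified in advance and never adapt, which is exactly the limitation at issue.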
Moreover, beyond the basic issue of processing flexibility, there are a number of findings in speech research that can only be accounted for by principles derived from cognitive psychology (see Nusbaum and Henly, in press). It is well known that speech perception is affected by the listener's expectations (Carden et al. 1981; Nusbaum and Morin 1992; Remez et al. 1981), by selective attention (Nusbaum and DeGroot 1990; Wood and Day 1975), by capacity limitations (Bookbinder and Osman 1979; Moray 1969; Nusbaum 1981; Nusbaum and Morin 1992), by perceptual learning (Pisoni et al. 1982; Schwab, Nusbaum, and Pisoni 1985), and by category structure (Kuhl 1991; Li and Pastore 1992; Samuel 1982). These findings, however, are seldom explained by theories of speech perception because there is a tacit assumption that these effects are not reflective of the normative operation of the basic processes of speech perception. On the other hand, by assuming that speech perception and language comprehension operate as cognitive processes rather than as static, passive, specialized, and encapsulated mechanisms, it may be possible to account for these kinds of findings more directly. Furthermore, research in cognitive psychology has already addressed a number of theoretical problems and issues that are
similar to the most important issues in speech research. Thus, theoretic principles from cognitive psychology may be adopted or modified to provide the foundation for a cognitive theory of speech perception. In particular, the central issue in speech research, the lack of invariance in mapping acoustic properties to phonetic categories, has been considered in cognitive psychology from the perspective of the role of variability in attention (Schneider and Shiffrin 1977), in learning categories (Posner and Keele 1968), in context sensitivity (Medin and Shaffer 1978), and in categorization (Barsalou 1987). This treatment of variability in cognitive psychology provides us with a different way of talking about the listener's expectations about the structure of speech (cf. Newell 1975); the focus of attention to various acoustic properties of different segments or words (cf. Nosofsky 1986), as well as how those properties are used in linguistic categorization (cf. Nosofsky 1987); the effort required for different types of perceptual processing and tradeoffs due to capacity limitations (cf. Navon and Gopher 1979); the formation and use of phonological and lexical categories (cf. Barsalou 1983; Smith and Medin 1981; Posner and Keele 1968), as well as how speech perception changes with linguistic experience (cf. Gibson 1969) and how different levels of perceptual and higher-level symbolic and linguistic processing may be related (cf. Bartlett 1932); and mechanisms that allow phoneme and word recognition without explicit rules (cf. Anderson et al. 1977). To summarize to this point, theoretical constructs that are central in cognitive psychology research may provide explanations for several findings in speech perception. Nusbaum and Henly (in press) have pointed out seven issues that theories of speech perception will have to address and have argued that these issues seem especially compatible with a theory based on cognitive principles:

1. How do listeners cope with the lack of invariance between pattern structure and the linguistic interpretation of that structure, which occurs at all levels of linguistic abstraction?
2. How do listeners use expectations to reorganize and reinterpret pattern properties?
3. How are listeners able to use context to change the recognition process?
4. How do adult listeners learn to exploit the systemic and structural properties of spoken language that can be used to direct recognition processing and comprehension?
5. Why is speech perception an attention-demanding process in spite of years of experience?
6. Under what conditions do demands on attention increase and decrease?
7. How does spoken language understanding develop?

Just how do the principles and theoretic constructs from cognitive psychology address these issues? According to the framework proposed by Nusbaum and Henly, speech perception is not a simple recognition process that maps acoustic properties onto linguistic categories, and development is not a simple process of determining which mappings are appropriate for a particular language. The very general nature of the problem of lack of invariance between pattern structure and linguistic interpretations in speech and language argues against that kind of approach.
Instead, speech perception and spoken language understanding together constitute a process of constraint satisfaction and hypothesis testing in which the particular constraints that are available and the hypotheses being tested vary dynamically with aspects of context and the communicative situation. In essence, listeners must dynamically direct attention to the properties of an utterance by using adaptive learning mechanisms to target those particular properties that are most informative about the linguistic
across mechanisms. One important way in which this view differs from the modal view of speech perception is in the implications for the form of the mental representation of linguistic categories, such as phonemes and words. In order to achieve the kind of dynamic responsiveness to fluctuations in constraints provided by context, linguistic categories cannot be represented by fixed and a priori specified property lists; some kind of alternative representation is needed. In almost all domains in which speech research is carried out, however, including psychology, engineering, linguistics, computer science, and audiology, linguistic categories are assumed to be represented by a rigid, denotational specification of pattern properties. Most psychological research on spoken language either explicitly or implicitly assumes that linguistic categories are represented by fixed prototypes (see Kuhl 1991; Li and Pastore 1992; Samuel 1982) or exemplars (Jusczyk, this volume) in which each phoneme or word has a category representation that represents the acoustic properties that specify a single best or average member of the category (for a prototype) or a collection of episodically experienced tokens of the category (for exemplars). In this regard, research on speech perception has begun to adopt some of the theoretic constructs of cognitive psychology. However, it is interesting that these particular constructs, namely prototype and exemplar representations, are known to have a number of limitations in accounting for categorization phenomena in cognitive psychology (see Medin 1989; Smith and Medin 1981) and, thus, are theoretically problematic. Moreover, it is precisely the inflexibility of prototype and exemplar models that is the problem for these models in cognitive psychology. Both prototype and exemplar models require a specific and a priori listing of all the properties that must be matched in order to determine membership in that category, thereby limiting the flexibility and context sensitivity of these representations. Nusbaum and Henly (1992, in press; also Nusbaum and DeGroot 1990), therefore, argue that the mental representation of linguistic units cannot consist of a list of features. Rather than selecting a subset of features, context may completely redefine the set of features that denote the presence of a particular linguistic unit. Further, not all the information that listeners use for recognition can be specified in terms of features. For example, lexical knowledge can be used by listeners as part of phoneme recognition (Ganong 1980), and it is difficult to think of this sort of complex knowledge as a feature of the stimulus to be matched. Similarly, the relationships among vowels in a particular talker's vowel space may be used by listeners during recognition of vowels (Gerstman 1968; Ladefoged and Broadbent 1957) but cannot be described as a simple set of features. If the features dynamically change with context, it is difficult to see how a valid feature specification can be determined a priori for representing a particular linguistic category across all the contexts and conditions under which it must be recognized. Moreover, if some of the information used by listeners to recognize speech is abstract or relational and cannot be specified as features, a different form of category representation is needed.
This form of representation needs to be able to generate the set of features that should be expected in any particular context since these cannot be determined prior to any particular utterance. Thus, this representation needs to be an abstract characterization of the kinds of features that can be expected under different conditions. Furthermore, beyond generating the possible feature sets needed for recognition, this representation has to be sufficiently abstract to encode other kinds of knowledge about categories that could be used in recognition (e.g., relational or functional information). Murphy and Medin (1985; Medin 1989) have proposed a way to think about the representation of abstract, conceptual categories that does not require a priori specification of a category as a feature list. They suggested that conceptual categories might be better thought of as theories that people have about the structure, nature, or function of those categories. Nusbaum and his colleagues (Nusbaum and DeGroot 1990; Nusbaum and Henly 1992, in press) have argued that this form of representation may provide a better model for representing linguistic categories as well. Although Murphy and Medin were not specific about what a theory is and how it should operate in categorization, we can use scientific theories as a metaphor for understanding how categories represented as theories might function.
A theory can be thought of as a collection of propositions or principles that specifies abstract knowledge about the category. These propositions may include information about the function of the category, an abstract specification of its form and how context operates on it, and its relationship to other linguistic knowledge. Using these abstract descriptions of a category, recognition of a particular utterance may be thought of as a process of testing hypotheses generated under the constraints of the actual communicative situation. Just as scientists test for the veracity of a theory by proposing specific hypotheses that can be tested, mental theories of linguistic categories may generate specific tests that determine the veracity of a particular category (e.g., a phoneme such as /b/ or a word such as dog) given the available evidence. A variety of interrelated tests at various linguistic levels may be applied depending on the context and the situation. For example, some tests for recognizing a specific phoneme might involve simple feature detections, whereas others might be based on determining more abstract properties of the linguistic structure of the utterance, such as determining if the set of context phonemes is consistent with a particular cohort of words. According to this model of categorization, then, recognition is a process of generating appropriate tests and then assessing the outcome of those tests. Of course, this testing process is a highly interactive and dynamically fluctuating process because the proposal of some hypotheses may direct attention to certain properties or organizations of the signal that were previously not processed. Processing these properties may actually change the set of candidate hypotheses being considered. The advantage in terms of flexibility is, of course, that the tests need not be stored ahead of time but can be computed as demands arise. However, this suggests that speech recognition and language understanding must necessarily be mediated by active, controlled processing (see Nusbaum and Schwab 1986). That is, speech perception should be an attention-demanding process in which the demands on attention fluctuate with the availability of constraints on the set of hypotheses being tested. Indeed, this portrayal of speech perception is precisely what was argued by Nusbaum and Schwab (1986). They outlined a framework for analyzing attention and attentional demands in speech perception and claimed that recognition should be thought of as a process of hypothesis testing in which the demands on capacity are determined by the size of the hypothesis space. When more hypotheses need to be considered, the demands on cognitive resources (e.g., working memory; see Baddeley 1986) are greater. This framework provides the basis for explaining why speech perception requires attentional resources. In speech, there is a many-to-many mapping between pattern properties and the linguistic interpretation of those properties. When there are more possible interpretations, attention will have to be distributed over a greater number of hypotheses, and demands on cognitive resources should be greater as well. Taking this view of attentional processing together with the view of representation as theories about categories yields the following framework for a theory of speech perception outlined by Nusbaum and Henly (in press). Speech perception is a problem of determining the correct linguistic interpretation of an utterance.
When a listener starts processing an utterance, there are few constraints on the set of possible interpretations. Thus, at all levels of linguistic analysis there is a great deal of uncertainty and a large number of possible hypotheses, so attentional demands will be great. The smallest number of hypotheses will exist at the level of interpreting phonemes because this linguistic unit has relatively few members since there are few, if any, higher-order constraints on interpretation without prior communicative context. As the listener begins to learn the vocal characteristics of the talker (and other situational characteristics like room acoustics), the phonetic hypothesis space becomes small, thereby reducing the capacity demands of phonetic recognition (see Nusbaum and Morin 1992). As more of the utterance is recognized, constraints on the phonetic hypothesis space will come from other sources, such as some knowledge about the linguistic structure of the utterance (see Newell 1975). With a smaller number of hypotheses to test, attention can be focused on those relevant properties that are the most informative. Thus, phonetic recognition will become less demanding as the listener processes more speech from one talker and from one conversation. Indeed, as listeners learn to recognize more accurately synthetic speech produced by a computer (Schwab, Nusbaum, and Pisoni 1985), they become much more efficient in using cognitive resources (Nusbaum and Lee 1993). The problem of lack of invariance is therefore not resolved either by specialized knowledge, as in motor theory (Liberman and Mattingly 1985), or by specialized, domain-specific processing mechanisms, as in TRACE (McClelland and Elman 1986). Rather, it is assumed that listeners are sensitive to the sources of lawful variability in speech (Elman and
McClelland 1986) and attend to these sources of information to constrain the hypothesis space. Thus, attention and constraint satisfaction both play roles in language perception. A listener's expectations constrain the hypothesis space from the top down, and linguistic context constrains the set of hypotheses from the bottom up. Both reduce uncertainty about the linguistic interpretation of an utterance, thereby increasing recognition speed and accuracy, and reducing the cognitive load of recognition. Taken as a whole, then, this view of speech perception rests on three general theoretical constructs: learning, attention, and constraint satisfaction. Speech perception may be a process of constraint satisfaction in which linguistic categories are recognized according to criteria that are based on represented theories of the linguistic structure and function of those categories. The specific constraints that specify the hypotheses for determining category membership may shift with factors that are responsible for the lack of invariance in speech, so listeners must dynamically shift attention from one set of hypotheses to another. Attentional demands fluctuate as attention is distributed over a dynamically changing set of hypotheses that must be evaluated from moment to moment in order to interpret the utterance. Given the adaptive mechanisms needed for learning structural constraints from context, it would not be surprising if adults are carrying out speech perception using the same adaptive mechanisms by which these constraints are learned during childhood.

The Development of Speech Perception as Skill Acquisition

By adopting a theoretical framework constructed from general principles of cognitive psychology, we can address speech perception and language comprehension with a common processing description. In other words, perhaps, a common set of principles can form a processing description that can constrain and explain the kinds of mechanisms that mediate spoken language processing. Again, this is not to say that one global mechanism underlies language recognition at all linguistic levels and ages; instead, general principles may be applied to specific domains and utilize changing linguistic knowledge. Rather than be tied to the specific details of phonetic structure, the more general principles of learning, attention, and constraint satisfaction apply equally well to problems of word recognition and sentence comprehension. Furthermore, by incorporating learning directly as a normative part of spoken language processing, this framework provides a natural way of accounting for the development of linguistic ability. If we follow this extension from normative adult processing to the development of that processing from childhood, we can make specific predictions about the process of language acquisition. For example, since it is possible to account for development without positing extrinsic developmental mechanisms, the mechanisms used in spoken language processing by children and adults should be the same; that is, for both children and adults, speech perception and language comprehension should be carried out by the dynamic specification of a set of hypotheses that, when tested, provide linguistic interpretation. Thus, for children and adults, the cognitive load of language processing should increase as a function of the uncertainty about (i.e., lack of constraints on) the set of possible linguistic interpretations.
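A minimal sketch of this load prediction may be useful; it reflects our reading of the framework rather than any published implementation, and the candidate words, constraint names, and the bits-based load index are all illustrative assumptions.

import math

def apply_constraints(hypotheses, constraints):
    # Filter the space of candidate interpretations as constraints become
    # available, reporting an index of momentary cognitive load.
    for label, keep in constraints:
        hypotheses = {h for h in hypotheses if keep(h)}
        load = math.log2(len(hypotheses)) if hypotheses else 0.0
        print(f"after {label}: {len(hypotheses)} hypotheses, load ~ {load:.1f} bits")
    return hypotheses

# Hypothetical candidate words for an ambiguous stretch of speech.
candidates = {"bat", "pat", "mat", "bad", "pad", "mad"}
apply_constraints(candidates, [
    ("talker's voicing cue", lambda w: w[0] == "b"),          # bottom-up constraint
    ("lexical/semantic context", lambda w: w.endswith("t")),  # top-down constraint
])

On this picture, the same filtering operation serves child and adult alike; what differs is how many constraints the listener has already learned to bring to bear, and therefore how quickly the load falls.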
If there is uncertainty about the talker or the linguistic context, cognitive load will be higher than when the listener, adult or child, can target the relevant constraints. However, if the mechanisms of processing language stay constant, what can account for the observed changes in performance with linguistic experience and maturation? There are certainly dramatic differences in the way children and adults use language. It is also true that there are huge differences in the knowledge that children and adults have about the structure and function of their language. Thus, it may be possible to account for these changes by taking the view that what is changing as a consequence of the experience of using language is the knowledge that the child has about language (see Goodman 1989, 1991, 1993). By applying our cognitive framework to language acquisition, the development of speech perception can be seen as a process of skill acquisition (cf. Logan 1988; Shiffrin and Schneider 1977). In fact, this is not a new proposal. The idea that language learning in adults and children is carried out by the same process of skill acquisition dates back to the late 1800s (Bryan and Harter 1898). Bryan and Harter studied the acquisition of telegraphy skills and reported particularly on
the acquisition by adults, over months, of the ability to recognize telegraphy. In learning to recognize telegraphy, operators first began to recognize the patterns corresponding to individual letters. As they learned the correspondence between patterns and letters, transcription performance improved until all the letters were learned. At this point, recognition performance reached a plateau for some period of time. Bryan and Harter claimed that during this plateau, letter recognition was becoming automatized, and the cognitive demands of letter recognition were being reduced until attention could be directed to whole words at the next level of linguistic abstraction. The recognition rate increased again as the operators learned to recognize words. Thus, Bryan and Harter described the acquisition of telegraphy as the acquisition of a "hierarchy of habits" in which one level was learned, then automatized, freeing up attention for learning on the next higher level. It is interesting to note that Bryan and Harter speculated on the generality of this description of learning, suggesting that it might apply to other communication skills, such as children learning to read braille. Their view was that this process of skill acquisition was not strictly limited to some peculiarity of telegraphy but might be a more general property of learning any skill, including language understanding, in which the skill may be defined in terms of a structural hierarchy of abstraction. In fact, this notion of learning to read as a process of skill acquisition has been proposed in more recent years as well (Evans and Carr 1985; LaBerge and Samuels 1974). It seems a reasonable extension to consider the development of speech perception as a process of skill acquisition as well. But what are the implications of applying this perspective to language acquisition? One of the central issues in skill acquisition is the notion that demands on attention during processing are a direct consequence of the amount of practice or experience in carrying out the processing. As we gain expertise in a skill, the demands on attention are reduced and performance often appears to become automatized (see Brown and Carr 1989; Logan 1985, 1989). This suggests that as children learn more about their native language, attentional demands should be reduced. Following Nusbaum and Schwab's (1986) description of learning, children acquiring language should start out distributing attention broadly over the speech signal as few constraints would exist on the possible linguistic interpretations. However, as more of the structure of language is acquired, attention could become focused on those properties that are most informative about the intended linguistic interpretation of an utterance (see also Billman 1989). Thus, the time course of attentional demands during language acquisition should be governed by the same factors that govern these demands in adults during the online recognition of an utterance. Demands for cognitive resources should fluctuate based on the availability of constraints on the space of hypotheses that must be tested to interpret correctly an utterance. For adult listeners, these constraints are manifest in the communicative situation and develop quickly as more of a conversation takes place. For children, these constraints are learned over the course of their experience with language, and if useful in reducing the hypothesis space, this structural knowledge is retained.
In much the same way, adults learning to recognize synthetic speech produced by a text-to-speech system learn the constraints peculiar to that system and retain them, even after just a few days of exposure, for up to six months (Schwab, Nusbaum, and Pisoni 1985). Consider, for example, perceptual learning of phonology (see Nusbaum and Lee 1992). The infant may be born with few a priori expectations about the way to distribute attention in analyzing the acoustic structure of speech. Experience listening to speech directs this attention toward those properties that are most relevant for the phonetic and phonological processing of speech in the infant's native language environment. This would be consistent with the data that suggest that infants are born with a universal phonetic inventory and that perceptual experience eliminates the nonnative contrasts from this inventory, maintaining the native contrasts alone. The apparent loss of nonnative contrasts over the first year is due to an active shift of attention away from those acoustic cues that are not phonologically informative. We suggest that the development of a phonological system is an active process of selecting relevant acoustic properties over the first year and then, over subsequent years, learning about the interaction of those properties with the use and meaning of language (see also Best, this volume; Jusczyk, this volume; Pisoni, Lively, and Logan, this volume; Werker, this volume).
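One way to picture this attention-shaping account is sketched below. It is a hypothetical illustration, not a model from any of the chapters cited: the cue names, learning rate, and update rule are our assumptions. The learner keeps a weight for each acoustic cue and nudges weights toward cues that prove phonologically informative in the ambient language, so uninformative cues end up with near-zero attentional weight without being erased from the inventory.

def reweight(weights, trials, step=0.1):
    # Shift attention toward cues that predict native contrasts; cues that
    # never help are progressively ignored rather than lost.
    for informative_cues in trials:
        for cue in weights:
            target = 1.0 if cue in informative_cues else 0.0
            weights[cue] += step * (target - weights[cue])
    total = sum(weights.values())
    return {cue: w / total for cue, w in weights.items()}  # attention distribution

# Suppose only VOT and F3 transitions are informative in the ambient language.
attention = {"VOT": 1 / 3, "F3 transition": 1 / 3, "duration": 1 / 3}
print(reweight(attention, [{"VOT", "F3 transition"}] * 50))

After exposure, "duration" retains almost no attentional weight, yet nothing has been removed from the system, which is why a nonnative contrast should remain relearnable, with difficulty, by redirecting attention.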
Indeed, there is evidence from second-language acquisition to support this notion: Japanese listeners actually attend to different acoustic cues in syllables that are not the ones that are functional for distinguishing /r/ from /l/ (Yamada and Tohkura 1992). The difficulty in relearning this contrast (cf. Pisoni, Lively, and Logan, this volume) may be a consequence of the difficulty in shifting attention to a set of acoustic properties that have been habitually ignored because they are not linguistically functional in the listener's native language (see Best, this volume; Nusbaum and Lee 1992). Similarly, children, faced with the task of understanding an utterance, are also learning about how that utterance fits into the linguistic knowledge they have already acquired. While we have argued that recognizing an utterance depends on using systematicity to constrain interpretations (Nusbaum and Lee 1992), Billman (1989) has argued that it is the systematicities in the structure of language that allow children to learn language. According to Billman, knowledge about one part of the linguistic system can be exploited as predictive of other parts of the system, and she has claimed that this may direct the child's attention to certain properties of an utterance. This process of constraining attention to properties that need to be learned is virtually identical to our description of the process of recognition as well. Of course, we should reiterate that although the process of constraining attention to relevant properties may be similar for learning phonological and syntactic systems, the specific mechanisms that implement these constraints are probably quite different and domain-specific. Thus, our view of psycholinguistic development as skill acquisition is a direct generalization of our description of adult language processing. In other words, the processing of language carried out by children and adults is carried out by mechanisms governed by the same cognitive principles. Psycholinguistic development is essentially a side effect of the normal operation of spoken language processing. What differs between the adult and child is the richness of their experiences with and their knowledge about their native language. For both children and adults, language understanding directly depends on learning to focus attention on the relevant, constraining, and informative properties of an utterance given a particular knowledge state. In summary, we suggest that learning linguistic structure may be understood better when viewed as the process of shaping the distribution of attention. This suggestion is consistent with proposals by several contributors to this volume (Jusczyk; Pisoni, Lively, and Logan). Indeed, Jusczyk's model (this volume) could represent one way of implementing these cognitive principles for word recognition and phoneme perception. We believe that the principles we have described will constrain the set of possible models for other levels of linguistic development and processing as well. In general, children and adults must attend to the structural properties of spoken language to resolve the lack of invariance between patterns and their linguistic interpretations. This is not something that occurs only during the child's acquisition of native language or in the context of laboratory experiments on speech perception.
Adults and children are continuously sensitive to the presence of information in utterances that can constrain the set of hypotheses needed to arrive at a linguistic interpretation. Thus, the shaping of the distribution of attention to the speech signal that occurs in perceptual learning during the development of speech perception and spoken language processing is also part of the normative process of speech perception in adult listeners.

Acknowledgments

Preparation of this chapter was supported, in part, by NIDCD grant DC 00601 to the University of Chicago. We would like to thank Jim Stigler and Janellen Huttenlocher for numerous discussions that influenced many of the ideas we cover in this chapter.

References

Abbs, J. H. and Sussman, H. M. (1971). Neurophysiological feature detectors and speech perception: A discussion of theoretical implications. Journal of Speech and Hearing Research, 14, 23-36.
Anderson, J. A., Silverstein, J. W., Ritz, S. A., and Jones, R. S. (1977). Distinctive features, categorical perception, and probability learning: Some applications of a neural model. Psychological Review, 84, 413-451.
Aslin, R. N. and Smith, L. B. (1988). Perceptual development. Annual Review of Psychology, 39, 435-473.
Aslin, R. N., Pisoni, D. B., and Jusczyk, P. W. (1983). Auditory development and speech perception in infancy. In M. M. Haith and J. J. Campos (eds.), Carmichael's manual of child psychology, vol. 2: Infancy and the biology of development (4th ed., pp. 573-687). New York: Wiley.
Au, T. K. (1990). Children's use of information in word learning. Journal of Child Language, 17, 393-416.
Au, T. K. and Glusman, M. (1990). The principle of mutual exclusivity in word learning: To honor or not to honor? Child Development, 61, 1474-1490.
Au, T. K. and Laframboise, D. E. (1990). Acquiring color names via linguistic contrast: The influence of contrasting terms. Child Development, 61, 1808-1823.
Au, T. K. and Markman, E. M. (1987). Acquiring word meanings via linguistic contrast. Cognitive Development, 2, 217-236.
Baddeley, A. D. (1986). Working memory. Oxford: Oxford Science Publications.
Barsalou, L. W. (1983). Ad hoc categories. Memory and Cognition, 11, 211-227.
Barsalou, L. W. (1987). The instability of graded structure: Implications for the nature of concepts. In U. Neisser (ed.), Concepts and conceptual development: Ecological and intellectual factors in categorization. New York: Cambridge University Press.
Bartlett, F. C. (1932). Remembering. Cambridge: Cambridge University Press.
Bates, E., Bretherton, I., and Snyder, L. (1988). From first words to grammar. Cambridge: Cambridge University Press.
Bates, E., Thal, D., Fenson, L., Whitesell, K., and Oakes, L. (1989). Integrating language and gesture in infancy. Developmental Psychology, 25, 1004-1019.
Billman, D. (1989). Systems of correlations in rule and category learning: Use of structured input in learning syntactic categories. Language and Cognitive Processes, 4, 12-155.
Bloom, L. (1970). Language development: Form and function in emerging grammars. Cambridge, Mass.: MIT Press.
Bolinger, D. (1989). Intonation and its uses: Melody in grammar and discourse. Stanford: Stanford University Press.
Bookbinder, J. and Osman, E. (1979). Attentional strategies in dichotic listening. Memory and Cognition, 7, 511-520.
Brown, R. (1973). A first language. Cambridge, Mass.: Harvard University Press.
Brown, T. L. and Carr, T. H. (1989). Automaticity in skill acquisition: Mechanisms for reducing interference in concurrent performance. Journal of Experimental Psychology: Human Perception and Performance, 15, 686-700.
Bryan, W. L. and Harter, N. (1898). Studies of the telegraphic language: The acquisition of a hierarchy of habits. Psychological Review, 6, 345-376.
Burke, D. M. (1976). Developmental changes in speech perception: Accuracy and response time for identifying words in noise. Unpublished doctoral dissertation, Columbia University.
Carden, G., Levitt, A. G., Jusczyk, P. W., and Walley, A. (1981). Evidence for phonetic processing of cues to place of articulation: Perceived manner affects perceived place. Perception and Psychophysics, 29, 26-36.
Carey, S. and Bartlett, E. (1978). Acquiring a single new word. Papers and Reports on Child Language Development, 15, 17-29.
Chomsky, N. (1957). Syntactic structures. The Hague: Mouton.
Chomsky, N. (1968). Language and mind. New York: Harcourt Brace.
Clark, E. V. (1983). Meanings and concepts. In J. Flavell and E. Markman (eds.), Handbook of child psychology, vol. 3. New York: Wiley.
Cole, R. A. and Perfetti, C. A. (1980). Listening for mispronunciations in a children's story: The use of context by children and adults. Journal of Verbal Learning and Verbal Behavior, 19, 297-315.
Connine, C. M. (1987). Constraints on interactive processes in auditory word recognition. Journal of Memory and Language, 16, 527-538.
Connine, C. M. and Clifton, C., Jr. (1987). Interactive use of lexical information in speech perception. Journal of Experimental Psychology: Human Perception and Performance, 13, 291-299.
Dechovitz, D. R. (1979). An effect of syntax on the perceptual integration of segmental features. In J. J. Wolf and D. H. Klatt (eds.), Speech communication papers presented at the 97th meeting of the Acoustical Society of America (pp. 319-322). New York: Acoustical Society of America.
Devaney, R. L. (1986). An introduction to chaotic dynamical systems. Menlo Park, Calif.: Benjamin/Cummings.
Eimas, P. D. (1978). Developmental aspects of speech perception. In R. Held, H. Leibowitz, and H. L. Teuber (eds.), Handbook of sensory physiology, vol. 8: Perception (pp. 357-374). New York: Springer-Verlag.
Eimas, P. D., Siqueland, E. R., Jusczyk, P., and Vigorito, J. (1971). Speech perception in infants. Science, 171, 303-306.
Elliott, L. L., Hammer, M. A., and Evan, K. E. (1987). Perception of gated, highly familiar spoken monosyllabic nouns by children, teenagers, and older adults. Perception and Psychophysics, 42, 150-157.
Elman, J. and McClelland, J. (1986). Exploiting lawful variability in the speech wave. In J. S. Perkell and D. H. Klatt (eds.), Invariance and variability in speech processes. Hillsdale, N.J.: Erlbaum.
Evans, M. A. and Carr, T. H. (1985). Cognitive abilities, conditions of learning, and the early development of reading skill. Reading Research Quarterly, Spring 1985, 327-350.
Fernald, A. (1984). The perceptual and affective salience of mothers' speech to infants. In L. Feagan, C. Garvey, and R. Golinkoff (eds.), The origins and growth of communication. Norwood, N.J.: Ablex.
Fernald, A. (1985). Four-month-old infants prefer to listen to motherese. Infant Behavior and Development, 10, 181-195.
Fodor, J. A. (1983). The modularity of mind. Cambridge, Mass.: MIT Press.
Fodor, J. A. and Pylyshyn, Z. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28, 3-71.
Fox, R. A. (1984). Effects of lexical status on phonetic categorization. Journal of Experimental Psychology: Human Perception and Performance, 10, 526-540.
Frauenfelder, U. H. (1992). The interface between acoustic-phonetic and lexical processing. In M. E. H. Schouten (ed.), The auditory processing of speech: from sounds to words (pp. 325-338). Berlin: Mouton de Gruyter.
Ganong, W. F. (1980). Phonetic categorization in auditory word perception. Journal of Experimental Psychology: Human Perception and Performance, 6, 110-125.
Gelman, R. and Baillargeon, R. (1983). A review of some Piagetian concepts. In J. Flavell and E. Markman (eds.), Handbook of child psychology, vol. 3 (P. Mussen, series ed., pp. 179-193). New York: Wiley.
Gerstman, L. J. (1968). Classification of self-normalized vowels. IEEE Transactions on Audio and Electroacoustics, AU-16, 78-80.
Gibson, E. J. (1969). Principles of perceptual learning and development. New York: Appleton-Century-Crofts.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
Goldin-Meadow, S., Nusbaum, H. C., Garber, P., and Church, R. B. (1993). Transitions in learning: Evidence for simultaneously activated strategies. Journal of Experimental Psychology: Human Perception and Performance, 19, 92-107.
Goodman, J. C. (1989). The development of context effects in spoken word recognition. Unpublished doctoral dissertation, The University of Chicago.
Goodman, J. C. (1991). The development of context effects in spoken word recognition. Paper presented at the biennial meeting of the Society for Research in Child Development, Seattle, Wash., April.
Goodman, J. C. (1993). The development of context effects in spoken word recognition. Submitted for publication.
Greenspan, S. L., Nusbaum, H. C., and Pisoni, D. B. (1988). Perceptual learning of synthetic speech produced by rule. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 421-433.
Grosjean, F. and Gee, J. P. (1987). Prosodic structure and spoken word recognition. Cognition, 25, 135-155.
Guzman, A. (1968). Decomposition of a visual scene into three-dimensional bodies. Proceedings of the AFIPS Fall Joint Computer Conference, 33, 291-304.
Hadding-Koch, K. and Studdert-Kennedy, M. (1964). An experimental study of some intonation contours. Phonetica, 11, 175-185.
Hirsh-Pasek, K., Kemler Nelson, D. G., Jusczyk, P. W., Wright Cassidy, K., Druss, B., and Kennedy, L. (1987). Clauses are perceptual units for young infants. Cognition, 26, 269-286.
Huttenlocher, J. and Smiley, P. (1987). Early word meanings: The case of object names. Cognitive Psychology, 19, 63-89.
Hyams, N. (1987). The theory of parameters and syntactic development. In T. Roeper and E. Williams (eds.), Parameter setting (pp. 1-22). New York: Reidel.
Jusczyk, P. W. (1982). Auditory vs. phonetic coding of speech signals during infancy. In J. Mehler, E. C. T. Walker, and M. Garrett (eds.), Perspectives on mental representation: Experimental and theoretical studies of cognitive processes and capabilities (pp. 361-367). Hillsdale, N.J.: Erlbaum.
Jusczyk, P. W. (1986). Towards a model for the development of speech perception. In J. S. Perkell and D. H. Klatt (eds.), Invariance and variability in speech processes (pp. 1-19). Hillsdale, N.J.: Erlbaum.
Jusczyk, P. W. (in press). How talker variation affects young infants' perception and memory of speech. In J. Charles-Luce, P. A. Luce, and J. R. Sawusch (eds.), Spoken language processing. Norwood, N.J.: Ablex.
Jusczyk, P. W. and Thompson, E. J. (1978). Perception of a phonetic contrast in multisyllabic utterances by two-month-old infants. Perception and Psychophysics, 23, 105-109.
Jusczyk, P. W., Copan, H., and Thompson, E. (1978). Perception by 2-month-old infants of glide contrasts in multisyllabic utterances. Perception and Psychophysics, 24, 515-520.
Kemler Nelson, D. G., Hirsh-Pasek, K., Jusczyk, P. W., and Cassidy, K. W. (1989). How the prosodic cues in motherese might assist language learning. Journal of Child Language, 16, 55-68.
Klatt, D. H. (1979). Speech perception: A model of acoustic-phonetic analysis and lexical access. Journal of Phonetics, 7, 279-312.
Klatt, D. H. (1986). Comment on "Exploiting lawful variability in the speech wave." In J. S. Perkell and D. H. Klatt (eds.), Invariance and variability in speech processes (pp. 381-382). Hillsdale, N.J.: Erlbaum.
Kuhl, P. K. (1991). Human adults and human infants show a perceptual magnet effect for the prototypes of speech categories; monkeys do not. Perception and Psychophysics, 50, 93-107.
LaBerge, D. and Samuels, S. J. (1974). Toward a theory of automatic information processing in reading. Cognitive Psychology, 6, 293-323.
Ladefoged, P. and Broadbent, D. E. (1957). Information conveyed by vowels. Journal of the Acoustical Society of America, 29, 98-104.
Lakoff, G. (1987). Women, fire, and dangerous things: What categories reveal about the mind. Chicago: The University of Chicago Press.
Langacker, R. (1986). Foundations of cognitive grammar, vol. 1. Stanford: Stanford University Press.
Langendoen, D. T. and Postal, P. M. (1984). The vastness of natural language. Oxford: Blackwell.
Langley, P. and Simon, H. A. (1981). The central role of learning in cognition. In J. R. Anderson (ed.), Cognitive skills and their acquisition (pp. 361-380). Hillsdale, N.J.: Erlbaum.
Li, X. and Pastore, R. (1992). Evaluation of prototypes in perceptual space for a place contrast. In M. E. H. Schouten (ed.), The auditory processing of speech: From sounds to words (pp. 309-314). Berlin: Mouton de Gruyter.
Liberman, A. M. and Mattingly, I. G. (1985). The motor theory of speech perception revised. Cognition, 21, 1-36.
Liberman, A. M., Cooper, F. S., Harris, K. S., and MacNeilage, P. F. (1962). A motor theory of speech perception. In Proceedings of the speech communication seminar, vol. 2. Stockholm: Royal Institute of Technology.
Liberman, A. M., Cooper, F. S., Shankweiler, D. P., and Studdert-Kennedy, M. (1967). Perception of the speech code. Psychological Review, 74, 431-461.
Logan, G. D. (1979). On the use of a concurrent memory load to measure attention and automaticity. Journal of Experimental Psychology: Human Perception and Performance, 5, 189-207.
Logan, G. D. (1985). Skill and automaticity: Relations, implications, and future directions. Canadian Journal of Psychology, 39, 367-386.
Logan, G. D. (1988). Toward an instance theory of automatization. Psychological Review, 95, 492-527.
Marslen-Wilson, W. (1989). Access and integration: Projecting sound onto meaning. In W. Marslen-Wilson (ed.), Lexical representation and process (pp. 3-24). Cambridge, Mass.: MIT Press.
Marslen-Wilson, W. and Welsh, A. (1978). Processing interactions and lexical access during word-recognition in continuous speech. Cognitive Psychology, 10, 29-63.
Massaro, D. W. (1987). Speech perception by ear and eye: A paradigm for psychological inquiry. Hillsdale, N.J.: Erlbaum.
McClelland, J. L. and Elman, J. L. (1986). The TRACE model of speech perception. Cognitive Psychology, 18, 1-86.
Medin, D. L. (1989). Concepts and conceptual structure. American Psychologist, 44, 1469-1481.
Medin, D. L. and Schaffer, M. M. (1978). Context theory of classification learning. Psychological Review, 85, 207-238.
Mehler, J., Bertoncini, J., Barriere, M., and Jassik-Gerschenfeld, D. (1978). Infant recognition of mother's voice. Perception, 7, 491-497.
Miller, J. L. and Dexter, E. R. (1988). Effects of speaking rate and lexical status on phonetic perception. Journal of Experimental Psychology: Human Perception and Performance, 14, 369-378.
Minsky, M. (1963). Steps toward artificial intelligence. In E. A. Feigenbaum and J. Feldman (eds.), Computers and thought. New York: McGraw-Hill.
Moray, N. (1969). Listening and attention. Baltimore: Penguin.
Morgan, J. L. (1986). From simple input to complex grammar. Cambridge, Mass.: MIT Press.
Murphy, G. L. and Medin, D. L. (1985). The role of theories in conceptual coherence. Psychological Review, 92, 289-316.
Navon, D. and Gopher, D. (1979). On the economy of the human processing system. Psychological Review, 86, 214-255.
Newell, A. (1975). A tutorial on speech understanding systems. In D. R. Reddy (ed.), Speech recognition. New York: Academic Press.
Nosofsky, R. M. (1986). Attention, similarity, and the identification-categorization relationship. Journal of Experimental Psychology: General, 115, 39-57.
Nosofsky, R. M. (1987). Attention and learning processes in the identification and categorization of integral stimuli. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13, 87-108.
Nusbaum, H. C. (1981). Capacity limitations in phoneme perception. Unpublished doctoral dissertation, State University of New York at Buffalo.
Nusbaum, H. C. and DeGroot, J. (1990). The role of syllables in speech perception. In M. S. Ziolkowski, M. Noske, and K. Deaton (eds.), Papers from the parasession on the syllable in phonetics and phonology. Chicago: Chicago Linguistic Society.
Nusbaum, H. C. and Henly, A. S. (1992). Constraint satisfaction, attention, and speech perception: Implications for theories of word recognition. In M. E. H. Schouten (ed.), The auditory processing of speech: From sounds to words (pp. 339-348). Berlin: Mouton de Gruyter.
Nusbaum, H. C. and Henly, A. S. (in press). Understanding speech perception from the perspective of cognitive psychology. In J. Charles-Luce, P. A. Luce, and J. R. Sawusch (eds.), Spoken language processing. Norwood, N.J.: Ablex.
Nusbaum, H. C. and Lee, L. (1992). Learning to hear phonetic information. In Y. Tohkura, E. Vatikiotis-Bateson, and Y. Sagisaka (eds.), Speech perception, production, and linguistic structure (pp. 265-273). Amsterdam: IOS Press.
Nusbaum, H. C. and Lee, L. (1993). The effects of learning on attentional demands in perception of synthetic speech. In preparation.
Nusbaum, H. C. and Morin, T. M. (1992). Paying attention to differences among talkers. In Y. Tohkura, E. Vatikiotis-Bateson, and Y. Sagisaka (eds.), Speech perception, production, and linguistic structure (pp. 113-134). Tokyo: OHM Publishing.
Nusbaum, H. C. and Schwab, E. C. (1986). The role of attention and active processing in speech perception. In E. C. Schwab and H. C. Nusbaum (eds.), Pattern recognition by humans and machines, vol. 1: Speech perception. New York: Academic Press.
Ohala, J. J. and Jaeger, J. J. (1986). Introduction. In J. J. Ohala and J. J. Jaeger (eds.), Experimental phonology. New York: Academic Press.
Osherson, D. N., Stob, M., and Weinstein, S. (1986). Systems that learn. Cambridge, Mass.: MIT Press.
Peterson, G. and Barney, H. (1952). Control methods used in a study of the vowels. Journal of the Acoustical Society of America, 24, 175-184.
Piaget, J. (1975). The equilibration of cognitive structures. Chicago: The University of Chicago Press.
Pinker, S. (1984). Language learnability and language development. Cambridge, Mass.: Harvard University Press.
Pinker, S. and Prince, A. (1988). On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition, 28, 73-193.
Pisoni, D. B., Aslin, R. N., Perey, A. J., and Hennessy, B. L. (1982). Some effects of laboratory training on identification and discrimination of voicing contrasts in stop consonants. Journal of Experimental Psychology: Human Perception and Performance, 8, 297-314.
Plunkett, K. and Marchman, V. (1991). U-shaped learning and frequency effects in a multi-layered perceptron: Implications for child language acquisition. Cognition, 38, 43-102.
Posner, M. I. and Keele, S. W. (1968). On the genesis of abstract ideas. Journal of Experimental Psychology, 77, 353-363.
Remez, R. E., Rubin, P. E., Pisoni, D. B., and Carrell, T. D. (1981). Speech perception without traditional speech cues. Science, 212, 947-950.
Rumelhart, D. and McClelland, J. (1986). On learning the past tenses of English verbs. In J. McClelland, D. Rumelhart, and the PDP Research Group (eds.), Parallel distributed processing: Explorations in the microstructure of cognition, vol. 2: Psychological and biological models. Cambridge, Mass.: MIT Press.
Samuel, A. G. (1982). Phonetic prototypes. Perception and Psychophysics, 31, 307-314.
Samuel, A. G. (1986). The role of the lexicon in speech perception. In E. C. Schwab and H. C. Nusbaum (eds.), Pattern recognition by humans and machines, vol. 1: Speech perception. New York: Academic Press.
Schneider, W. and Shiffrin, R. M. (1977). Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review, 84, 1-66.
Schwab, E. C., Nusbaum, H. C., and Pisoni, D. B. (1985). Some effects of training on the perception of synthetic speech. Human Factors, 27, 395-408.
Smith, E. E. and Medin, D. L. (1981). Categories and concepts. Cambridge, Mass.: Harvard University Press.
Soja, N. N., Carey, S., and Spelke, E. S. (1990). Ontological categories guide young children's inductions of word meaning: Object terms and substance terms. Cognition, 38, 179-211.
Stevens, K. N. and Blumstein, S. E. (1981). The search for invariant acoustic correlates of phonetic features. In P. Eimas and J. Miller (eds.), Perspectives on the study of speech. Hillsdale, N.J.: Erlbaum.
Stevens, K. N. and Halle, M. (1967). Remarks on analysis by synthesis and distinctive features. In W. Wathen-Dunn (ed.), Models for the perception of speech and visual form (pp. 88-102). Cambridge, Mass.: MIT Press.
Tees, R. C. and Werker, J. F. (1984). Perceptual flexibility: Maintenance or recovery of the ability to discriminate nonnative speech sounds. Canadian Journal of Psychology, 38, 579-590.
Thelen, E. (1990). Dynamical systems and the generation of individual differences. In J. Colombo and J. W. Fagen (eds.), Individual differences in infancy: Reliability, stability, and prediction. Hillsdale, N.J.: Erlbaum.
Thelen, E. (1992). Development as a dynamic system. Current Directions in Psychological Science, 1, 189-193.
Walley, A. C. (1987). Young children's detections of word-initial and -final mispronunciations in constrained and unconstrained contexts. Cognitive Development, 2, 145-167.
Walley, A. C. and Metsala, J. L. (1990). The growth of lexical constraints on spoken word recognition. Perception and Psychophysics, 47, 267-280.
Walley, A. C., Smith, L. B., and Jusczyk, P. W. (1986). The role of phonemes and syllables in the perceived similarity of speech sounds for children. Memory and Cognition, 14, 220-229.
Waltz, D. (1975). Understanding line drawings of scenes with shadows. In P. H. Winston (ed.), The psychology of computer vision. New York: McGraw-Hill.
Weisstein, N. (1969). What the frog's eye tells the human brain: Single cell analyzers in the human visual system. Psychological Bulletin, 72, 157-175.
Werker, J. F. and Lalonde, C. E. (1988). Cross-language speech perception: Initial capabilities and developmental change. Developmental Psychology, 24, 672-683.
Werker, J. F. and Tees, R. C. (1983). Developmental changes across childhood in the perception of non-native speech sounds. Canadian Journal of Psychology, 37, 278-286.
Werker, J. F., Gilbert, J. H. V., Humphrey, K., and Tees, R. C. (1981). Developmental aspects of cross-language speech perception. Child Development, 52, 349-353.
Wexler, K. and Culicover, P. (1980). Formal principles of language acquisition. Cambridge, Mass.: MIT Press.
Wood, C. C. and Day, R. S. (1975). Failure of selective attention to phonetic segments in consonant-vowel syllables. Perception and Psychophysics, 17, 346-350.
Yamada, R. A. and Tohkura, Y. (1992). Perception of American English /r/ and /l/ by native speakers of Japanese. In Y. Tohkura, E. Vatikiotis-Bateson, and Y. Sagisaka (eds.), Speech perception, production, and linguistic structure (pp. 155-174). Amsterdam: IOS Press.
Index
A Abbs, J. H., 317
Abramson, A. S., 39, 94-95, 101, 121, 127-128, 138, 172, 185 Accent effect of age of acquisition, 59, 97, 170 effect of first-language phonology, 109-110, 170 phonetic representations for L2, 110 Acoustic-phonetic information extracted from syllable-sized units of analysis, 246-247 as perceptual primitives, 175-176 reorganization of, 250, 256-259 represented as dimensional structure, 19-22, 24-25, 155-157, 248 Acoustic-phonetic invariance, lack of and lawful variability, 305, 322, 324-325 need for theoretical principles to resolve, 306, 317-319 passive mechanisms and, 317 and perceptual constancy, 232-233, 239-241 perceptual learning and, 19, 320, 325-329 search for invariance, 301 as a specific case of pattern/interpretation variability, 306-308, 320 as a theoretical problem, 305, 319 Age of acquisition. See Language acquisition, delayed Aitchinson, J., 182 Aks, D. J., 21 Allen, G., 289 Analysis-by-synthesis, 307-308 Anderson, J. A., 320 Anisfeld, M., 72 Articulatory gestures, as a constraint on perception, 14 Aslin, R. N., 3-4, 94-96, 107, 112, 114, 121, 123, 125, 131, 154, 156, 172, 175, 186, 228, 230, 231, 245, 254, 300, 309, 312 Attention capacity limitations and speech perception, 240-241, 319, 320, 323-324 capacity limitations and speech production, 280 directed to acoustic properties of speech, 130-131 and distortion of psychological space, 155-157, 248-249
as an explanation of difficulty of second-language learning, 20, 22, 145 as an explanation of loss of phonetic sensitivity, 19-20, 122, 125, 139, 154, 156-157, 249, 254-255, 328 factors in infant speech perception studies, 243 focus of, 19-22, 141, 150, 233, 238-241, 248, 303, 313, 318, 320, 324-325, 327-329 and perceptual learning, 7, 19, 24-25, 154, 320, 325 problems with explanations based on, 22-25 in relation to computational mechanisms, 323-324 role of stressed syllables in directing, 240, 253 shifts of, 19-20, 25 Attunement theory of perceptual development, 124-126 Au, T. K., 300
B Bach, M. J., 72 Baddeley, A. D., 324
Baer, T., 40, 43, 47 Bahrick, L. E., 177 Bailey, P. J., 250 Baillargeon, R., 311 Baker, G., 3, 274 Baldwin, D., 113 Barney, H., 305 Barsalou, L. W., 319 Bartlett, E., 300 Bartlett, F. C., 320 Bates, E. A., 10, 19-21, 272, 273, 300, 301 Baudouin de Courtnay, J., 256 Bauer, H., 274, 280 Bellugi, U., 62 Berman, R., 228
Bertoncini, J., 228, 230, 235-237, 243, 250, 257 Best, C. T., 6-7, 9, 13, 16-18, 20, 23, 94-95, 100, 102-103, 107-108, 127, 130, 140-142, 153-154, 158, 167-168, 172-179, 189-191, 196, 198-202, 205, 230-231, 254-255, 309, 312, 328 Bever, T. G., 258 Bhaskararo, P., 96 Biemiller, A., 72 Bilinguals capacity limitations in, 23 perceptual bias of dominant language, 7, 22, 60 phoneme perception in, 10 segmentation strategies of, 22-23 Spanish-English classification of voicing, 250 Billman, D., 303, 327, 328 Black, J. W., 94 Blasdell, R., 11 Bley-Vorman, R., 84 Bloom, L., 274, 281, 300-301 Blumstein, S. E., 8, 37, 96, 307 Bolinger, D., 306 Bond, Z. S., 234 Bookbinder, J., 319 Bowerman, M., 272-273 Braida, L. D., 138 Bretherton, I., 10, 300-301 Briere, E., 170 Broadbent, D. E., 322 Broen, P., 235 Bromberger, S., 256 Brooks, L. R., 251, 252 Browman, C. P., 175, 205, 211 Brown, R., 240, 273-274, 300, 308 Brown, T. L., 327 Bryan, W. L., 326-327
Buchwald, S. E., 250 Bull, D., 239 Bundy, R. S., 239 Burke, D. M., 300 Burki-Cohen, J., 10 Burnham, D. K., 107, 143, 193 Butterfield, E., 96
C Cairns, G., 96 Capacity limitations. See Attention Carden, G., 250, 319 Carey, S., 300 Carney, A. E., 95, 101, 103, 131, 144, 172, 190, 242, 247 Carr, T. H., 327 Carrell, T. D., 44-46, 101, 135, 172, 249 Carter, D. B., 98 Carter, D. M., 234 Categorical perception, 121, 130-131, 247, 256 Categories flexibility in, 319, 321-322 formation of, 232, 251, 258, 319, 325 and prototypes vs. exemplars, 23-25 representation of, 23, 321 of sound patterns, 128, 229, 231, 249, 254 theories as a form of, 25, 322-323, 325 Chalkley, M., 273 Changeux, J. P., 231 Charles-Luce, J., 272 Chomsky, N., 256, 302 Church, K., 234, 236, 254 Clark, E. V., 5, 273, 300
Clark, H., 273 Clark, S. E., 248 Clements, G. N., 257 Clifton, C., 303 Coffey, S. A., 17 Cohen, L., 110 Cohen, M. M., 14 Cohort theory, limitations of, 304, 307 Cole, R. A., 234, 300 Columbo, J., 57-58, 239 Connine, C. M., 303 Constraint satisfaction, 21, 253 Context-coding mode of response, 138 Context-dependent perception. See also Acoustic-phonetic invariance page_340 Page 341 Context-dependent perception. (continued) acoustic-phonetic mappings for, 23-25, 305, 317-318, 322 in adults, 38-40, 158 in animals, 44 development of, 23, 25 in infants, 22-25, 41-43 in learning nonnative contrasts, 151, 157-158 need for general computational theory of, 303, 306-308, 318-319 origin of, 25, 40-41, 42-43 principles of, 25, 317-325 of similarity, 157 of sinewave analogs, 44-46 underlying perceptual mechanisms, 43-48, 251-252 in word recognition and sentence understanding, 306 Cooper, J., 58
Cooper, R. P., 231 Cooper, W. E., 234 Copan, H., 240, 300 Coppieters, R., 59, 78, 84 Coulson, S., 273 Craik, F. I. M., 252 Critical period. See also Language acquisition, delayed compared to learning theory, 22, 311 effects on phonological processing, 15, 84-86 exposure to phonetic contrasts during, 172 implications of second-language learning for, 19, 58, 59-60 implications of sign language acquisition for, 14-15, 58 Lenneberg's theory of, 57-58 social isolation during, 58, 60-61 and specialized neurological structures, 6, 17-18 Cross-linguistic comparisons, 121, 128, 138, 140, 227, 236-238 Cross-modal studies of speech perception, 179-180 Culicover, P., 302 Cunningham, C. C., 239 Curtiss, S., 61 Cutler, A., 7, 10-12, 15-16, 22-23, 60, 234, 254 Cziko, G., 72
D Dale, P. S., 231 Davis, A., 58 Davis, K., 61 Day, R. S., 319 Deafness, and spoken language acquisition, 78-79 deBoysson-Bardies, B., 60 DeCasper, A. J., 238, 253 Dechovitz, D. R., 303
DeGroot, J., 25, 303, 318-319, 322 Dekle, D. J., 175, 179 Delk, M., 62 Dell, G. S., 69, 84 Dent, C., 178, 180 Derrah, C., 242-243, 257 Devaney, R. L., 314 Developmental reorganization of perception, 153-155, 168, 174 alternative hypotheses for, 94, 107, 110-111, 171-172, 193-196, 213n2 cognitive mediation of, 19-22, 110-111 and early word-learning, 8, 185-186 ecological approach to, 178-180, 182 evidence of, 60, 96-100, 110, 171-172, 185-188 infant patterns of assimilation, 168, 196-203 and linguistic function of speech, 180, 188 role of articulatory gestures, 168, 201-203 sensory loss vs. perceptual attunement, 20, 98, 101, 114, 122, 156-157, 167, 172-173, 231, 249-250 simultaneous linguistic and cognitive advances, 110, 187-188 units of perception, 188-189, 195, 253-254, 294 Dexter, E. R., 303 Diehl, R. L., 46, 48, 175-176, 181, 250 Discrimination ABX task, 132-136 AX task, 143-144, 146 vs. categorization, 228 oddity task, 140 role in development of lexicon, 228-229 training procedure, 134 Dissosway-Huff, P., 141, 150 Dittman, S., 102, 103, 121-122, 137, 141-148, 157, 172-173 Dorman, M. F., 250
Doughtie, E. B., 12 Dukes, K. D., 234 Dupous, E., 114 du Preez, P., 11 Durlach, N. I., 138
E Echols, C. H., 289 Ecological theory of speech perception based on articulatory gestures, 175-178, 180, 182, 190, 203-204 evidence from cross-modal studies, 179-180 evidence from production, 210-211 perception-production link, 13-14, 180-182, 203-204 Eichen, E. B., 6, 75, 84 Eilers, R. E., 95-96, 101, 125, 183-186, 198 Eimas, P. D., 3-4, 13, 20, 41-46, 52, 94, 96, 121-124, 130, 140, 158, 172, 184, 230, 231, 233, 245, 300, 309 Elliott, L. L., 79, 300 Elman, J. L., 5, 7, 20-21, 250, 304-307, 316-317, 324 Engebretson, A. M., 51 Enns, J. T., 21 Episodic trace, 251 Evan, K. E., 300 Evans, M. A., 327
F Fant, G., 175 Farwell, C. B., 10 Faulconer, B. A., 252 Feature detectors, 123, 130 Felzen, E., 72
Ferguson, C. A., 10, 154, 158, 188, 195 Fernald, A., 12, 234-239, 253, 294, 300, 313 Fey, M. E., 205 Fifer, W. P., 238, 253 Fischer, S. D., 6, 63, 69, 71 Flege, J. E., 59, 109-110, 154, 167, 170-172, 190, 212, 247 Fletcher, K. L., 59 Flowers, C., 58 Fodor, J. A., 144, 178, 258, 301, 304, 308, 316 Fowler, C. A., 13, 46, 168, 175-181 Fox, R. A., 303 Fraser, C., 240, 274 Frauenfelder, U. H., 318 Frederici, A. D., 114 Freeman, R., 58 Friedlander, B. Z., 239 Fromkin, V., 61, 256 Functors children's omission of, 272, 273-281, 288-293 children's perception of, 275-276, 278, 280, 293 children's segmental representation of, 281-284, 294 prosodic and syntactic properties of, 272, 273 role in children's syntactic analysis, 275-276, 280, 281, 293, 294 and segmentation of fluent speech, 284-288
G Galaburda, A. M., 52 Ganong, W. F., 10, 25, 303, 322 Gans, S. J., 44, 45, 46 Gardner, B. T., 182 Gardner, R. A., 182 Garner, W. R., 155
Garnes, S., 234 Garnica, O. K., 235 Garrett, M. F., 258, 280 Gavin, W. J., 125, 186 Gee, J. P., 234, 304 Gelman, R., 311 Gelman, S. A., 274 Generalized Context Model, 248-249 Gerken, L. A., 3, 8, 11, 240, 273-276, 279-280, 285, 289, 291, 309 Gerstman, L. J., 306, 322 Gibson, E. J., 173, 319 Gibson, J. J., 13, 173, 175-178, 182, 212, 317 Gillette, S., 141, 153, 172 Gleitman, L. R., 11, 240, 257, 274 Glenn, S. M., 239 Glusman, M., 300 Goldinger, S. D., 153 Goldin-Meadow, S., 312, 315 Goldsmith, J., 247, 253 Goldstein, L., 175, 205, 211 Goodell, E., 189, 205 Goodman, J. C., 7-8, 21, 25, 249, 300, 326 Gopal, H. S., 306 Gopher, D., 319 Goto, H., 94, 140, 153, 171-172 Gottlieb, G., 107, 114, 123-124 Greenberg, J. H., 273 Greenspan, S. L., 234, 303 Grieser, D. A., 190, 194 Grosjean, F., 7, 10, 234, 304 Guzman, A., 302
H Hadding-Koch, K., 307 Halle, M., 8, 253, 256, 304, 307 Hammer, M. A., 300 Harter, N., 326-327 Hawkins, S., 284, 289 Hayes, B., 253 Head-turn procedure, 95 Heidmann, T., 231 Henly, A. S., 25, 303, 305-306, 317-320, 322, 324 High-amplitude sucking procedure, 42, 242, 244 Hintzman, D. L., 251-253, 258 Hirsch-Pasek, K., 8, 234-237, 300, 313 Hoefnagel-Hohle, M., 94, 97 Hoosain, R., 252 Hubel, D. H., 123 Humphries, T., 62 Hurlburt, M. S., 8, 25 Huttenlocher, J., 300 Hyams, N., 291, 308
I Infant babbling, 60, 188 Ingram, D., 274, 284 Insabella, G., 202
J Jacoby, L. L., 251-252 Jaeger, J. J., 302 Jakimik, J., 234
Jakobson, R., 256 Jamieson, D. G., 102, 122, 144, 146 Jenkins, J., 20, 94, 121, 131, 134, 137-139 Jensema, C., 62 Jensen, P., 11 Johnson, J. S., 59 Joyce, P. F., 239 Jusczyk, A. M., 244 Jusczyk, P. W., 3-10, 19, 23-24, 41, 46, 49, 95, 111, 114, 123, 154, 155-156, 175-176, 183, 186, 212, 228, 230-232, 235, 237, 239-245, 248-251, 255, 257-258, 271-274, 294, 300, 309, 311-313, 321, 328-329
K Karzon, R. G., 240 Katz, B., 3, 274 Katz, D., 79 Keele, S. W., 153, 319 Kemler Nelson, D. G., 8, 11, 12, 228, 235, 300 Kennedy, L. J., 244 Kewley-Port, D., 207, 208 Keyser, S. J., 257 Kimball, J. P., 273 Kirsner, K., 252 Klatt, D. H., 123, 234, 245-246, 256, 304-307, 317 Klein, R. E., 94, 95, 140, 184, 230, 254 Klima, E., 62 Kluender, K., 175-176, 181 Koluchova, J., 60-61 Konishi, M., 50 Krashen, S., 59 Kuhl, P. K., 4, 7, 10, 14, 23-25, 44, 95, 12 140, 154, 179, 182-183, 190, 194, 228, 231-232, 235, 239, 319, 321
L
LaBerge, D., 327 Ladefoged, P., 96, 199, 322 Laframboise, D. E., 300 LAFS a developmental version of, 245-246 limitations of, 307-308 temporal organization in, 257 Lakoff, G., 302 Lalonde, C. E., 6-8, 100, 104, 110-112, 127, 158, 168, 188, 231, 242, 249, 254, 256, 258, 300 Landau, B., 240, 273-276, 279, 285, 289 Lane, H., 3, 62, 142 Langacker, R., 302 Langendoen, D. T., 302 Langley, P., 318 Language hierarchical organization of, 6, 271 species-specific, 183 Language acquisition and bilingualism, 22-23 and cognitive mechanisms, 21-22 by deaf during childhood, 22, 78-79 distinguished from development of speech perception, 300-301, 308-315 importance of early linguistic experience for, 14-15 and levels of linguistic knowledge, 8-9 research strategies, 300 role of functors in, 273, 284, 294 role of prosody in, 235 similarities between phonological and syntactic processing in, 19-21 as skill acquisition, 153, 325-329 stage view of, 271-272, 293, 308-315 Language acquisition, delayed. See also Critical period
page_343 Page 344 Language acquisition (cont.) case of Genie, 61 in deaf children, 61-62, 78-79 effect on cognitive skills, 76-77 effect on first vs. second language, 22, 80-83, 86, 247 effect on linguistic structure, 84, 85 effect on phonological processing, 74, 84-85 effect on sign language processing, 75-78 and efficiency of processing, 76, 83 magnitude of effects, 78, 80-81 and verbal performance, 60 Lasky, R. E., 94, 95, 140, 184, 230, 254 Lazarus, J. H., 130, 133, 172, 190 Lee, L., 303, 324, 328 Lenneberg, E. H., 6, 57, 86, 97 Leonard, L. B., 205 Levine, S. L., 58 Levitt, A., 174, 183 Lexical acquisition and acoustic variability, 233 and development of phonological representations, 24 effects of prosody on, 11, 239-241 and segmentation, 227 from speech perception capacities, 245-255 Lexical processing errors predict comprehension, 70-71 semantic and phonological, 65-69 in spoken language processing, 72-73 and stage of language development, 73-74 Lexical representation
in delayed language acquisition, 22 perceptual properties encoded into, 159, 249, 251 as phonemes vs. syllables, 244-245 and phonotactic knowledge, 236-238 sound patterns associated with meanings, 229, 251 in young children, 271-272, 294n1 Li, X., 319, 321 Liberman, A. M., 3-5, 8, 13, 37-48, 121, 141, 175-178, 181, 190, 230, 233, 304-307, 316, 324 Liberman, I. Y., 257 Liberman, M., 247, 253 Liddell, S. K., 62 Lieberman, P., 182 Lindblom, B., 189, 195, 241 Linguistic knowledge adaptive use of, 317-325 constraining shifts of attention, 20 development of, 23-25, 309, 313-314 as different in children and adults, 308-309 distinguished from perceptual knowledge, 301-308 of late first- vs. second-language learners, 22-24, 85 providing perceptual organization of acoustic signal, 19, 247-248 represented as categories, 22-25, 319, 321-323 of syllable structures, 227, 237-238, 257 used in segmentation, 227, 234-238 of vowel spaces, 23 Linguistic mode of processing, 105-106, 248 Lisker, L., 39, 94-95, 101, 121, 127-128, 137, 138, 172, 185 Lively, S. E., 6-7, 10, 14-23, 102, 145-146, 167, 172-173, 176, 247, 249, 309, 311, 313, 328-329 Locke, J. L., 207 Logan, G. D., 252, 326-327 Logan, J. S., 6-7, 10, 14-24, 102, 104, 106, 111, 113, 145-146, 153, 167, 172-173, 176, 190, 247, 249, 309, 311, 313, 328329 Long, M., 59
Lowenthal, K., 170 Luce, P., 114, 123, 272
M McClaskey, C. L., 101, 135, 172, 249 McClelland, J. L., 5, 7, 304-307, 316-317, 324-325 McCune, L., 188 McCutcheon, M., 170 MacDonald, J., 14, 179 McDonough, L., 8 McGowan, R. S., 189 McGurk, H., 14, 179 McGurk effect, 179-180 MacKain, 94-95, 102-103, 114, 141-142, 153, 167, 172-173, 179, 184, 186, 193 Macken, M. A., 188, 195, 271, 284 McLeod, P. J., 239 MacNamara, J., 3, 274 MacNeilage, P., 189, 195 McRoberts, G. W., 6-7, 16, 17, 100, 107-108, 127, 130, 140, 154, 158, 167-168, 172, 174-179, 191, 196-199, 230-231, 254-255 MacWhinney, B., 19, 21
Mann, V. A., 106, 172, 179 Manner of articulation infant categorization of, 41-43 and syllable duration, 39-40 and syllable structure, 45 transition durations in, 38-40 Maratsos, M., 273 Marchman, V., 311 Markman, E., 5, 300
Marr, D., 307 Marslen-Wilson, W. D., 5, 8, 304 Martin, C. S., 152-153 Mason, M. K., 61 Massaro, D. W., 14, 179, 317 Mattingly, 3-8, 13, 37, 43, 47, 175-178, 181, 304, 307, 316, 324 Maturational theory of perceptual development, 125-127 Mayberry, 6-7, 14-17, 22-23, 63, 69, 71, 75, 78, 81-84 Medin, D. L., 25, 319-323 Mehler, J., 114, 174, 185, 234-238, 253, 300 Meier, R. P., 273 Meltzoff, A., 14, 179 Meluish, E., 238 Memory echoes between secondary and primary, 253 of infants for phonetic information, 243-244 for lexical knowledge, 229, 251 multiple-trace models of, 252 for phonetic categories, 24, 138 selective nature of, 241 short-lived descriptive patterns, 247 for talker information, 23, 245 Menn, L., 3, 6, 10, 188, 195, 271-272, 284 Mental representations of linguistic entities. See also Linguistic knowledge as abstract, 23, 181 as categories, 23-25, 319, 321-323 as emergent gestural patterns, 180 of sound patterns, 158, 229, 241 Menyuk, P., 10 Metsala, J. L., 300 Miller, J. D., 51, 127, 140, 231 Miller, J. L., 10, 13, 25, 39, 40-49, 123-124, 230, 233, 245, 303, 309
Mills, D. L., 17, 18 Mills, M., 238 Minerva 2, 252 Minifie, F. D., 183 Minsky, M., 302 Miyawaki, K., 94-95, 140, 172-173 Mochizuki, M., 140-142, 150, 153, 172 Modularity functional vs. computational, 304, 315 and language, 178, 182 of speech and language, 301-308 stages of development as, 308-315 Molfese, D., 58 Moore, J. M., 95-96, 101, 183, 186, 198 Morais, J., 257 Morgan, J. L., 235, 273, 301 Morin, T. M., 319, 324 Morosan, D. E., 102, 104, 113, 122, 144, 146 Morrongiello, B., 190 Morse, P. A., 231 Morton, J., 8 Moskowitz, B. A., 159, 205 Motor theory basic tenets of, 13, 43, 175-177, 178, 181-182 limitations of, 304, 307-308, 316-317 specialized knowledge in, 324 and rate normalization, 43, 47 Mullennix, J. W., 152, 232, 239, 245 Murphy, G. L., 25, 322-323
N Nabelek, A. K., 102
Nakatani, L. H., 234 Navon, D., 319 Nelson, K., 11 Nelson, L., 274, 280 Neural plasticity. See also Sensory loss lack of, 17-18, 121, 125, 130 recovery from aphasia, 57-58 Neurophysiological mechanisms of perception, 13, 49-52, 130 Neville, H. J., 17 Newell, A., 319, 324 Newhoff, M., 205 Newport, E. L., 15-18, 22, 59, 75, 84, 273, 289 Niccols, A., 72 Nittrouer, S., 189 Nonnative contrast sensitivity. See also Developmental reorganization of perception and allophonic status, 142 and assimilation type, 196-202
changes in infancy, 60, 99-100, 111-113, 168, 173-174, 185-187
Nonnative contrast sensitivity (cont.) in early infancy, 96, 183-185, 213n1, 242 effect of native phonology on, 109, 153, 171-172 failures in training, 121, 137 importance of testing procedures, 94-95, 103, 136-139, 140-149, 172 improvements with training, 97, 101-103, 127, 153, 172 limitations in adult, 14, 96-97, 99, 102 a longitudinal study of, 100 loss of, 20, 122, 137, 153, 156-157, 231, 249, 328 and modes of processing, 104-106, 111-113 perceptual assimilation model, 16-17, 107-108, 190-196, 200 in preadolescent children, 97-98
residual sensitivity in adults, 103-106, 172, 249 by second-language learners, 102, 109-110, 171, 172 theories of discriminability, 107-110 transfer and generalization, 135, 138, 143-147, 151-153, 157 of VOT boundary in infancy, 41, 96, 184-185 WRAPSA, and explanation of decline by, 254-255 Nonnative contrasts, production of. See Accent Norris, D., 11, 15, 234 Nosofsky, R. M., 19, 143, 155-157, 248, 251, 319 Nozza, R. J., 239 Nusbaum, H. C., 7, 21, 25, 155, 234, 249, 303-306, 317-324, 327, 328 Nygaard, L. C., 23
O Ohala, J. J., 302 Oiler, D. K., 95, 184, 186, 188 O'Rourke, P., 47 Osgood, C. E., 252 Osherson, D. N., 309 Osman, E., 319 Oyama, S., 59, 78, 170
P Paccia-Cooper, J., 234 Padden, C., 44 Palermo, D., 58 Pastore, R., 3, 319-321 Patte, P., 231 Patterson, C. J., 98 Pegg, J. E., 110, 113 Penfield, W. G., 57 Perception and production parallels and ecological approach, 180, 204
examples of, 47 Perception of auditory nonspeech events, 177 Perceptual assimilation model. See Nonnative contrast sensitivity, perceptual assimilation model Perceptual learning of acoustic patterns as meaningful signals 19, 121 and attentional mechanisms, 18-22 and changes in attentional focus, 7, 122, 173, 320, 323-324, 327-328 and constraint satisfaction, 21 as a developmental theory, 19-21, 325-329 as dimensional stretching and shrinking 19-21, 154, 248-249 and early experience, 123-127 effectiveness of training procedures, 101-103, 115n2 generalization of, 101, 102, 135, 138 147, 151-153, 157 of Hindi contrasts, 97, 103 of lexical structure, 227 limitations on, 14-17, 22, 151 mechanisms of, 123-125, 328 performance relative to native speak 102, 103 of /r/-/l/ distinction by Japanese listeners 102, 140-153 by selection, 123, 231 as a solution to lack of invariance, 318-321, 324-325 and structural changes in the brain, 17-18 and telegraphy, 326-327 theory of perceptual development, 124-126, 328 of VOT contrasts, 101, 127-139, 230 Perceptual magnet effect, 25 Perceptual mechanisms for speech perception. See also Neurophysiological mechanisms of perception articulation-based, 13-14 based on cognitive principles, 318-325 biological constraints on, 310
as distinct from psycholinguistic mechanisms, 299, 301, 303, 309
Page 347 Perceptual mechanisms for speech perception (cont.) as dynamic, 7, 25 importance of developmental theory for explaining, 309-312 learning in, 311 mechanisms of auditory analysis, 246-247 motor theory, 43, 47-48 (see also Motor theory) need for computational principles of, 304-308 pattern extraction and interpretation, 247-250 pattern matching approaches in, 317 phonological, 104-106, 111-113 progress on, 37-38, 48 role in relation to psycholinguistic mechanisms, 302 similarity to mechanisms of development, 4-5 simplifying assumptions about, 302 specialized or general auditory, 43-46, 110-111, 177-178 window of analysis in, 317-318 Perceptual primitives of speech perception, 174-176, 246-247 Perceptual processing and constraint satisfaction, 21 effect of early linguistic experience on, 6-7, 14-17 interactions between linguistic levels, 6, 8-10, 25-26 Perfetti, C. A., 300 Perlmutter, D. M., 62 Peters, A., 281-282, 284, 294 Peterson, G., 305 Phoneme perception in bilinguals, 10 as focus of past research, 4 identification functions in, 132-136 and lexical acquisition, 9 lexical influences on, 8, 10, 25, 253, 303, 318, 322
modified by expectations, 250 and phonotactic constraints, 236-238 in spite of talker variability, 146-147, 239 in stressed vs. unstressed syllables, 239-240 Phonemes as analytic abstractions, 258 definition of, 169 developing representations of, 23-24, 256-259 language-specific realization of, 169-170 and reading, 257 vs. syllables as initial representation of speech sounds, 24, 230, 244-245, 253-254 Phonemic restoration effect, 253 Phonetic environment, 141-142, 145-146, 150-151, 158 Phonological assimilation model. See Nonnative contrast sensitivity, perceptual assimilation model Phonological development. See also Developmental reorganization of perception controlled vs. automatic processing, 74, 84-85, 252 emerges from early lexical development, 6, 9, 23, 159, 228 evidence from toddlers' productions, 204-211 nativist vs. empiricist views of, 123 in older infants, 108, 202-203 and reading ability, 257 role of articulatory constraints, 204-205, 210-211 role of attention in, 241, 254-259 of segmental knowledge, 9, 111-113, 188-189, 195, 271-272 similarity to acquisition of syntax, 19 stages of, 113, 203 of vowel categories, 10 WRAPSA, 9, 254-259 Phonological processes in children's productions, 280, 288, 289-293 cross-linguistic considerations, 227-228 Piaget, J., 311
Pilon, R., 12 Pinker, S., 272-273, 308-311 Pisoni, D. B., 3-7, 10, 14-23, 44-46, 101102, 107, 114, 121-125, 130-133, 135, 138, 141, 144-146, 150-157, 167, 172176, 186, 190, 228-234, 239, 242, 245-249, 300, 303, 309-313, 319, 324, 328-329 Plunkett, K., 311 Polka, L., 102-103, 108-109 Port, R., 141, 150 Posner, M. I., 153, 319 Postal, P. M., 302 Potter, M. C., 252 Pragmatic processes, in children's productions, 280-281, 288-293 page_347 Page 348 Premack, D., 182 Preston, M. S., 207-208 Prince, A., 247, 253, 311 Processing interactions, between levels of linguistic structure, 6, 8-9, 23-26, 304, 315 Prosody acquisition of, 13 in child-directed speech, 235, 238-239 of Dutch and English, 238 effects on perceptual processing, 10-13 extracted from syllable-sized units of analysis, 247, 253 and focus of attention, 10, 12 infant sensitivity to, 231, 273, 294 and infant sensitivity to linguistic units, 11-12, 235 lack of invariance in, 306 and language development, 11 - 13, 235 limitations of motor theory for explaining perception of, 317 limitations of TRACE for explaining perception of, 307 metrical patterns of children's productions, 289-293 role of functors in conveying, 272, 273 and segmentation of linguistic units, 227, 234-236, 253, 313-314
stress patterns, 227, 231, 234, 238-240, 247, 253-254 Psychoacoustic theory of speech perception description of, 174-178 problems with, 179-180, 180-181 Psycholinguistic mechanisms computational similarities to speech perception, 21, 303 critical period in development of, 311 differences between children and adults, 300-301, 309 learnability of, 19-22, 309-310, 315 independence from speech processing mechanisms, 301-308 and invariance in linguistic processing, 317 Pye, C., 274 Pylyshyn, Z., 316
R Rate-dependent processing. See Speaking rate Rate normalization according to motor theory, 43, 47 general auditory mechanisms for, 43-46 and infant speech perception, 233 mechanisms of, 305 Rate of sign articulation, and age of acquisition, 77 Rawlings, B., 62 Reddy, D. R., 234 Remez, R. E., 240, 273- 279, 285, 289, 319 Repp, B. H., 3-4, 103 Robson, R., 190 Rose, S. P. R., 49 Rosenblum, L., 175 Roth, E. M., 252 Rozin, P., 257 Rumelhart, D., 311
S Samuel, A. G., 7, 242, 247, 253, 319, 321 Samuels, S. J., 327 Sawusch, J. R., 246 Scarcella, R., 59 Schaffer, J. A., 234 Schein, J., 62 Schneider, B. A., 239 Schneider, W., 319, 326 Schwab, E. C., 155, 317, 319, 323-324, 327-328 Scovel, T., 84 Second-language acquisition age of onset, 59, 170 attention to semantics, 17 effects of first language on, 7, 60, 109-110 implications for critical period hypothesis, 59-60 problems with attentional explanation of, 22 relation of production to perception, 171 Segmentation of fluent speech determined by native language, 15-16 development of, 159, 234-238 and the development of lexical knowledge, 227 phonotactic constraints on, 227, 236-237 as prerequisite for word recognition, 228 role of prosody in, 11, 227, 235-236, 253-254 use of functors by toddlers, 284-288 Segui, J., 114 Seigler, R., 311 Selkirk, E. O., 247, 253 Semantic context, and children's identification of ambiguous words, 8 page_348
Page 349 Sensory loss, 94, 125 evidence against, 93, 101-103, 106, 172-173 vs. perceptual reorganization, 20, 97-98, 114, 122, 153, 156-157, 167, 231, 249, 254-255 Sentence partitioning (parsing), use of functors for, 281 Shaffer, M. M., 319 Shattuck-Hufnagel, S., 256 Shea, S. L., 186 Sheldon, A., 94, 141, 171 Shiffrin, R. M., 144, 319, 326 Shin, H. J., 248 Shipley, E., 274 Shoben, E. J., 252 Sign language late first- and second-language learners of, 78-79 and oral educational policy, 62 Pidgin Sign English (PSE), 69-70 Sign language processing. See also Lexical processing errors and age of acquisition effects, 75-78 comparing late first- and second-language learners, 81-83 correlated with speech discrimination, 80 effect of experience on, 63-65, 70-71 Simon, H. A., 318 Sinewave analogs categorization of, 44-46 effects of expectations on perception of, 250 Singh, J. A., 60 Singh, S., 94 Sithole, N. N., 6-7, 16-17, 107-108, 127, 130, 140, 154, 158, 167-168, 172, 191, 196, 199, 230-231, 254-255 Skuse, D. H., 60 Slobin, D. I., 11 Smiley, P., 300
Smith, C., 274 Smith, E. E., 319, 321 Smith, L. B., 10, 23, 257, 300, 309 Smith, M., 175 Smith, N. V., 274, 289 Smith, S., 170 Snow, C. E., 94, 97, 235 Snyder, L., 10, 300-301 Social isolation in childhood case of Genie, 61 implications for critical period, 60-61 Soja, N. N., 300 Sommers, M. S., 23 Speaking rate, effect on manner perception, 39-40, 53n2 Speech errors of children, 72-73, 289 as evidence of phonological representations, 256 Speech perception in animals, 44, 127, 182 infant and adult testing procedures, 95, 197-198 role of speech production in, 13-14 Speech perception, infant. See also Developmental reorganization of perception and articulatory gestures, 178-179 as context-dependent (rate-dependent), 41-43, 233 discrimination studies, 230 distinguished from speech perception in older children, 300 language-universal ability, 60, 168, 173-174, 183-185, 213nl, 227, 249, 308-309 listening preferences in, 238-239 phonetically relevant categorization, 41, 100, 183 of prosody, 11-12, 235, 273, 294 sensitivity to talker differences, 232-233 of sinewave analogs, 46
units of perception, 9, 12, 253-254, 256-259 and visual articulatory information, 14, 179 of vowels, 10, 23-24 of weak syllables, 274 Speech perception, toddler of reduced vowels, 278-279 and utterance complexity, 295n6 of weak syllables, 276 Speech production and articulatory-gestural sensitivity, 204-205, 210-211 errors, 72-73, 256, 289 function word omission, 272, 273-281, 289-293 imitation of target utterances, 274-279, 283, 291-293 metrical processes in, 289-293, 295n7 of multisyllabic words, 289 perception-production link, 180-181, 203-204 pragmatic processes in, 280-281, 288-293
predominance of content words, 273
Speech production (cont.) and resource (capacity) limitations, 280 segmental accuracy of, 274, 281-282, 285, 288 sentential subject omission, 289, 291-293 and utterance complexity, 274-276, 280, 288 weak syllable omission, 273-274, 276, 280, 289-293, 295n7 Spelke, E. S., 300 Spoken language acquisition by deaf during early childhood, 78-79 Spoken language understanding learnability of mechanisms for, 19-26, 309-310 as a skill, 252, 299, 313, 315, 325-329 spreading activation model of, 84
theoretical assumptions about, 299-315 Spring, D. R., 231 Stark, R. E., 188 Stemberger, J. P., 72 Stevens, E. B., 44 Stevens, K. N., 8, 37, 96, 128, 304, 307 Stob, M., 309 Strange, W., 20, 94-95, 101-103, 107, 121-122, 131, 134, 137-148, 153, 157, 167, 171-173 Streeter, L. A., 94-95, 130, 140, 184-185, 193, 230, 254 Stress patterns, 227, 231, 238-240 Studdert-Kennedy, M., 5, 13, 123, 154, 158-159, 175, 177, 189, 195, 212, 228, 255, 307 Suga, N., 49-51 Summerfield, A. Q., 250 Sussman, H. M., 49-51, 317 Swinney, D. A., 12 Syntactic analysis, children's and learning, 19, 21 role of functors in, 273, 275-276, 280, 294 role of prosody in development of, 235 Synthetic speech stimuli children's imitations of, 277-278, 279, 283, 289, 295nn5, 7 cross-language research using, 100, 104, 111, 186, 187 discrimination training using, 101 problems with VOT continua, 185 Syrdal, A., 306 Syrdal-Lasky, A., 94-95, 140, 184, 230, 254
T Tahta, S., 170 Takata, Y., 102 Talker normalization and formation of linguistic categories, 232
and infant perception, 232, 239 mechanisms of, 305-306 and perceptual learning, 324 and vowel spaces, 322 and word recognition, 152 Tash, J., 130, 242, 247 Taylor, M., 274 Tees, R., 6-8, 17, 98-100, 103, 111, 114, 127, 140, 167-168, 172-173, 193, 231, 242, 249, 254, 312 Terbeek, D. A., 156 Terrace, H. S., 182 Thelen, E., 312 Theories of speech perception descriptions of, 174-178, 181-182 developmental constraints on, 5 failure of, 48-49 general auditory, 43-46, 110-111, 179-180, 180-181 motor theory, 13, 43, 47, 175-177, 178, 181-182 progress on, 37-38, 48 relevance of neurophysiological mechanisms to, 13, 48-52 requirements of, 4-5, 10, 18 Thompson, E. J., 231, 240, 274, 300 Tohkura, Y., 328 Toyota, H., 72 TRACE limitations of, 305-307, 316 mechanisms in, 305, 324 Transitions in development importance of, 310-312 mechanisms for producing, 309, 312, 314 and stages of development, 313-315 U-shaped curves in, 311 Trehub, S. E., 94-95, 172, 183, 230, 239, 254
Treiman, R., 257 Trubetzkoy, N., 256 Tulving, E., 251 Turvey, M. T., 181 Tversky, A., 157 Tyler, L. K., 5, 8
U Underwood, B. J., 72 Universal theory of perceptual development, 123-126 Uttal, W. R., 49