Cognition, 6 (1978) 175-187

Phonemic effects in the silent reading of hearing and deaf children*

JOHN L. LOCKE
Institute for Child Behavior and Development, Champaign, Illinois
After I come back from school, my mother taught me how to read... she used both loud voice and clear lip movement... I was taught to form letter when read. Learn to read comes from orating. (Gibson, Shurcliff and Yonas, 1970, p. 68.)
Abstract

Twenty-four deaf and hearing children silently read a printed passage while crossing out all detected cases of a pre-specified target letter. Target letters appeared in phonemically modal form, a category loosely analogous to "pronounced" letters (e.g., the g in badge), and in phonemically nonmodal form, a class which included "silent" letters and those pronounced in somewhat atypical fashion (e.g., the g in rough). Hearing children detected significantly more modal than nonmodal forms, an expected pronunciation effect for individuals in whom speech and reading ordinarily are in close functional relationship. The deaf detected exactly as many modal as nonmodal letter forms, provoking the interpretation that deaf children, as a group, do not effectively mediate print with speech. The deaf also were relatively unaffected by grammatical class, while hearing subjects were considerably more likely to detect a target letter if it occurred in a content word than in a functor term. Questions pertaining to reading instruction in the deaf are discussed.
It is known that speech is important to reading (cf., Kavanagh, 1968; Kavanagh and Mattingly, 1972) but it is not clear whether this is so mostly during the years in which the child learns to read or, persistently, the seconds in which the adult works his way from letters to meaning.

*This work was supported by the National Institute of Child Health and Human Development through Grants HD-53445 and HD-45951. The deaf subjects were tested in 1973 when the author was a Research Fellow in the Department of Psychology at Yale University. The author is indebted to the Mystic Oral School for the Deaf for furnishing subjects and space, to J. Lowden for composing the passage and running the deaf subjects, to L. Locke for assistance in the grammatical categorization of target words and to R. Conrad for comments on an early version of the manuscript. Requests for reprints should be addressed to John L. Locke, Institute for Child Behavior and Development, University of Illinois, 51 Gerty Drive, Champaign, Illinois 61820.
It is apparent, though, that most proficient readers do something phonemic in their silent reading. One thing readers do, according to electromyographic studies (Hardyck and Petrinovich, 1970; McGuigan, 1970; Locke, 1971), is instruct the appropriate speech muscles to act - as if an audible pronunciation were forthcoming - though in the end there rarely is any visible or audible articulatory movement. Edfelt (1960) reports that nearly every reader does this ordinarily or can be induced to subvocally pronounce merely by decreasing the familiarity of the material or the clarity of the print. A second phonological effect in silent reading is observed when the reader takes longer to recognize or react to printed words (or, significantly, digits) whose "names" contain more syllables than other items of identical graphemic length (Eriksen, Pollack and Montague, 1970; Klapp, 1971). A third phonemic effect in silent reading is revealed in the easy detection of rhyme (e.g., where sew-dough is chosen over orthographically similar pairs that do not rhyme, such as tour-hour) or the reader's semantic resolution of anomalous word strings (e.g., where WEAK ANT MAY KIT becomes we can't make it), both processes apparently involving auditory-phonetic imagery.

That silent reading is phonemically sensitive is rather nicely illustrated by the several experiments of Corcoran (1966, 1967; Corcoran and Weening, 1968) and of MacKay (1968), involving two slightly different tasks. In one, the reader searches through a prose passage for words spelled incorrectly; some are misspelled in a phonetically compatible way (e.g., hurd for heard), others in an incompatible fashion (e.g., borst for burst). In the other task, the reader identifies all instances of a predesignated letter (such as k) which in the test passage occurs in pronounced (e.g., keeping) and silent (e.g., knitting) forms. These studies show that readers are more likely to detect a misspelling if it is phonetically incompatible than if it is compatible, and are more likely to detect a letter if it is pronounced than if it is silent.

Though experiments on readers' detections of incorrectly spelled words clearly show phonemic effects, their results would be reinforced by a finding that writers' generations of misspellings also are phonetically constrained. In this connection, there is an interesting naturalistic study (Sears, 1969) which describes the spelling errors identified by the publications department of an aerospace company over a one-year period. Of the 100-plus errors, over 92 percent were of the phonetic type (e.g., murge, priar), proving to the author that "engineers spell acoustically".

If silent reading were aided by phonetic processing, one might expect the deaf to read very poorly since their speech knowledge characteristically is limited. The expectation would be correct: surveys generally place the reading of 15-year-olds at about the third grade level, whether educated in the United
States or Great Britain (Conrad, 1977). However, much of the linguistic basis for this may be due to deficits of syntax and vocabulary as well as phonology. There are several studies which suggest that many deaf children, unlike the hearing, may be more closely oriented to graphemes as visual stimuli than to the phonemes they conventionally represent or imply (Blanton, Nunnally, and Odom, 1967; Blanton and Odom, 1968; Locke and Locke, 1971; Conrad, 1972a, 1973; Wallace and Corballis, 1973). On the basis of this, one might be tempted to posit a functional link between the tendency to recode graphemes phonologically and the ability to derive meaning from print. However, such an assumption would be too hasty for several reasons: (1) as mentioned, it has not been proved that phonological recoding is indispensable to silent reading, (2) there are more speech coders among the deaf than there are good readers (cf., Conrad, 1972) and (3) the coding studies cited earlier were experiments in which deaf children had to recall serially unrelated strings of briefly exposed uppercase letters. Not approached was the question now asked here: when a deaf child goes from print to meaning, does he work through sound?

Whether necessary or not, the hearing child is likely to consult the system of phonemes with which graphemes typically are linked. This is a possibility in the deaf child as well, at least if his speech is well developed and internalized, and the grapheme-phoneme correspondences are well known. Even given these contingencies, though, there would be no obvious benefit from phonological conversions unless the deaf child's semantic system was phonologically based, surely a daring assumption in most cases. Alternatives, of course, are that the stages of print processing do not include a phonological recoding but instead rely upon some other system, perhaps one of shape cues or of implicit fingerspelling responses. These possibilities might be hard to identify, and it would be premature here to set down the experimental logic for doing so. Instead, this paper reports an attempt to determine whether the deaf are as phonemic in their silent reading as the hearing.

In the experiment reported below, deaf and hearing children read a prose passage and in the process crossed out all instances they could find of certain designated letters. Where Corcoran's (1966) letters were pronounced or silent, in the present experiment letters appeared in words whose pronunciations rendered the letters phonemically modal (e.g., the h in ahead = /h/) or phonemically nonmodal (e.g., the h in phone = /f/), a distinction to be elaborated later. It was assumed that letter detections would be greater in the modal cases only if the readers went from letter to phoneme and knew the operative pronunciation rules. If, however, the reader's search was guided mainly by nonphonemic considerations, it was assumed the modal-nonmodal differences would largely be irrelevant, and that a letter's detectability might be influenced instead by its orthographic environment - in some unknown
ways, if at all - or by the linguistic class and information burden of the word in which it appeared.
Method

As mentioned above, this experiment studied the detectability of letters pronounced, because of their orthographic environment or because of custom, in a phonemically modal or nonmodal way. Phonemically modal forms are the particular instances of a letter in which the pronunciation involves a phoneme typically associated with the letter's name. For example, where g commonly is understood to have "soft g" (i.e., /dʒ/) and "hard g" (i.e., /g/) variants, words such as rag and rage qualify as phonemically modal items. Nonmodal forms are cases in which a letter's pronunciation involves phonemes not typically associated with the letter's name. For example, the same letter, g, functions as a nonmodal form in words like rough (where g = /f/), ring (where g = /ŋ/) and right (where g is "silent", a phonemic zero). The modal-nonmodal distinction permitted the use of a greater number of letters, words and grammatical categories - and the creation of a somewhat more coherent passage - than would have been possible, or as easily achievable, with the pronounced and silent categories.
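On these definitions, classifying a target-letter occurrence reduces to comparing the phoneme it actually represents in a given word against the set of phonemes conventionally tied to the letter's name. A minimal sketch of that logic, built only from the paper's own examples; the lookup table is illustrative rather than the study's materials, and the ASCII labels stand in for IPA symbols:

```python
# Phonemes conventionally associated with each target letter's name;
# for g these are the "hard" /g/ and "soft" /dZ/ variants.
MODAL_PHONEMES = {"c": {"/s/", "/k/"}, "g": {"/g/", "/dZ/"}, "h": {"/h/"}}

# Illustrative mapping: (word, letter) -> phoneme the letter represents.
# "zero" marks a silent letter, which the paper counts as nonmodal.
REALIZED = {
    ("rage", "g"): "/dZ/", ("rough", "g"): "/f/",
    ("ring", "g"): "/N/",  ("right", "g"): "zero",
    ("ahead", "h"): "/h/", ("phone", "h"): "/f/",
}

def category(word: str, letter: str) -> str:
    """Classify one occurrence of `letter` in `word` as modal or nonmodal."""
    return ("modal" if REALIZED[(word, letter)] in MODAL_PHONEMES[letter]
            else "nonmodal")

assert category("rage", "g") == "modal"
assert category("rough", "g") == "nonmodal"
assert category("right", "g") == "nonmodal"   # silent letters are nonmodal
```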
Materials

The prose passage used in the letter detection task was one-and-a-half pages in length, double-spaced, and it contained 472 words. The passage was typed with an IBM Selectric typewriter - whose element bore standard sans serif type - and was photocopied for distribution. The passage was composed by the deaf subjects' reading teacher, who limited her vocabulary to words she knew were familiar to her students. Since she was constrained further by the need to include certain target words, the result was less than a literary masterpiece. A sample paragraph, the second of nine, conveys something of the story's style, content and readability:

Suddenly the phone rang. The cheerful little girl was so glad. She rushed to the kitchen to answer it. It was her grouchy mother. The next long call was a wrong number. Enough of that!

There were three target letters in the passage, c, g and h. Each letter occurred in modal and nonmodal form in nouns, verbs, adjectives and functors (adverbs, prepositions, pronouns, conjunctions, determiners and participles). An attempt was made to distribute target letters across initial, medial and final word positions as much as possible without violating pronunciation and
grammatical constraints. Table 1 shows the several phonemes presented by the three target letters in some words drawn from the passage for illustrative purposes. Silent letters constituted just eight percent of the nonmodal forms. There were 66 more target letters in the nonmodal category, an excess reflected in each of the four grammatical classes. The relative differences between modal and nonmodal frequencies for nouns, verbs and so forth were about the same for the three individual letters with the exception that the 34 the's added disproportionately to the h count and greatly inflated the functor category. It is apparent from Table 1 that word position balancing was not completely successful, with few final letters in the modal set. Of the 290 target letters, 42 were in words which contained two target letters. Most of these two-target cases were ch (e.g., chilly, much) and gh (e.g., ghosts, caught) words which minimized semantic and grammatical variability in the analysis of performance on the three letters.

Subjects

The experimental group consisted of 24 11- to 16-year-old boys and girls enrolled at a residential school for the deaf. Their mean hearing level in the better ear for 500, 1000 and 2000 Hz was 95 dB (re ANSI, 1964). The school considered itself "oral", claiming a philosophy of education in which the attainment of speech and lipreading proficiency was a primary goal¹. The control group consisted of 24 12- to 13-year-old hearing children enrolled in regular classes in a public school. All participants had normal or corrected-to-normal vision. Service in the experiment was voluntary and without formal reward.

Procedures

One third of the subjects in each group were assigned randomly to one of the three letters, c, g or h, which was written in the upper right-hand corner of the first page of the passage. Subjects were asked to read the story, crossing out all cases of the designated letter as they did so. When finished, they were told, questions about the story would be distributed. Deaf subjects were run in groups of eight. As a consequence, the experimenter was able to write the target letter on the blackboard, showing what was to be done with several explicit examples. The hearing subjects were run in a single group. Consequently, each hearing child's target letter was privately divulged and only neutral examples were given publicly. Otherwise, like the deaf, hearing subjects were instructed to cross out all instances of the designated letter while reading the story well enough to answer written questions about it later.

¹Nevertheless, a number of students were observed copiously fingerspelling and signing in their conversations outside the classroom.
Table 1. Phonemic, grammatical and lexical characteristics of the 290 target letters in their phonemically modal and nonmodal forms

c - modal: /s/ (N = 7; city, race), /k/ (N = 23; curly, because); total 30.
    nonmodal: /tʃ/ (N = 32; chair, marched), /ʃ/ (N = 1; machine), zero (N = 1; back); total 34.
g - modal: /g/ (N = 29; game, bag), /dʒ/ (N = 12; giant, cage); total 41.
    nonmodal: /f/ (N = 4; rough), /ŋ/ (N = 18; ring, singer), zero (N = 5; night); total 27.
h - modal: /h/ (N = 41; hot, ahead); total 41.
    nonmodal: /tʃ/ (N = 31; chased, much), /ʃ/ (N = 23; rushed, dish), /f/ (N = 8; phone, enough), /ð/ (N = 48; the, another), zero (N = 7; dough, fight); total 117.

All letters - modal total: 112; nonmodal total: 178.

Grammatical class - modal: nouns 44, verbs 23, adjectives 24, functors 21; nonmodal: nouns 54, verbs 33, adjectives 27, functors 64.

Word position - modal: initial 82, medial 28, final 2; nonmodal: initial 20, medial 132, final 26.
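The cell counts reconstructed above can be checked against the marginal figures stated in the text (290 target letters, with a 66-letter nonmodal excess). A small Python check of those sums, with counts as reconstructed and ASCII stand-ins for the IPA in the table:

```python
# Per-letter, per-phoneme target counts as reconstructed from Table 1.
modal = {"c": {"/s/": 7, "/k/": 23},
         "g": {"/g/": 29, "/dZ/": 12},
         "h": {"/h/": 41}}
nonmodal = {"c": {"/tS/": 32, "/S/": 1, "zero": 1},
            "g": {"/f/": 4, "/N/": 18, "zero": 5},
            "h": {"/tS/": 31, "/S/": 23, "/f/": 8, "/D/": 48, "zero": 7}}

n_modal = sum(sum(counts.values()) for counts in modal.values())
n_nonmodal = sum(sum(counts.values()) for counts in nonmodal.values())

assert n_modal == 112 and n_nonmodal == 178
assert n_modal + n_nonmodal == 290          # all target letters
assert n_nonmodal - n_modal == 66           # the nonmodal excess in the text
```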
Analysis and Results

It was clear from initial analyses that all subjects understood the task; they cancelled more cases of their designated letter than they missed and erringly crossed out few untargeted letters (less than one percent of the total errors were false positives). All 48 subjects responded to the three content questions (e.g., "What did the girl see on T.V.?") with sufficient information to show that they had, indeed, read the story while performing the letter cancellations.

Detection errors - those cases where a letter occurred in the text but was not crossed out - were analyzed for letter (c, g, h), pronunciation (modal, nonmodal) and grammatical class (noun, verb, adjective, functor) effects. The results of these analyses were a significant pronunciation effect (F[1,42] = 32.878; p < 0.001), a significant grammar effect (F[3,126] = 15.821; p < 0.001), a significant group-by-pronunciation interaction (F[1,42] = 11.008; p < 0.001) and a significant group-by-grammar interaction (F[3,126] = 7.478; p < 0.001). Letter effects were nonsignificant (F < 1). Hearing subjects displayed phonetic effects and grammatical effects to a much greater degree than did the deaf.

Figure 1 shows the probability of a detection error for modal and nonmodal cases, collapsed across grammatical class. The hearing obviously approached the letter cancellation task phonemically; they were almost three times as likely to miss a nonmodal form (0.25) as they were to miss a phonemically modal form (0.09). As a group, the deaf showed no modal-nonmodal difference (0.14 for both).

[Figure 1. Probability of a failure to detect phonemically modal and nonmodal letters by hearing and deaf subjects. M = modal, N = nonmodal.]
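The degrees of freedom reported above are consistent with a mixed analysis of variance in which group (2 levels) and target letter (3 levels) are between-subject factors, with eight subjects per cell, and pronunciation (2 levels) and grammatical class (4 levels) are within-subject factors; the paper does not spell the partition out, so the following worked check is an assumption:

```latex
% Between-subjects error term: 48 subjects in 2 (group) x 3 (letter) cells,
% eight per cell, so
%   df_error = N - (groups x letters) = 48 - 6 = 42.
% Within-subject tests then take their error df as (levels - 1) x 42:
\begin{align*}
\text{pronunciation (2 levels):}\quad & F[\,2-1,\ 42\,] = F[1, 42],\\
\text{grammatical class (4 levels):}\quad & F[\,4-1,\ (4-1)\times 42\,] = F[3, 126].
\end{align*}
```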
[Figure 2. Probability of a failure to detect letters in verbs, nouns, adjectives and functors by hearing and deaf subjects. V = verb, N = noun, A = adjective, F = functor.]
Figure 2 shows the probability of a failure to detect letters in nouns, verbs, adjectives and functors, collapsed across pronunciation category. The hearing show a modest linear increase in error probability from verbs to nouns to adjectives, then a somewhat sharper increase for the functors². The deaf, whose grammatical effects were significantly less, were about as likely to miss a letter in a noun as they were to miss a letter in a verb or adjective, with a slight increase for the functors. Of the 34 occurrences of the, the deaf and hearing error probabilities were 0.176 and 0.477 respectively.

²The verb-noun relationship - a particular case of verb centrality - is discussed elsewhere (Chafe, 1970, pp. 96-98).

Conrad (1972a, 1973) has found that some deaf students give clear evidence of phonological coding in short-term memory, others no evidence of it. He also has observed a relationship between phonological coding and speech quality, as have we (Locke and Locke, 1971). Accordingly, it seemed important to learn whether the deaf modal-nonmodal equivalence represented the average performance of two opposing groups of deaf children, one with relatively good, the other with relatively poor hearing or speech. An analysis of hearing subjects showed that 23, or 92 percent, committed larger proportions of error on nonmodal than on modal forms. Among the deaf, 13, or 54 percent, were more likely to miss nonmodal than modal forms. Is this just random variation or do these deaf subjects resemble the hearing in a way the 11 other subjects do not?
An analysis of audiometric records showed the "hearing-like" subjects to have approximately the same hearing loss (96 dB) as the 11 subjects who gave no evidence of a pronunciation effect (94 dB). However, a study by Chen (1976), which appeared during the writing of this report, suggests the tendency toward "phonemic reading" in the deaf may be linked to hearing sensitivity. Chen's profoundly deaf group showed no significant pronounced-silent gap for e detection while her hard of hearing and normal subjects did. If these group differences persist in future research, it will be interesting to see if they are due to variations in reading proficiency (Conrad, 1977) or speech quality, or to unequal tendencies toward the phonetic recoding of printed material (Conrad, 1973).

To ensure that none of the modal-nonmodal difference within the two groups was due to variations in the position of target letters within words, a separate analysis was performed. The proportion of error at initial, medial and final positions was calculated separately for modal and nonmodal cases and these proportions simply were averaged. This procedure gave equal weight to each of the three positions, regardless of their differing number of cases. The results of this analysis show the deaf mean proportion to be 0.14 for modal and nonmodal categories, as was the case without adjustment for word position. The hearing remain at 0.09 error for modal letters, but they drop to 0.20 for nonmodal letters. Hence, the pronunciation effect, which now occurs in the hearing at a ratio of two to one, holds regardless of the location of the letters within words. The two groups also are seen to perform at approximately the same overall level (0.14 error) when word position effects are controlled in this way.

There were 14 words which contained silent letters. Unfortunately for analytical purposes, these words were distributed across all three letters, two word positions and four grammatical classes. Consequently, few controlled comparisons of silent, modal and nonmodal forms were possible. However, these variables were controlled perfectly in five cases (all g; three medial-letter nouns, two medial-letter verbs) which occurred in all three forms. In these cases, the hearing subjects showed a steady increase in error from modal (0.09) to nonmodal-sounded (0.19) to nonmodal-silent (0.23). These data suggest that the pronounced-silent distinction of Corcoran (1966) is, indeed, a valid one. It is not, however, an experimentally necessary distinction; the modal to nonmodal-sounded increase is over 100 percent, large enough to reveal the pronunciation effect, with just a 21 percent increase from sounded to silent. The deaf, inexplicably, made the fewest errors where the hearing made the most though they, like the hearing, did do worse on nonmodal-sounded than on modal items. Deaf error probabilities were nonmodal-silent: 0.20, modal: 0.25 and nonmodal-sounded: 0.28.
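The position adjustment described above amounts to replacing the raw, case-weighted error rate with an unweighted mean over the three word positions. With p_i the error proportion and n_i the number of target letters at position i (initial, medial, final), the two estimators are:

```latex
% p_i = error proportion at position i; n_i = number of target letters there.
\bar{p}_{\mathrm{unweighted}} \;=\; \frac{1}{3}\sum_{i} p_i
\qquad\text{versus}\qquad
\bar{p}_{\mathrm{weighted}} \;=\; \frac{\sum_{i} n_i \, p_i}{\sum_{i} n_i}.
```

Because the modal set has almost no final-position letters (Table 1), the unweighted mean keeps position imbalance from masquerading as a pronunciation effect.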
Discussion

It seems appropriate to begin the discussion with a mild caveat. Subjects in this experiment were asked to locate certain letters while they read, a rather unusual request. It is possible that in attempting to carry out the experimenter's instructions our subjects relied upon a phonemic strategy to a degree greater than children characteristically do in silent reading. However, the functional links between reading and speech mediation are so clearly established, and the pronunciation effect so robust, it seems unlikely that letter-searching encouraged anyone to perform a wholly unnatural phonological act.

The lack of a pronunciation effect in the deaf children seen here, taken with the lack of reading proficiency in deaf children generally, seems to reinforce previous findings that speech mediation facilitates reading (Hardyck and Petrinovich, 1969, 1970). An alternative, that poor reading ability is reflected in the lack of pronunciation effects, seems less reasonable for several reasons. First, the deaf subjects in this experiment read a passage constructed specifically for them, a composition which took strict account of their word knowledge and structural sophistication. Second, recent research with hearing children shows that poor readers are less likely than good readers to recode letters and words phonetically, even if they are able to read them (Liberman, Shankweiler, Liberman, Fowler and Fischer, 1977; Mark, Shankweiler, Liberman and Fowler, 1977). Further, this difference probably is due not to dissimilar rehearsal proclivities or patterns but, instead, to a deficiency in the "accessing and use of a phonetic representation" (Mark et al., 1977).

There may have been some deaf children who silently spoke the passage but gave little evidence of it because their grapheme-phoneme rules were not correct or their phonetic executions not sufficiently palpable, in lieu of auditory imagery, to signal the presence of a target letter. But there were many deaf subjects who gave no evidence of inner speech. In its stead they went from print to meaning directly or by way of some nonspeech system, the most likely contenders probably being a shape code or fingerspelling. Though proficient fingerspellers do evidence somewhat better reading scores than less proficient fingerspellers (Quigley, 1969), it is not known whether this is because they do anything dactylic as they read.

As mentioned earlier, the deaf did better than hearing subjects in finding the h in the various instances of the. This is particularly interesting in view of recent evidence that words such as and and the may be missed by the proficient reader, probably because he decodes print from phrase-sized chunks (Drewnowski and Healy, 1977). It is conceivable, in this connection, that the deaf differ from the hearing not only with respect to their reading strategies
but also with reference to their reading units. Now if syntactic considerations can direct the reader's attention to or away from certain words, then it should also be the case that some words get phonologically recoded, some not, based on the structure of the sentence in which they occur. This would indicate that models of reading should emphasize the interdependence of syntactic and phonological levels of processing, as do the data of hearing and deaf subjects seen here.

One is primed by previous research to expect the deaf to be visually oriented, but even in the hearing one must assume the primacy of visual over phonemic characteristics. This is suggested by the total absence of false positives traceable to phonemic interference (e.g., where s or k were mistakenly identified in a search for c's). But in the hearing one must assume that phonemic recodes are relatively potent, capable of enhancing or competing with preliminary decisions based on the visual form of letters. Recall that the hearing did better than the deaf on modal letters, worse than the deaf on nonmodal letters; and this is not the first time speech mediation has helped and harmed the performance of normal hearing subjects (Blanton and Odom, 1968; Dornič, Hagdahl and Hanson, 1973). In the deaf, of course, our assumptions must be different. One may suppose that deaf children are visually oriented, but there is little reason to believe they would spontaneously generate phonemic recodes of a force sufficient to confirm or counter the results of strictly visual analyses. Perhaps that is exactly why the deaf seem to be more sensitive to visual information than do the hearing. There is, after all, the remarkable observation that deaf children spell more accurately than hearing children (Gates and Chase, 1926; Templin, 1948). However, on close inspection it is apparent that much of the advantage is attributable to the markedly lower incidence of "phonetic" errors in the deaf (Hoemann, Andrews, Florian, Hoemann and Jensema, 1976).

It is well documented that the deaf are poor readers, and it could be that many have a fairly fixed ceiling on their potential for reading achievement as currently educated. Studies in several countries and at different times typically indicate that this limit is near the fourth grade level when the reader graduates from deaf high school. Since the deaf children in our population apparently did not uniformly work through or consult or happen upon phonology in going for the meaning, it might be valuable to know what they did do, and how much of their present ceiling is due to what happened at that stage of the reading process.

References

Blanton, R. L., Nunnally, J. C. and Odom, P. B. (1967) Graphemic, phonetic, and associative factors in the verbal behavior of deaf and hearing subjects. J. Sp. Hear. Res., 10, 225-231.
Blanton, R. L. and Odom, P. B. (1968) Some possible interference and facilitation effects of pronunciability. J. verb. Learn. verb. Beh., 7, 844-846.
Chafe, W. (1970) Meaning and the Structure of Language. Chicago, The University of Chicago Press.
Chen, K. (1976) Acoustic image in visual detection for deaf and hearing college students. J. gen. Psychol., 94, 243-246.
Conrad, R. (1972a) Short-term memory in the deaf: A test for speech coding. British J. Psychol., 63, 173-180.
Conrad, R. (1972b) Speech and reading. In Kavanagh, J. F. and Mattingly, I. G. (Eds.), Language by Ear and by Eye: The Relationships between Speech and Reading. Cambridge, Mass., MIT Press.
Conrad, R. (1973) Some correlates of speech coding in short-term memory of the deaf. J. Sp. Hear. Res., 16, 375-384.
Conrad, R. (1977) The reading ability of deaf school-leavers. British J. educat. Psychol., 47, 138-148.
Corcoran, D. W. J. (1966) An acoustic factor in letter cancellation. Nature, 210, 658.
Corcoran, D. W. J. (1967) Acoustic factors in proof reading. Nature, 214, 851-852.
Corcoran, D. W. J. and Weening, D. L. (1968) Acoustic factors in visual search. Q. J. exper. Psychol., 20, 83-85.
Dornič, S., Hagdahl, R. and Hanson, G. (1973) Visual search and short-term memory in the deaf. Reports from the Institute of Applied Psychology, The University of Stockholm, No. 38.
Drewnowski, A. and Healy, A. F. (1977) Detection errors on the and and: Evidence for reading units larger than the word. Mem. Cog., 5, 636-647.
Edfelt, A. W. (1960) Silent Speech and Silent Reading. Chicago, University of Chicago Press.
Eriksen, C. W., Pollack, M. D. and Montague, W. E. (1970) Implicit speech: Mechanism in perceptual encoding? J. exper. Psychol., 84, 502-507.
Gates, A. I. and Chase, E. H. (1926) Methods and theories of learning to spell tested by studies of deaf children. J. educat. Psychol., 17, 289-300.
Gibson, E. J., Shurcliff, A. and Yonas, A. (1970) Utilization of spelling patterns by deaf and hearing subjects. In Levin, H. and Williams, J. P. (Eds.), Basic Studies on Reading. New York, Basic Books.
Hardyck, C. D. and Petrinovich, L. F. (1969) Treatment of subvocal speech during reading. J. Read., 12, 361-368, 419-422.
Hardyck, C. D. and Petrinovich, L. F. (1970) Subvocal speech and comprehension level as a function of the difficulty level of reading material. J. verb. Learn. verb. Beh., 9, 647-652.
Hoemann, H. W., Andrews, C. E., Florian, V. A., Hoemann, S. A. and Jensema, C. J. (1976) The spelling proficiency of deaf children. Amer. Ann. Deaf, 121, 489-493.
Kavanagh, J. F. (Ed.) (1968) Communicating by Language: The Reading Process. Bethesda, Maryland, U.S. Department of Health, Education and Welfare.
Kavanagh, J. F. and Mattingly, I. G. (1972) Language by Ear and by Eye: The Relationships between Speech and Reading. Cambridge, Mass., MIT Press.
Klapp, S. T. (1971) Implicit speech inferred from response latencies in same-different decisions. Paper presented at Western Psychological Association, April, 1971 (cited in Posner, M. I., Lewis, J. L. and Conrad, C., Component processes in reading: A performance analysis. In Kavanagh, J. F. and Mattingly, I. G. (Eds.), Language by Ear and by Eye: The Relationships between Speech and Reading. Cambridge, Mass., MIT Press.)
Liberman, I. Y., Shankweiler, D., Liberman, A. M., Fowler, C. and Fischer, F. W. (1977) Phonetic segmentation and recoding in the beginning reader. In Reber, A. S. and Scarborough, D. (Eds.), Toward a Psychology of Reading: The Proceedings of the CUNY Conferences. Hillsdale, N.J., Lawrence Erlbaum.
Locke, J. L. (1971) Phonemic processing in silent reading. Perceptual and Motor Skills, 32, 905-906.
Locke, J. L. and Locke, V. W. (1971) Deaf children's phonetic, visual, and dactylic coding in a grapheme recall task. J. exper. Psychol., 89, 142-146.
McGuigan, F. J. (1970) Covert oral behavior during the silent performance of language tasks. Psychol. Bull., 74, 309-326.
MacKay, D. G. (1968) Phonetic factors in the perception and recall of spelling errors. Neuropsychol., 6, 321-325.
Mark, L. S., Shankweiler, D., Liberman, I. Y. and Fowler, C. A. (1977) Phonetic recoding and reading difficulty in beginning readers. Mem. Cog., 5, 623-629.
Quigley, S. P. (1969) The influence of fingerspelling on the development of language, communication, and educational achievement in deaf children. Urbana, Illinois, Institute for Research on Exceptional Children.
Sears, D. A. (1969) Engineers spell acoustically. College Composition and Communication, 20, 349-351.
Templin, M. (1948) A comparison of the spelling achievement of normal and defective hearing subjects. J. educat. Psychol., 39, 337-346.
Wallace, G. and Corballis, M. C. (1973) Short-term memory and coding strategies in the deaf. J. exper. Psychol., 99, 334-348.
Résumé

Twenty-four deaf or hearing children silently read a printed text while crossing out all occurrences of a previously specified target letter. These target letters appeared either in a phonemically modal form, a category comprising "pronounced" letters (such as the g in badge), or in a phonemically nonmodal form, a class including "silent" letters and those pronounced in an atypical fashion (e.g., the g in rough). The hearing children detected significantly more modal forms than nonmodal forms; this pronunciation effect is to be expected in individuals for whom speech and reading stand in close functional relationship. The deaf children detected as many modal as nonmodal letters, which may be interpreted as showing that deaf children in general do not effectively link print with speech. The deaf children's results also were relatively unaffected by grammatical class, whereas hearing subjects detected a target letter more readily in a content word than in a functor. Questions relating to the teaching of reading to the deaf are discussed.
Cognition, 6 (1978) 189-221

Analogic and abstraction strategies in synthetic grammar learning: A functionalist interpretation*

ARTHUR S. REBER**
Brooklyn College of CUNY

RHIANON ALLEN
The Graduate Center of CUNY

Abstract

Subjects learned artificial grammars under two conditions of acquisition, paired-associate learning and observation of exemplars. The former procedure was strongly associated with the establishment of a fairly concrete memorial space consisting of specific items and parts of items and the use of an analogic strategy for making decisions about novel stimuli. The observation procedure was strongly associated with the induction of an abstract representation of the rules of the grammar and the use of a correspondence strategy for decision making. Moreover, this latter procedure led to more robust knowledge and better overall performance. Analyses of both objective response patterns and subjective introspections yielded coordinated data in support of this distinction. The relationships between acquisition condition and cognitive strategy are discussed from a functionalist point of view.
Over an extended series of studies we have been examining various aspects of a type of cognitive behavior which we have dubbed implicit learning (Reber, 1967, 1969, 1973; Reber and Lewis, 1977). The basic working hypothesis of this research has been that the behaviors we have observed (and will shortly summarize) have been reflective of the operation of the cognitive process of abstraction. Stimulated by several recent papers (Brooks, 1974, 1977, 1978; Baron, 1977) implicating the operation of analogic processes in conceptual and linguistic behaviors we take here a fresh look at this working hypothesis. We demonstrate it to be wanting under some circumstances but quite robust

*Our special thanks to Amy Lawrence for her efforts in the thankless job of transcribing the hours of tapes and for her help in preliminary data analyses. This paper was written while the senior author was a Fulbright Lecturer at the University of Innsbruck, Austria and the junior author was supported by Fellowships from CUNY and Canada Council.

**Requests for reprints should be sent to Arthur S. Reber, Department of Psychology, Brooklyn College of CUNY, Brooklyn, N.Y. 11210, U.S.A.
under others. As background for this let us summarize here briefly the basic conditions and principles of this essentially unconscious learning which are empirically supportable.

First, the material to be learned must be rule-governed, but in a rather complex or nonobvious way. Hence, our primary experimental vehicle has been the artificial language learning situation¹. For as with natural language and some social phenomena, an artificial language presents the subject with a large and flexible store of stimulus materials which reflect a structure based on probabilistic and often remote contingencies which are not immediately discernible.

¹In several experiments (Reber and Millward, 1968, 1971; Millward and Reber, 1972) the probability learning paradigm has also been used. In those published reports, however, the theoretical connection to implicit learning was left largely implicit. Work in progress will specify and clarify the similarities.

Second, the subject in the learning phase of the experiment must not approach the stimulus materials as if they contained a code to be cracked. Although such a consciously analytical strategy is certainly effective when the rules of the language are relatively uncomplicated (Miller, 1967) or when the structure of the language is made particularly salient (Reber, Lewis and Kassin, in preparation), by and large, conscious attempts to learn the underlying rules for stimulus formation are detrimental (Reber, 1973; Brooks, 1978). Other than this important factor, it does not seem to make much difference how the subject is exposed to the material, although a firm attentional focus is certainly required. Effective learning has been found using memorization of letter strings (Reber, 1967, 1973; Millward, in preparation), sequential observation of exemplars from the language (Lewis, 1975), simultaneous scanning of a large array of representative letter strings (Kassin and Reber, in press; Reber, Lewis and Kassin, in preparation) and paired-associate learning where the learning material is embedded in another task (Brooks, 1974, 1978).

Third, and this is fundamental, the essential nature of this phenomenon is that the learning that takes place is tacit and largely outside of the consciousness of the learner. That is, once this knowledge has been acquired, subjects experience great difficulty in verbalizing or otherwise explicating that which is known. Nevertheless, their knowledge can be put to use in a variety of other tasks. In particular, subjects are quite effective in differentiating new but acceptable letter strings from those which violate in some fashion the rules for letter order in the language (Reber, 1967, 1976; Brooks, 1978); they can effectively sort items from more than one underlying grammar (Brooks, 1978; Kantowitz, 1971); they can transfer their knowledge of underlying structure from one set of symbols to another, so long as the new set reflects the same structural regularities (Reber, 1969); and they can engage in rather
sophisticated problem solving such as the solution of anagrams within the context of the language (Reber and Lewis, 1977). This inequality between what is known tacitly and what can be said about this knowledge is a consistent finding. Even in the most intensive examinations of this effect, where subjects worked with anagram problems within the language for many hours over a four day period (Reber and Lewis, 1977), their conscious knowledge of the rules of the language always lagged behind the knowledge that they displayed through other modes of expression such as discrimination of acceptable from subtly unacceptable letter strings. As is the case with most natural language learning, as the depth of knowledge of the language increases so do both the ability to say what is known and the ability to use that knowledge, but the latter always remains ahead of the former. Finally, what subjects tacitly know is a valid, if partial, representation of the actual underlying rules of the language. The tacit knowledge of an implicit learner may not be a complete mapping of the structure of the stimulus environment but it most assuredly is not a distorted mapping. However, when subjects are not in an implicit mode during learning, as when they have been told in advance about the existence of rules and encouraged to discover them, then one finds evidence of nonrepresentative knowledge (Reber, 1973). On the basis of these results we have presented a picture of implicit learning which we feel has implications for epistemological theory (cf., Reber and Lewis, 1977). Our characterization has, as its core, an hypothesized abstraction process, a nonconscious, nonrational, automatic process whereby the structural nature of the stimulus environment is mapped into the mind of the attentive subject. This process of implicit learning shares certain similarities with the characterization of imagery presented recently by Pylyshyn (1973), with the point of view articulated by Polanyi (1966) in his analysis of tacit knowledge, with the perceptual theory of Gibson (1966, in preparation), with the prototype theory of Posner (1969) and with some of the recent arguments put forth by Shaw and his co-workers (see, e.g., Shaw and Pittenger, 1977). In all of these orientations to the problem of the form and structure of complex knowledge there is this notion of an abstract representational system. Although there are nontrivial differences between these several perspectives, the central similarity is of immediate interest - the common hypothesization of an unconscious abstraction process which maps veridically the intrinsic structure of the environment. In this context, a recent paper by Lee Brooks (1978) challenging the primacy of such abstraction systems in conceptual behavior is of some interest. At the heart of Brooks’ critique is the proposition that a great deal of the data which have been viewed as support for abstractions, prototypes, and other rule-governed cognitive processes can, in fact, be interpreted as due to
the operation of a nonanalytic, analogic process in which the mental representation is assumed to be little more (and little less) than an organized compendium of individuated instances. Thus, in his analysis of natural concept formation, he argues that a novel four-legged beast is perceived as a dog, not because it fits with the viewer's abstract feature system for dog but rather because it reminds him of some specific critter he has met before which was identified as a dog. More germane, Brooks also argues that a novel letter string is classified as "grammatical", not because it fits the subject's tacit representation of the synthetic grammar but rather because it reminds him of a letter string which he recalls from the set of strings previously labelled "grammatical".

Brooks' polemic, as he takes pains to point out, is not that abstraction processes are not an important component of conceptual behavior; his primary concern is with increasing the theoretical weight assigned to the analogic process. This general point of view has also been expressed by others who have presented data in keeping with analogic operations, e.g., Baron (1977). We concur with the contention that analogic processes may have greater generality than has been attributed to them and we concur with the stated aim of specifying the conditions under which such processes may be recruited.

Of direct relevance to this aim, then, is a brief description of just those conditions under which Brooks has observed evidence for the use of analogic rather than abstraction processes. His task was a paired-associates (PA) task. The stimuli were letter strings generated by two finite-state grammars similar to those used in a previously cited study (Reber, 1969). Each stimulus was paired with the name of either a city or an animal and the subject's task was to learn the proper pairings. Unbeknownst to the subjects, the two grammars were not systematically related to the simple city-animal dichotomy of which they were generally cognizant, but rather to a nonobvious New World-Old World dichotomy. After reaching criterion on the PA task, the subjects were informed for the first time of the critical relationship between letter strings and response classes and were then required to sort new strings according to whether they were examples of New World items, Old World items, or neither (the latter category to consist of items whose letter order was deemed to conform to neither of the underlying grammars). Subjects were successful in partitioning the new letter strings, averaging 62% correct where 33% is expected by guessing.

However, the important aspect of Brooks' experiment is that, given his procedures, it is highly unlikely that his subjects could have formed the kinds of abstract structures which we have hypothesized are at the core of artificial language learning. To further substantiate his claim that abstractions were not playing a part in this
study, Brooks ran a total of four separate control conditions. They are rather complex and we need not describe them here but as a package they reinforce his conclusion that his subjects were not forming abstractions based upon grammatical structure. Rather, they seemed to have been setting up an explicit memorial space which consisted of letter strings with their associated responses and using this information to make their judgements in an analogic fashion. That is, a novel letter string presented during testing was treated by a subject not as one which "intuitively fits my abstract representation of Grammar 1" but rather as one which "reminds me of Denver so it must be New World", or one which "looks like the one that was paired with lion so it must be Old World".

We have no quarrel with Brooks' analysis of his data. Moreover, we do not dispute that within this restricted context his findings can be interpreted as arguing against the need to hypothesize abstraction processes. Our question, rather, concerns the generality of his conclusions. Analogic processes are basically decision making processes which are encouraged by a particular kind of memorial space. The laboratory technique of PA learning, and possibly a variety of real world situations, encourages the formation of memorial representations of this sort. When individuals in laboratories or more natural environments are not so constrained, it is not clear that they will naturally form the individuated memories which are the prerequisites for the analogic processes. The issue here is really a simple pragmatic one: what kinds of circumstances optimize acquisition by individuation, thereby encouraging reasoning by analogy, and what kinds encourage more analytic, rule-induction processes which optimize decision making of a more abstract nature?

In this paper we address this issue within the context of the artificial grammar learning experiment. We demonstrate that subjects can be induced to tip the balance of cognitive functioning toward either the analogic or the abstract by controlling the manner in which they approach the stimuli during acquisition. Moreover, we are also concerned secondarily with a question which may have important pedagogic implications: within the context of complex stimulus environments such as we have here, is there any clear advantage accruing to one or the other of these cognitive strategies?

Part of our demonstration, interestingly, relies upon data which are somewhat uncommon these days, introspective reports. A word of explication then. The decision to dust off this experimental relic came about during a pilot study in which we successfully replicated Brooks' main findings. Several of the subjects used were experienced with artificial language learning experiments, having served in an earlier study. Although they faithfully followed instructions they protested that the PA procedure "felt different" from the
other procedure (memorization of exemplars without labels). We then ran several new subjects under a variety of learning procedures; to the very last they maintained that there were compelling differences in the cognitive processes recruited by the two tasks. Yet, in spite of these feelings on the part of our subjects, the objective data reported by Brooks (and those found by us in the replication) were for the most part highly similar to those found in our earlier studies. Thus, it seemed prudent to focus in upon these introspective reports and to use them as a basis for exploring this nebulous phenomenological difference.

Method

Subjects

The subjects were 10 specially selected advanced undergraduate and graduate students from the City University of New York. Prior to running in the experiment they were given a fairly thorough briefing on introspection, how it had been used in the past by turn-of-the-century Structuralists, and what was expected of them. In particular, they were given Külpe-like instructions about how they were to try to scan their cognitive acts immediately after they had occurred and to give as complete a verbal description of them as they could.

Stimulus materials

Grammatical (G) strings

The grammatical stimuli consisted of letter strings generated by the two equally complex finite-state grammars shown in schematic form in Figure 1. These two grammars are new ones and have not been used before. They were constructed because we wished to use grammars of sufficient complexity so that a large number of different letter strings could be generated while keeping string length at a minimum. As shown elsewhere (Reber, Lewis and Kassin, in preparation), the longer a grammatical letter string becomes the more salient its internal structure is and the more likely it becomes that subjects will try to explicitly "crack the code". These two grammars were used to keep these explicit processes at a minimum. The structure functions which mathematically characterize finite-state systems (see Reber, 1967; Chomsky and Miller, 1958) are identical for these two grammars; hence each generates exactly the same number of strings of any given length. There are exactly five G-strings of Length 3, seven of Length 4, 11 of Length 5, and 18 of Length 6, the longest used. Of these 41 strings, 20 were selected to be used as learning stimuli, 25 were used as testing stimuli (including five "old" strings from the learning set) and one 6-letter string was arbitrarily dropped.
[Figure 1. Schematic diagrams of the two finite-state grammars. An acceptable string of letters is generated by any sequence of state transitions from State 1 to any exit state. For example, in Grammar I the state sequence 1-2-4-3-5-5 generates the string MVRXR.]
Nongrammatical (NG) strings

These stimuli were created by systematically introducing letter position violations into grammatical strings. There were 25 such NG stimuli used during testing and they were constructed in the following fashion: five strings had an error in the initial letter, five had a violation in the second letter, five had an error in the next-to-last letter, five terminated incorrectly, two had a violation in a deep internal position, and the remaining three were grammatical strings "spelled" backwards. In this manner we were able to determine whether there were differential amounts of learning about the positional constraints for letters; the backward stimuli were introduced here to see what would happen when essentially all individual letter positions were wrong but the overall letter-to-letter constraints were kept intact.
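To make the generation and violation procedures concrete, here is a minimal Python sketch. The transition table is hypothetical - the actual Grammars I and II exist only as the diagrams in Figure 1 - but it is wired so that the caption's example path (1-2-4-3-5-5 yielding MVRXR) holds; the letter T is an assumed filler, and the violation rule is simplified to substituting any different letter, where the study screened substitutes against the full grammar:

```python
import random

# Hypothetical finite-state grammar in the spirit of Grammar I.
# Each entry: state -> [(letter emitted, next state)]; None is an exit.
GRAMMAR = {
    1: [("M", 2), ("X", 3)],
    2: [("V", 4), ("T", 2)],
    3: [("X", 5), ("V", 3)],
    4: [("R", 3), ("M", 5)],
    5: [("R", None), ("T", 5)],
}

def generate(min_len=3, max_len=6):
    """Walk from State 1 to an exit; resample until the length is in range."""
    while True:
        state, letters = 1, []
        while state is not None:
            letter, state = random.choice(GRAMMAR[state])
            letters.append(letter)
        if min_len <= len(letters) <= max_len:
            return "".join(letters)

def violate(g_string, pos, alphabet="MVTRX"):
    """Introduce a letter-position violation at `pos` (simplified: any
    different letter; the study used letters illegal at that position)."""
    wrong = random.choice([c for c in alphabet if c != g_string[pos]])
    return g_string[:pos] + wrong + g_string[pos + 1:]

# NG stimuli in the paper's style, e.g., five errors in the initial letter
# and three grammatical strings "spelled" backwards:
initial_errors = [violate(generate(), 0) for _ in range(5)]
backwards = [generate()[::-1] for _ in range(3)]
```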
Learning procedures

Paired-associates task (PA)

The 20 G-strings were used as the stimuli; the names of 20 North American cities as the response set. The pairing was arbitrary and changed randomly
for each subject. The materials were printed on index cards and presented to the subjects in four sets of five items each. Subjects worked with an individual set using a 5-second anticipation method until they responded with the correct city names for the entire set on two consecutive trials. Order of presentation was changed randomly on each trial. Each letter string stimulus remained in front of the subject for a full 10 seconds on every trial. After all 20 had been learned in this fashion the full set was tested in a random order without feedback. The number of correct responses on this final trial gives us a rough measure of how much information from the PA task the subject takes into the testing phase with him.

This procedure is, of course, a reduced version of Brooks' task. Since only one grammar and one coherent set of associated responses are used here, Brooks' elegant constraints which bias against the possibility of subjects inducing an abstract structure are missing. This, of course, is our point. As will become plain, it is the very nature of the PA task itself which encourages individuated memorial representation and reasoning by analogy quite independent of the subterfuge of Brooks' procedures.

Observation task (OBS)

The 20 G-strings were printed on index cards and presented one at a time to the subjects. Each stimulus was visible for 10 seconds. There were three such "runs" through the full set with order randomized each time. The instructions to the subjects were deliberately nebulous. They were told merely to pay the utmost attention to the letter strings but nothing else. This simple observation of exemplars is the least constraining procedure we have used in our published work in artificial language learning. Nevertheless, the learning levels achieved were quite high - higher than those found in an unpublished study using this technique (Lewis, 1975) and higher than found in several other reports using other tasks (Reber, 1967, 1973). The success of the OBS procedure here is due in some measure to the fact that we are using specially selected and highly dedicated subjects in this study. However, the usefulness of the OBS procedure has been previously demonstrated in other tasks (see Reber and Millward, 1968, 1971) and it was employed here because it minimizes the constraints upon the subject during learning and thus serves as an important focal point against which to view the highly constrained PA procedure.
After both learning procedures all subjects were run through the same standard test of well-formedness. They were all informed that the 20 stimuli they had seen during the learning session conformed to a set of underlying
"grammatical" rules which dictated the orders that letters may occur in and that they would be shown a further set of 100 strings. They were told that of the 100 exactly half were similarly grammatical and that half contained violations of the rules for letter order. After OBS learning the subjects were asked to decide whether or not each test string could be considered as "well-formed". After PA learning, in order to more closely replicate Brooks' conditions, subjects were asked to decide whether or not each string could be a city, with the learning set to be used as examples of letter strings that were acceptable cities. All subjects were told that "a few" of the test items would be strings borrowed from the learning set. These 100 items actually consisted of only 50 different strings (25 grammatical and 25 nongrammatical) displaying the properties described above. Each was presented twice to assess subjects' consistency of responding. Subjects were not informed of this repetition. The items were presented in a predetermined random order. All were printed on index cards and placed in front of the subject one at a time. There was no time limit on their responses and latencies were not recorded. No feedback about the correctness of the subjects' responses was given.
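Since each of the 50 distinct strings is judged twice, the test yields both an accuracy score and a response-consistency score. A hedged sketch of that scoring, with hypothetical data structures (`responses` maps each string to its two yes/no "well-formed" judgements):

```python
def score(responses, grammatical):
    """Return (proportion correct over all trials, proportion of strings
    judged the same way on both presentations)."""
    correct = consistent = 0
    for string, (first, second) in responses.items():
        truth = string in grammatical            # True for the 25 G-strings
        correct += (first == truth) + (second == truth)
        consistent += (first == second)
    return correct / (2 * len(responses)), consistent / len(responses)

# e.g., a subject who calls "MVRXR" well-formed twice and "XVRXR" once:
accuracy, consistency = score(
    {"MVRXR": (True, True), "XVRXR": (True, False)}, {"MVRXR"})
```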
Counterbalancing

All subjects were run twice; on one occasion they learned one grammar using the PA procedure and on the other occasion the other grammar using the OBS procedure. Order was counterbalanced so that five subjects were run using PA first and five using OBS first. At least a week separated the two sessions with a mean time between them of 19.8 days. In order to keep these several conditions distinct the following notational system is used throughout: PA-1st refers to those subjects who, at the point in time of the well-formedness task, have had but a single exposure to an artificial grammar learning task, that being within the context of the PA procedure. These five subjects are the ones who most closely resemble Brooks' subjects. OBS-1st similarly refers to subjects whose only experience at the time of testing was with the unstructured observation condition. In terms of learning experience, these subjects most closely resemble those we have run in previous experiments. OBS-2nd refers to subjects who have had previous experience using the PA technique (i.e., they are the PA-1st subjects) and PA-2nd refers to those who had previous experience with the observation procedure (i.e., they are the OBS-1st subjects). The designations PA and OBS will refer to each condition as a whole independent of the order in which it was run.
Introspective protocols

PA learning

Introspective reports were taken from each subject immediately after reaching criterion on each set. A tape recorder microphone sat in front of the subjects at all times and they spoke for as long as they wished about the task, describing in detail what they were doing, how they had learned that particular set, how the learning of that set was similar to or different from the learning of previous sets, and, most importantly, what kinds of general cognitive processes they were aware of using during the learning. After the final test using the full set of 20 items they were again asked for an introspective report, this time giving a more general overview of the task itself. The five PA-2nd subjects were asked to detail any differences or similarities that they felt between the procedures and, in particular, any differences in their cognitive modi operandi between the present PA task and the previous OBS task.

OBS learning

Introspective reports were taken immediately after each run through the set of 20 exemplars. As in PA learning, subjects spoke into the microphone freely about the task, what they were doing, how they were processing the letter strings on that particular run, whether they were doing anything different from what they did on previous runs, and what kinds of general cognitive processes they were aware of using during the session. The five OBS-2nd subjects were asked to describe in detail any differences or similarities in their cognitive processes between the two procedures.

Testing

The tape recorder was left running during the entire testing session and subjects were encouraged to keep up a running commentary. We asked them to provide as much detail as they could about the well-formedness decisions. In particular, they were to provide reasons and justifications for their judgements whenever they could. After 50 trials they were asked for a general statement about what they were doing, how they were reaching their decisions, and, specifically, how they were using the information they had acquired about the letter strings from the learning phase. After all 100 trials were completed they were asked for a general summary statement about the task and, when appropriate, to give a detailed comparison between the manner in which they made their decisions here compared with how they had made them after the other learning procedure. Finally, they were asked for an estimate of how well they thought they had done by giving an approximate percent of the items that they thought they had judged correctly.
Results: Introspective analysis
Learning
Virtually every subject reported that the two learning procedures felt very different to them. The core of the contrast is found in the simple fact that PA procedures have specific task demands while OBS procedures are relatively unstructured. Our PA procedure required that the subjects learn explicit responses to go with particular stimuli and, as one subject put it, "I had to learn the pairings and I used any gimmick I could - item length, geography, imagery, letter names, sounds, key words, and many idiosyncratic mnemonics some of which I will not tell you". The technique universally reported was to try to find elements of each stimulus that differentiated it from all others in that set and then use some mnemonic to pair those elements with their associated city. The PA learning was characterized as strongly elemental in nature. Subjects uniformly reported little or no interest in overall structure; in fact, holistic scanning proved useless. One PA-2nd subject commented, "In the other procedure (OBS) I thought I was getting a feeling for some kinds of rules but when I tried that here it didn't work so I went back to my memorizing tricks". The failure to find evidence of exploitation of rules for letter order during the PA task (reported on below) bears out these introspections.

Unlike the PA procedure, where every subject reported essentially the same experience, the OBS procedure produced a number of interesting stylistic differences and an intriguing diversity of introspections. Some subjects reported using broad visual scans so that, as one put it, "The shapes of the items began to make sense". Others used the acoustic properties of the letter sequences: "I pronounced the letter names in groups of two letters each and by the end I felt like I knew all the groups". Some evolved rather unusual techniques based upon other skills. As one subject put it after the second pass through the exemplars, "There is something vaguely linguistic about this; I can sense embeddings, phrase boundaries, and recurrences". Another reported, "If I make X and R 'vowels' I can pronounce the whole item, which makes things easy except when I have trouble finding the syllable breaks".

The OBS procedure was frequently cited as "easy" and, interestingly, as somehow less effective than the PA. We got reports like, "I don't think I'm learning anything here (OBS)", "I don't know what I'm doing this time", and this revealing comment, "Last time (PA) I searched for salient features, I found them and I knew them; this time (OBS) I feel passive, it's very holistic, like my right hemisphere is engaged - but I don't know very much".

Perhaps it is the Calvinist residue in our society that makes our subjects link up how hard they had to work during learning with how much they think they learned, or perhaps it's the fact that they were all products of an educational system that only rewards those who can verbalize their knowledge; but during and after the learning phase the OBS condition produced many comments about how little had been learned; the PA condition produced essentially none. However, as we report below, these estimates of knowledge were very different after the testing phases - after the subjects had had an opportunity to put their knowledge to work.

There were also some interesting differences in subjective feelings about the two procedures over the course of the learning. Every single subject volunteered some report to the effect that the PA task got harder as it went ("I'm building up proactive interference, I can feel it"). No such report occurred during OBS; in fact, several subjects mentioned that it seemed to be getting easier ("There's no strain, each one feels more and more comfortable").

These are the primary distinctions that our subjects reported. They also made many comments which indicated that there were indeed similarities between the procedures. One nearly universal report was that in both tasks bigrams became the most salient features of the letter strings. In PA they were cited as the most frequently usable distinctive cues for the mnemonics for response learning; in OBS they often became "groups" or "chunks" which made "scanning easier". Subjects were also conscious of other specific aspects of the letter strings, particularly the permissible initial and terminal letters, and the letters that could repeat. However, as we discuss below, although these similarities in introspection collected during learning are useful in identifying those aspects of complex structured stimuli which are likely to be learned, the manner in which these concrete pieces of information are used by subjects during the well-formedness task may depend upon the procedure which was used in their acquisition.

Finally, there were a few introspective reports from the learning sessions that hint at an important issue, the effect of one kind of learning procedure upon the other. In simplest terms, OBS-2nd subjects approach the OBS session in ways subtly unlike those subjects whose first experience with an artificial language is with the OBS technique, and similarly for the PA-2nd subjects. One report here is indicative: "Compared with last week's task (PA) I definitely found myself looking for specific patterns and occasionally getting in trouble because I had to keep changing my rules". This searching for patterns and hypothesis testing during OBS learning was only reported by OBS-2nd subjects, those who had been run under the PA procedure first. Below, these order effects are discussed in more detail.
The extent to which subjects were able to provide reasons for their decisions and the appropriateness of those reasons are discussed (objectively) in a later section. Here we are concerned only with the kinds of general introspections provided.

The two learning procedures produced rather different kinds of introspections about decision making during this task. The heart of the difference is contained in the report of our most verbal subject. After the PA session he reported, "I used the learning stimuli in two ways. One is simple, did I see this item or one very much like it before or not? The other is using my little pieces that I used for mnemonics, but little pieces don't help much in making these kinds of decisions. I don't like this task at all, it makes me feel very dumb". After OBS his summary was, "While I know some things like the ok first and last letters, almost all my decisions are based on things looking either very right or very wrong. Sometimes for some reason things came out and glared at me saying, 'bad, bad, bad', other times the letters just flowed together and I knew it was an ok item".

This report, and others like it, point rather directly to one of the central issues. To what extent are the processes being reported here reflective of the operation of an abstract, rule-induction process and to what extent are they indicators of an analogic process? Almost all our subjects reported that the primary outcome of the OBS procedure was an implicit, intuitive feeling about the overall structure of acceptable letter strings which they could use during testing. The PA learning, on the other hand, armed them with a more explicit and more superficial set of materials to use. One subject reported that after PA all she had to use were "concrete letter groups arranged on an image of a map of North America". After OBS this same subject commented about a "vague feeling of induction, like the intuitions I get in linguistics about what's permissible and what's not". Another reported, "Here (after OBS) it is pure abstract rules, pure. I never referred back to the original stimuli, only to what I knew about them. The other (PA) was very stimulus bound, very physical".

Perhaps the most striking comparison is given in the following two comments from one subject. After PA she reported, "I thought I remembered quite a few stimuli and their cities and all - my decision making was based on comparisons with these - and a lot of guesses". The OBS procedure, on the other hand, produced this summary, "I wasn't really referring back to the learning items themselves but rather back to what I thought about during observation. Mainly I stared at an item to get a sense of whether it looked right or not".

Interestingly, even the two subjects who did not directly report that the procedures seemed that different to them made comments that reflect these distinctions. One commented, "My general decision making here (PA) is really very similar to before except that here I think I remembered some learning
items and I used them by asking myself whether I'd seen each new one before or whether it was close enough to one so that it's confusible with it".

Specific aspects of the letter strings were often cited as important in decision making after both learning procedures. On the surface they appear very similar; first and last letters, bigrams, the occasional trigram, and recursions were frequently mentioned. But these elements manifested themselves cognitively in different ways depending upon the learning procedures used. After PA, subjects generally reported on specific aspects of letter strings that are or are not acceptable, e.g., "RXR is ok because cities have railroad crossings", "TRMT is Tremont St.", "XX is city crime", etc. The OBS procedure produced reports with a different flavor like, "MXVR? MX? or XM?, the MX hurts my eyes", "VX is good, XR is nice, maybe something in between would be good too, I'll say yes to that", and this gem, "MT? MT?, my right hemisphere says yes and my left says no. I have no idea whether I've ever even seen an initial MT before but it feels right so I'll take it".

Thus, although both conditions produce introspective reports which include frequent citations of specific properties of the grammars, the character of the reports differs from one procedure to the other. Introspections after OBS abound with references that have abstract, rule-like qualities; subjects refer to what can (and what cannot) be, what feels right (or wrong), and what is coherent (or not). After PA learning, their reports are strongly tied to specific physical properties of the letter strings and not to structural properties. Moreover, as is outlined below, they are overwhelmingly comments about what cannot be rather than what can be an aspect of an acceptable string.

In their final summary statements subjects almost invariably reported that whatever information they had acquired from the PA procedure was much more fragile and labile than what they had acquired from OBS. The PA procedure produced several comments like, "growing confusion", "disintegration of things, even my mnemonics", and "everything was beginning to look familiar". Only one subject (an OBS-2nd subject) reported feeling that way after the OBS procedure: "I had to rely on my gut level feeling here and I had less and less confidence in it as time went on". More frequently subjects commented on the lack of confusion after OBS. One reported that the decisions "actually got easier to make", and another made the observation, "what was fascinating was losing my Gestalt at one point for a couple of trials and then feeling it reform again".

These comments fit nicely with our final bit of introspective data - that concerning our subjects' sense of their own knowledge as evidenced by their estimates of how well they performed on each well-formedness task. These estimates were taken at the end of each task and they make an interesting contrast with the previously presented reports of several subjects that, prior
to the testing, they thought that they were learning more about the stimuli from PA than from OBS. After PA the mean estimated proportion of correct responses was 0.68; after OBS it was 0.74. These values are both slight underestimates of actual performance; the actual mean values of Pc were 0.74 and 0.81 for PA and OBS respectively. In general, then, our subjects did not feel that the knowledge gained during the PA learning procedure was as useful in making well-formedness decisions as that garnered from the OBS procedure, and their performance bears this out. However, what is revealing about these estimates is that it is only after OBS that the subjects have any valid sense of their knowledge and its application. The correlation between individual estimated and actual Pc's was 0.71 (p < 0.05) after the OBS procedure but a vanishingly small 0.07 after PA. After PA, subjects were essentially making blind stabs about their performance. As one put it, "I don't know, how well do people do on tests like this, 60%, 70%, 80%? I'll ball park it at 65%".

Note that the issue addressed here concerns knowing that one knows a thing and not that of knowing what one knows about that thing*. Although subjects know quite a bit after both learning procedures, they are quite insensitive to the validity and applicability of that knowledge acquired through the PA procedure. As one subject reported after the PA session, "I ended up with some very funny hypotheses about what was acceptable. If I am right about them I could have gotten 90% of these correct, if I'm wrong, I could actually be below chance". The OBS procedure resulted in very different kinds of feelings about how much was known. Perhaps one report sums it up best, "I surely don't know what I know but I knew when something was right and something was wrong; I don't feel nearly as dumb here".

In summary, our subjects tell us that the OBS procedure tends to produce knowledge which is abstract in nature but which feels intuitive. The experience of freely scanning a series of exemplars for 10 minutes leaves them with a sense of the acceptability of letter strings, a sense which they can effectively employ to judge new strings. Moreover, they tend to trust their implicit knowledge systems and make fairly accurate assessments of performance. The same subjects report that the PA procedure leaves them with a very different kind of memorial representation. They know some whole items and many parts of others but they report very little in the way of structure. New strings are
consequently judged largely by recognition and analogy, and estimates of performance are not in keeping with actual performance. Clearly, then, OBS gives us self-confessed "abstracters" and PA gives us "analogizers". Importantly, though, both give us some implicit learning. Under both conditions learning occurs largely in the absence of explicit code-breaking strategies; our subjects still cannot tell us very much about what they know; their memorial spaces are, by and large, accurate representations of stimulus structure; and their conceptual knowledge can be put to use in dealing with novel stimulus materials.

It is rather interesting that all of these analyses and interpretations were based upon introspective, subjective data. In the following section we present data which perhaps are more palatable to contemporary researchers. However, as will become clear, the objective data are very tightly coordinated with the introspective data, and the two nicely complement each other in providing a picture of the learner as either an unconscious abstractor of deep structure or as a nonanalytic analogizer who works with a more superficial memorial space.

*The that in this sentence should not be taken as strictly synonymous with the that in G. Ryle's (1949) distinction between "knowing that" and "knowing how". The issue which interests us here is the relationship between knowing or sensing that one does indeed know a thing and one's ability to explicate or formalize that thing. Ryle's division of knowledge into declarative and procedural has relevance in that what our subjects know may be regarded as largely procedural, but we will not pursue that issue here.
Results: Objective analyses
Learning

There are, of course, no concrete learning data from the OBS condition since subjects made no overt response to the stimuli. The PA procedure, however, produced some interesting results. First, there was no statistically detectable improvement in subjects' ability to learn the S-R pairs over the four sets. The mean number of trials to criterion for Sets 1-4 was 5.1, 4.6, 4.1, and 4.4, and the mean number of errors was 4.4, 4.3, 2.7, and 3.0. This lack of trend contrasts sharply with data reported in other studies of artificial language learning where subjects were exposed to grammatical exemplars via a simple memorization technique (Reber, 1967, 1969, 1976). In all of those studies subjects showed considerable improvement in working with grammatical letter strings, a result that we and others (Miller, 1967) have taken as evidence that subjects are learning to exploit the structure inherent in the stimuli. It is not clear what the lack of improvement here indicates. One possibility is that under PA conditions subjects are so intensely involved in selecting out idiosyncratic properties of individual items for use in forming associations that they fail to attend sufficiently to the underlying structure. Many of the above introspective reports would seem to support this interpretation. On the other hand, we may merely be observing a cancellation effect in which the gradual apprehension of structure that leads to improved coding is masked
by a systematic buildup of proactive inhibition. Again, the subjects' introspections lend credence. There is very little in the way of hard data to differentiate between these possibilities. One relevant finding is an observation from an earlier study (Reber, 1969). In that experiment, in which grammar learning was through the memorization of exemplars, the accumulation of PI was slight and, using Wickens' (1972) release procedures, barely detectable. Thus, if there is a considerable PI buildup effect it may be indigenous to the PA procedure. However, some of the analyses presented below lead us to question even further the extent to which the PA procedure provides subjects with much structural information.

Independent of this issue, it is clear that subjects finished the PA learning phase with a good deal of information of some kind about the exemplars. In the final, no-feedback test of all 20 PA items they averaged 9.9 correct, or essentially half. Given the considerable degree of similarity between the learning exemplars this number is impressively high. Finally, we looked to see if there was any relationship between each individual's performance on this final test and his ability to judge the well-formedness of letter strings. The correlation was positive but fell well short of significance, r = 0.27. However, with only ten subjects it is not possible to determine whether a robust relationship exists here or not.

Well-formedness task
Here we present two kinds of analyses. One is based upon an evaluation of correct responding, and the data are presented in terms of the probability of a correct response in the assignment of "grammaticality" of a letter string (Pc). The other analyses are based upon the justifications that subjects provided for their judgements. In these latter cases we will consider not merely whether or not a given response was correct, but also whether or not the reason given for that response is an accurate reflection of the underlying grammar. In several of these analyses, response biases and unequal numbers of items in particular classes will necessitate a correction for guessing. The appropriate procedure in situations such as these is to calculate a corrected P'c = (Pc - Pg)/(1 - Pg), where Pg is the probability of a correct guess and Pc is the observed probability of a correct response. Whenever appropriate this corrected P'c was used.
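As a worked example of this correction (the values are invented for illustration), a response class with a 0.5 chance of a correct guess and an observed Pc of 0.75 corrects to 0.50:

```python
def corrected_pc(p_c, p_g):
    """Correction for guessing: P'c = (Pc - Pg) / (1 - Pg),
    where Pg is the probability of a correct guess and Pc the
    observed probability of a correct response."""
    return (p_c - p_g) / (1.0 - p_g)

print(corrected_pc(0.75, 0.50))  # -> 0.5: only half the hits exceed chance
```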
Probability of a correct response (Pc)

Table 1 gives the mean Pc's for G and NG items after each learning procedure; the procedures are separated according to whether each was run first or second.

Table 1. Pc on well-formedness task

Item status   Observation                        Paired Associates
              OBS-1st   OBS-2nd   Means          PA-1st   PA-2nd   Means
G             0.872     0.820     0.846          0.684    0.744    0.714
NG            0.752     0.788     0.770          0.764    0.768    0.766
Means         0.812     0.802     0.808          0.724    0.756    0.740

An analysis of variance revealed an overall difference between the
procedures, F(1,8) = 7.93, p < 0.025, and a significant learning procedure by item status interaction, F(1,8) = 5.62, p < 0.05. Note that this advantage in the ability to judge well-formedness following OBS accrues even though the PA procedure provided subjects with almost twice as much exposure time, on the average, to the stimuli during learning as the OBS procedure (18.5 minutes vs 10 minutes). Thus, with both conditions producing highly significant learning rates, the OBS procedure is clearly more efficient at imparting exploitable information about the structural status of letter strings. However, as the interaction implies, this is information which gives the OBS subjects an advantage in the ability to identify G-strings (0.846 vs 0.714) while having essentially no effect upon the rate of detection of NG-items (0.770 vs 0.766). The source of this effect is highly interesting and will be revealed in the more detailed analyses below.

The analysis of variance failed to detect any overall effect of order and there was no procedure by order interaction. Initially these results surprised us since the subjects' introspections suggested that there was a lasting "contamination" of the first learning experience upon the second. However, as we discuss below, raw Pc is not the most sensitive measure for determining which decision making procedures a subject is using, and several important order and procedure effects emerge with more fine-grained analyses of subjects' justifications.

Several other specific analyses were carried out. All subjects were better at detecting NG items when the violations occurred in the initial letter position (Pc = 0.87) than in any other position (Pc = 0.71), but were best when items contained multiple errors, as in the case of the backward strings (Pc = 0.95). No other position effects were found and no group differences were observed, the above values being means for all conditions. Moreover, no effects of the length of test strings were found under any condition. These results were all expected; they are in keeping with findings discussed elsewhere (Reber and
Lewis, 1977). They are presented here because they point out important commonalities between our procedures here, those employed in earlier studies, and those used by Brooks (1978). As we emphasized earlier, if only the superficial data are looked at and no specific comparisons are made, the subject who emerges from a PA learning procedure can appear very much like one who emerges from an OBS learning procedure.

Finally, the Pc data are germane to an issue that is of some theoretical importance, the degree to which the knowledge that subjects bring to the well-formedness task can be said to be representative of the underlying grammatical structure reflected in the exemplars. It has been noted previously (Reber, 1976) that when neutral instructions are given in the learning condition, subjects rarely induce rules that are not representative of the grammar. That is, the rules by which they operate are accurate, if incomplete, reflections of the underlying structure of the letter strings. On the other hand, when given instructions which encourage explicit rule searches, subjects frequently elaborate rules which are not representative of the grammar. In addition, this conscious construction of hypotheses about letter sequences seems to partially block the implicit procedures whereby veridical, probabilistic relations can be mapped. Evidence of these two types of rule induction can be derived from an analysis of the pattern of responses to the two presentations of each test item. The logic is straightforward: a large number of items misclassified on both presentations (EE) relative to the number misclassified only once (CE and EC) indicates that the subjects are using inappropriate rules when making decisions about grammatical status. Table 2 shows all four possible patterns for each learning condition and order of running.
Table 2. Patterns of responding to successive presentations of test items on the well-formedness task

Pattern   Observation              Paired Associates
          OBS-1st    OBS-2nd       PA-1st    PA-2nd
CC        181        182           159       165
CE        21         17            25        35
EC        23         22            19        16
EE        25         29            47        34
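A small sketch of how these patterns can be tabulated from the two passes through the test list, together with the EE comparison described in the next paragraph (all responses here are invented toy data):

```python
from collections import Counter

def response_patterns(first_pass, second_pass, answer_key):
    """Classify each test string by correctness on its two
    presentations: CC, CE, EC, or EE (C = correct, E = error)."""
    patterns = Counter()
    for item, truth in answer_key.items():
        a = "C" if first_pass[item] == truth else "E"
        b = "C" if second_pass[item] == truth else "E"
        patterns[a + b] += 1
    return patterns

def ee_excess(patterns):
    """Compare the EE frequency against the mean of CE and EC, the
    baseline expected if the two errors on an item arose independently
    rather than from a consistently applied wrong rule."""
    expected = (patterns["CE"] + patterns["EC"]) / 2.0
    observed = patterns["EE"]
    chi = (observed - expected) ** 2 / expected if expected > 0 else float("nan")
    return observed, expected, chi

# Invented toy data: 4 strings, their true status, and two passes of judgements.
key = {"VXR": True, "MTX": False, "XRVM": True, "TTTM": False}
p1 = {"VXR": True, "MTX": True, "XRVM": False, "TTTM": False}
p2 = {"VXR": True, "MTX": True, "XRVM": True, "TTTM": False}
counts = response_patterns(p1, p2, key)
print(counts, ee_excess(counts))
```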
In testing for the significance of the EE rate, comparisons are made between the frequency of EE items and the mean of CE and EC. This procedure prevents the χ² values from being inflated by the inequality of the frequencies of CE and EC. The overall EE rate here is significantly higher than that expected under the assumption that all errors of classification are produced solely by a lack of knowledge about grammatical structure, χ²(20) = 31.49, p < 0.05. An examination of the individual subjects showed, however, that this effect is contained entirely in only the five PA-1st subjects. Of these five, two individuals showed a significantly high EE rate, χ²(1) = 4.27 and 4.01, p's < 0.05, and the rate for this group as a whole was highly significant, χ²(5) = 18.54, p < 0.005. On the other hand, none of the OBS subjects showed a significantly high EE rate and the overall rate for this condition is not significantly different from chance, χ²(10) = 8.94. The same is true of the five PA-2nd subjects, χ²(5) = 4.01. Therefore, the only evidence for systematic nonrepresentativeness in the use of rules occurs after the PA learning task and not after the less structured OBS task. Moreover, subjects who had had previous experience with the OBS procedure, even if this experience occurred as long ago as six weeks before the PA task, showed no such tendency to use rules which did not accurately reflect underlying structure.

In passing, it should be noted that the CE and EC rates in Table 2 hint at some interesting issues. First, note that the overall CE rate after PA learning is much higher than after OBS (60 vs. 38). As we cited above, after PA learning several subjects had reported, with no little distress, that they felt that they were doing more and more poorly on the task as time went on. To check on these hints, we compared the overall Pc from Trials 1-50 with that from Trials 51-100. As the CE rates suggest, this comparison revealed a significant decrease in Pc after PA learning (0.77 vs. 0.71; t(9) = 2.13, p < 0.05) but no change after OBS learning (0.80 vs. 0.82). Whatever is being learned by subjects under the PA conditions seems to be somewhat less stable than that learned under the OBS condition. Second, the combined CE and EC rates can, in a very rough way, be used as an indication of the tendency for subjects to contradict themselves. There are, of course, two kinds of items here: those on which the subject merely guesses and happens to get it right on one presentation and wrong on the other, and those where the subject has actually used two conflicting decision rules. Below, we will present a more refined analysis based upon the subjects' articulated self-contradictions which will help to clarify these two procedures in decision making. In particular, we shall see that articulated self-contradictions are more likely to occur after PA learning than after OBS. These analyses, while suggestive of different processes operating after the two learning procedures, do not permit a determination of whether such nonrepresentative
rules are explicitly or tacitly held constructions. We can, however, glean some understanding here if we look at the specific reasons that subjects provided for their justifications.

Justifications

The justifications procedure is well known in developmental psychology, largely from the work of Piaget. Piaget has long used subjects' justifications in assessments of cognitive sophistication since he realized, as many other cognitive psychologists have not, that simple responses to experimental stimuli do not necessarily accurately reflect the "correct" underlying cognitive processes. For example, two children who perform equally well in their judgements on a number conservation task are often revealed to be different cognitive creatures when asked to explicate their responses (see, e.g., Piaget, 1929). Similarly, two subjects who reject a particular letter string may turn out to have done so for very different kinds of reasons. The following analyses are thus concerned with the specific justifications that subjects made with respect to each test letter string. Our interest here is with explicitly stated reasons and in uncovering evidence for general principles that underlie these explications.

Initially, however, it must be appreciated how difficult it is to get specific justifications from subjects in these experiments. Even though we are using sophisticated, hand-picked subjects who have agreed to introspect for us, they still only supplied reasons for 821 of the 2000 decisions that they made. As we and others have repeatedly emphasized (Reber and Lewis, 1977; Brooks, 1978; Baron, 1977), conceptual knowledge acquired through the mechanisms of implicit learning is strikingly resistant to conscious explication. Nevertheless, these 821 justified responses form a sufficiently large data base so that some interesting differences in decision making after the two training procedures become apparent. Moreover, we will be able to make comparisons between these trials and those on which subjects were not able to provide reasons.

In order to carry out these analyses we had to make some decisions about how to categorize the wide-ranging rationales that subjects supplied. First, all justifications were classified as falling into one of the following types based upon the subjects' explicit verbalized rule: (a) single letter rules - e.g., "you can (or cannot) start with an X"; (b) bigram rules - e.g., "TX can (or cannot) occur like that"; (c) trigram and longer sequence rules - e.g., "MXR looks wrong (or right) there"; (d) other rules - here we included those idiosyncratic rules which were otherwise unscorable, e.g., "you cannot have any three letters the same right at the beginning in something that only has a total of four letters"; and (e) old - justifications based upon the subjects' belief that
the test item was one of the ones used during the learning phase. Note that subjects often reported that an item "reminded them" of one seen during learning. These were not counted here, as such remarks were not always used by subjects as rationales for accepting an item. In any event they will be discussed separately, since such responses are particularly important in evaluating the analogic model. Second, wherever possible (i.e., all cases except old and other) the aspect of the letter string cited was scored for location. Here, because the length of strings varied from three to six, we used three broad categories: initial, internal, and terminal. Third, justifications that were "appropriate" were analyzed separately from those that were "inappropriate". An appropriate justification is one where the subject's reason for a response is an accurate reflection of the constraints of the grammar. Note that an appropriate justification does not necessarily lead the subject to a correct response - on occasion a subject would cite one or more appropriate reasons why a letter string was acceptable but misclassify it because of a failure to detect a violation elsewhere in the string. Similarly, an inappropriate justification does not always indicate that the subject made an error - we observed many occasions where a subject inappropriately cites a perfectly acceptable bigram as "not acceptable" but classifies the item correctly because there was an undetected (or at least uncited) violation elsewhere.

Table 3 presents the 694 appropriate justifications which were scorable for location. The most striking thing about these data is the remarkable similarity between the specific aspects of letter strings cited after both learning procedures. Regardless of how the material was presented, the initial position is highly salient, the terminal position somewhat less so, and the internal positions consistently underrepresented in these explicit and appropriate citations. This general anchor position effect is one of the more robust findings in synthetic language learning studies (see Reber and Lewis, 1977) and undoubtedly reflects a general propensity to concentrate attention on the beginnings and ends of informational units during learning (see also, Bever, 1970). Similarly, after both training procedures, subjects display a strong concentration on bigrams. Although this focus was slightly stronger after OBS than after PA, this difference was not significant. Clearly, a considerable proportion of subjects' articulated knowledge can be characterized as an awareness of permissible and non-permissible letter pairs, particularly in the first two positions in a string. Note that initial bigrams account for fully 29.4% of the total citations. This salience of bigrams during learning has also been observed in other studies (Reber and Lewis, 1977) and may indicate that the bigram represents the level at which relative probabilities and invariances can be most efficiently mapped.
Table 3. Number of appropriate justifications cited, broken down by rule type and location

            Observation Learning                    Paired Associates Learning
Rule type   Initial  Internal  Terminal  Total      Initial  Internal  Terminal  Total
Single      45       4         25        74         41       5         33        79
Bigram      126      54        79        259        78       30        52        160
Trigram+    16       10        33        59         26       6         31        63
Totals      187      68        137       392        145      41        116       302
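The type-and-location scoring that underlies Table 3 is mechanical enough to state in a few lines; the following sketch (with an invented string and fragments) mirrors the single/bigram/trigram+ and initial/internal/terminal categories defined above:

```python
def score_justification(string, fragment):
    """Classify a cited fragment of a test string by rule type
    (single letter, bigram, trigram or longer) and by location
    (initial, internal, terminal)."""
    n = len(fragment)
    rule_type = {1: "single", 2: "bigram"}.get(n, "trigram+")
    i = string.find(fragment)
    if i < 0:
        return rule_type, None               # fragment not in the string
    if i == 0:
        location = "initial"
    elif i + n == len(string):
        location = "terminal"
    else:
        location = "internal"
    return rule_type, location

print(score_justification("MXVRT", "MX"))    # ('bigram', 'initial')
print(score_justification("MXVRT", "VR"))    # ('bigram', 'internal')
print(score_justification("MXVRT", "T"))     # ('single', 'terminal')
```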
The types of rules formulated and explicated by our subjects in these ways do not show any learning procedure effects, and may thus be highly constrained by general cognitive principles and the learning material used. As we mentioned earlier, in many ways the subjects whom Brooks (1978) cites as "analogizers" and those whom we have called "abstracters" look very much alike. There are, however, many differences between the patterns of justifications provided following PA and those following OBS. These can be seen in the data presented in Table 4. Here all responses on the well-formedness task are scored for whether or not that response was justified, the grammatical status of the item, and the learning procedure.

Table 4. Number of justified and unjustified responses to G and NG items

Observation Learning
              Justified Responses              Unjustified Responses
Item status   G     NG    Total   Pc          G     NG    Total   Pc
G             128   30    158     0.810       295   47    342     0.863
NG            36    189   225     0.840       79    196   275     0.713
Total         164   219   383     0.828       374   243   617     0.796

Paired Associates Learning
              Justified Responses              Unjustified Responses
Item status   G     NG    Total   Pc          G     NG    Total   Pc
G             113   57    170     0.665       244   86    330     0.739
NG            36    232   268     0.866       81    151   232     0.651
Total         149   289   438     0.788       325   237   562     0.703

Note first that after both learning procedures there is a distinct tendency to justify NG responses relative to G responses, F(1,8) = 16.32, p < 0.005. That is, clearly articulated criteria are predominantly for detecting violations, whereas the tacitly operating criteria are largely for decisions about what "feels right". However, the two learning conditions produce rather different levels of effectiveness in these two modes of operation. As the values of Pc in the table show, the likelihood that an individual decision about grammaticality will be correct is not the same in all cases. In particular, after PA learning explicitly justified responses are correct with a significantly higher probability than those items not so justified, t(9) = 2.35, p < 0.05. After the OBS learning experience the difference between justified and unjustified responses is not different, t(9) < 1. Further, if we draw specific comparisons between the two procedures we find that there is no statistical difference on justified items, t(9) < 1, but a highly significant difference on the unjustified items, t(9) = 3.88, p < 0.01.

The essential difference here is between the abilities of subjects to deal effectively with letter strings when they have no explicit verbalizable knowledge to guide them. After both learning procedures subjects emerged with a small but solid body of articulated rules which they used to make decisions about the well-formedness of novel letter strings. But only after the OBS experience do subjects also have a solid but tacit apprehension of grammatical structure which serves them well on those occasions when they have no conscious criteria; after PA there is relatively little in the way of this tacit knowledge of structure.
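Each Pc entry in Table 4 is simply the proportion of a row's responses that match the item's true status; a quick check in code against the Observation panel's justified counts:

```python
def pc(correct, total):
    """Proportion correct for one row of Table 4."""
    return round(correct / total, 3)

# Observation learning, justified responses:
#   G items: 128 judged G out of 158; NG items: 189 judged NG out of 225.
print(pc(128, 158))              # -> 0.81
print(pc(189, 225))              # -> 0.84
print(pc(128 + 189, 158 + 225))  # overall -> 0.828
```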
"Old" and "reminds me of" justifications

These two kinds of response justifications deserve special treatment since they represent the primary evidence for the analogic process. If subjects are operating primarily by referencing back to the stimuli from the learning session and looking for similarities between test stimuli and the stored exemplars, then we ought to find a large number of items on which the subjects justify their responses by citing an item as being an "old" item or one that "reminds me of...". Recall that all subjects were informed of the existence of "a few" test items which were identical to ones used during learning. The data here are rather clear. These types of justifications are not particularly common after either procedure, although they are significantly more likely to be used after PA than after OBS. After PA subjects produced nearly twice as many "old" justifications as they did after OBS (77 vs. 40), t(9) = 2.70, p < 0.05, and fully ten times the number of "reminds me of" remarks (30 vs. 3), t(9) = 3.81, p < 0.02. Note that almost all of these items which
subjects cited as being or reminding them of learning items were, in fact, grammatical (Pc = 0.88 and 0.83 for OBS and PA respectively), so it is clear that subjects were rather judicious about using an articulated analogic criterion. However, after PA learning subjects were significantly superior at actually recognizing items from the learning set. Of the 77 items cited as "old" after PA, 32 actually were old (P'c = 0.37); of the 40 cited after OBS only five actually were (P'c = 0.09), t(9) = 3.60, p < 0.02. Thus, PA learning seems to provide subjects with a better memory for the actual learning stimuli than does OBS learning but, as the overall Pc values reflect, it may do so at the expense of knowledge about structure.
Order effects

The final analyses to be presented here look at several results which suggest that the two learning procedures have an effect upon each other. The situation here is quite complex, which, of course, is one of the prices one pays for using within-subject designs. First, there is the problem introduced by the fact that when the second session begins the subjects are no longer innocents. They are now aware that they have been, and presumably will once again be, working with complex, rule-governed material. We already know (Reber, 1973; Brooks, 1978) that such information impacts upon the cognitive approach that a subject will take to an artificial language learning task. Second, the subjects are also aware that they are expected to know something about the structure of the materials from the training phase of the study and that their knowledge will be assessed using a well-formedness task. They also know that the second training procedure will be different from the first and that they are expected to introspect into the comparisons and contrasts in the cognitive processes utilized in each. Finally, it is a distinct possibility that subjects will transfer some of the cognitive processes used in the first task to the new task.

Given all these factors it is noteworthy that the role played by the order variable was not revealed by the analyses of variance on Pc and number of justifications. Rather, it emerges in small ways when the fine grain of our subjects' behavior is examined. The following results show clearly that subjects modify their cognitive modes on the second run as a result of their previous experience, but that the constraints of the training procedure restrict these attempts to but minor nuances of style.

One of these order effects was noted earlier in the finding that only PA-1st subjects produced a high enough EE rate for us to conclude that they had used a significant number of nonrepresentative rules. The other effects, all of which are based upon subjects' justifications for their responses, are presented in Table 5.

Table 5. Comparisons between first and second run subjects on various errors of response justification

            Inappropriate          G-Bigrams              Self
            Justifications         Misclassified          Contradictions
Procedure   1st      2nd           1st      2nd           1st      2nd
PA          89       31            63       28            23       8
OBS         16       48            4        32            5        8

The inappropriate justifications measure contains all occasions where a subject justified a decision by citing one or more aspects of a letter string in a way that did not reflect the underlying grammar. The misclassified G-bigrams measure includes all cited letter pairs that were actually grammatical but were called unacceptable. Note that the inappropriate justifications measure includes these cases. We present them separately because they provide a good index of subjects' lack of knowledge of permissible letter-to-letter contingencies. The self-contradictions measure includes all those instances where a subject's verbalized rule on one trial was incompatible with one articulated on a previous trial.

All of these cases show the same pattern. There are large differences between first-run subjects and small (or no) differences between second-run subjects. For example, of the subjects in their first experimental session, only the PA subjects display a large number of erroneous, inappropriate justifications; OBS-1st subjects make very few such errors. However, in the second experimental session, all subjects display a moderate number. This result is consistent with the notion that subjects who are not allowed to learn a grammar with neutral instructions formulate nonrepresentative rules. While OBS-1st subjects learned under such neutral conditions, OBS-2nd subjects were aware that they were working with complex, rule-governed material. This knowledge, however, has the opposite effect of lowering nonrepresentativeness in PA subjects, although the same operation of erroneous and explicit rule induction is probably involved. That is, although the awareness of the existence of rules encourages rule induction that is occasionally erroneous, it also results in an attention to rules and structures that seems to be largely lacking in the PA-1st subjects. It would seem that knowledge that a system
is rule-governed does not always place one at a disadvantage in learning, but interacts with the task structure under which one learns (a point to be pursued in detail in Reber, Lewis and Kassin, in preparation).

As can be seen in Table 5, a large proportion of these errors are occasions on which subjects identify permissible bigrams as being nonacceptable. Most of the erroneous rules that our subjects verbalized were rules about what cannot occur rather than what can occur. It should not be assumed, however, that these justifications necessarily reflect rules with which subjects entered the well-formedness task. To the contrary, our subjects' remarks indicate that many of them were formulated spontaneously during this task. In fact, this phenomenon of mislabeled G-bigrams relates directly to the memorization of and reliance on learning task items, for these justifications were frequently of the form, "RM? I don't remember seeing RM before. Perhaps that's not right. No, you can't have an RM". Essentially what we have here is an example of a subject inventing a "rule" about the unacceptability of a particular bigram based simply upon a failure to find it in memory. The five PA-1st subjects alone produced fully half of these bigram errors, doubtlessly due to their need to rely on their fragmentary memories of items from the learning set. However, these same subjects reduce such citations by half on the second session, where the OBS procedure allows them to devote relatively more attention to the overall structure of letter strings. On the other hand, OBS-1st subjects commit essentially no such errors, although having experience with PA learning (OBS-2nd) causes them to label a moderate number of grammatical bigrams as ungrammatical. In neither learning condition was any systematic relationship observed between the occurrence of these errors and the frequency or position of such mislabeled bigrams in learning set items. Note that use of these rules contributed to the high EE rate found in PA-1st subjects and parallels the condition by order effect observed on this measure, for many subjects "stuck to their guns" with these erroneous formulations.

However, many of the articulated rules proved to be somewhat less than stable. This is especially true of the PA-1st subjects. As Table 5 shows, the five subjects whose only experience to date was with PA learning explicitly contradicted themselves a total of 23 times, while all the other conditions together produced only 21 such instances. While some of these contradictions went unnoticed by the subjects and were due to simple forgetting of a previously stated rule, many were conscious and agonizing retractions of previous statements. For example, several (and only) PA-1st subjects made remarks such as "An M-sandwich (the subject's term for an item beginning and ending with M)! I know the last time I saw that I said no to it. But this just looks right. Perhaps you can have an M-sandwich. I'll take it this time". Such conflicts
between articulated and implicit knowledge contributed to the inflation of contradictory rules in some PA subjects.

These order effects lend support to the interpretation that the PA task fosters a "memorize items and/or discriminable differences between items" strategy which has consequences for performance on the well-formedness task. We have already noted that the resultant relatively good memory for learning set items and their elements, combined with only meagre structural knowledge, encourages the use of analogic strategies. But it has other consequences. Because the PA subjects' memory for the learning task items is not complete, they erroneously reject many items which contain acceptable elements simply on the grounds that the elements are not in their memorial space. And because they are losing information from memory rather rapidly, their contradiction rate is relatively high and the error rate increases over the course of the well-formedness task. However, this cognitive behavior is not solely constrained by the structure of the learning task, for in subjects who have had experience with observation learning (PA-2nd) these behaviors are ameliorated. The reverse is true for the two groups of OBS subjects. Either experience with other cognitive strategies applied to the same kind of material or the simple knowledge that the material is rule-governed (and it is impossible to distinguish between these two possibilities at this point) in some way affects one's behavior just as surely as does the structure of the learning task itself.
Discussion

The argument has been put forward (Brooks, 1974, 1978; Baron, 1977) that analogic procedures may account for much of concept identification, particularly in natural situations. We concur on this up to a point, for even with stimulus materials as far removed from natural concept instances as we have used here our subjects were frequently observed in the use of such strategies. However, there are very likely many cases of natural learning and decision making situations involving conceptual functions where analogic strategies are at least in principle nonoptimal and probably not favored by the learner. Natural language acquisition seems to be the most obvious case in point. When one asks a fluent speaker of a language whether a certain construction is acceptable or not, one rarely if ever obtains judgements like "No, that doesn't remind me of anything I've heard before". Ungrammatical sentences are met with a vague kind of unease, a sense that something is wrong, and this experience is felt even in situations where one can "make sense" of the
message³. If pressed to rationalize this experience, an informant may be capable of identifying the aspect of the sentence that he feels is nongrammatical and can occasionally specify what needs to be done with it to make it acceptable. These intuitive senses of grammaticality and nongrammaticality, with occasional verbal explications, are the kinds of responses which dominated in our synthetic grammar learning procedure, but primarily following the unconstrained, observation acquisition experience. It seems clear that when the stimulus environment is rule governed, when the pool of learning exemplars is large, and when there are no immediate task demands, the optimal and favored mode is an abstraction of structure, be it conscious or not. However, there are clearly some very delicate interactions here between the type of stimulus material to be learned, the constraints of the acquisition task, and the resulting cognitive modi operandi that the learner engages⁴.

While it is certainly true that our subjects performed quite well on those items where an analogic strategy was used, in general its use was associated with poorer overall performance than the implicit abstraction procedures; subjects do much better when left to their own devices. Moreover, while the analogic strategy may coexist with implicit rule induction, it seems to partially block its deployment. The memorize-and-analogize procedures tend to divert the subject from the deeper processing of material which produces the more compelling representation and the more efficient decision making about novel instances.

In summary, knowledge of a complex stimulus environment as exemplified by our synthetic grammars is accessible through more than one type of acquisition procedure, but depth of knowledge will be dependent upon the type(s) of procedure used and the extent to which they are employed. First, it is clear that all subjects do induce some explicit, consciously derived and held rules. The existence of several high frequency letter pairs in the grammars invites such activity, and this type of learning was observed despite differences in task structure. However, this highly accurate and explicit analytic procedure is, from our point of view, uninteresting, and it accounts for a distinct minority of our subjects' judgements.

A second learning and decision making procedure is most easily describable as "analogic". This procedure, characterized by an extensive and relatively accurate memory for individual items and frequent use of "reminds me of"
justifications is nonanulytic in Brooks’ (1978) sense in that subjects do not respond to the molecular elements of the stimuli and implicit in our sense in that they are not consciously aware of the aspects of the stimuli which lead them to their decisions. This procedure, while it allows for rather accurate assessments of test items actually seen before (“old” items), was associated with relatively poor knowledge of structure, a high rate of erroneous rejection of letter sequences not contained in memory, and rapid disintegration of performance over the course of the well-formedness task. Importantly, the use of this strategy was limited almost exclusively to the PA task. Only when memorizing exemplars for the purpose of pairing them with an essentially unrelated verbal response did subjects engage in analogic decision making. This finding is interesting in light of Baron’s (1977) evidence for use of an analogy procedure in reading nonsense words. Since English spelling is relatively irregular for an alphabetic system, a large proportion of common English words have to be taught to beginning readers by a paired-associate method. That is, in addition to phonemic attack skills, the six-year-old receives extensive exposure to a look-say method which enables him to learn some of our more archaically spelled words. It would thus be interesting to see if one obtains the same analogy-type justifications from readers of a more regular writing system, say German, who have had much less exposure to an essentially paired-associate learning method. In contrast, one might expect to obtain a much larger proportion of analogic justifications from readers of a logographic orthography such as Chinese where the instruction is almost purely of the PA type. Finally, there is the procedure which we have been calling implicit leurning. It is characterized by a relatively passive apprehension mode during acquisitioll, relatively high accuracy in assessing grammaticalness of letter strings even when a concrete justification cannot be supplied for the decision, performance that does not deteriorate over time, and knowledge that is structural rather than instance specific. Although some of this type of learning appeared to take place under the PA procedure, it was primarily associated with the unstructured OBS task. Type of learning and decision making procedure deployed, however, was not entirely constrained by stimulus material and learning condition. Expectations of task demands and experience with other learning procedures were also found to affect differential deployment of processing skills. The subject who knows that the material is rule governed, that he will be asked to make grammaticality decisions, and that he learned a grammar quite well enough three weeks ago by “doing nothing”, will act somewhat differently from the subject who enters the PA task naive. He will, in effect, distrust the analogy procedure the learning task leads him to use, and so use it in a very discrimi-
way in making his decisions about grammaticality. He knows that not remembering seeing a specific bigram before does not necessarily mean that it is unacceptable. He will trust his intuitive feelings of structure somewhat more. In contrast, the subject who enters an observation task after exposure to the PA condition will tend to rely on the memorize-and-analogize procedure somewhat more than if he entered the observation task naive. He will therefore make a few of the same kinds of errors that he made some weeks before.
Our data, in short, suggest a functionalist interpretation of performance in experimental and, we hope, many natural learning situations. Our subjects have at their disposal several procedures for gaining knowledge and using it to operate in the world. These procedures will be deployed differentially according to the type of material to be learned, the way in which it is presented, one's expectations about the task, and one's previous success with these procedures. For example, the normal reader of English knows many rules about pronunciation patterns, and he knows most of them implicitly. But he also uses an analogy strategy, or he would never pronounce the nonsense word "hamb" as if it were analogous to "lamb". The first grade reading teacher uses both these procedures in his own reading, but he also knows some of the patterning rules explicitly, because he knows that he will have to explain them to young children.
It is possible, in principle, to account for much of concept formation by memorization of instances and subsequent analogy. However, it is our conclusion that while such a strategy may provide an organism with the kinds of cognitive armamentoria for successfully classifying beasts that remind it of Lassie as "dogs", it may give it some very strange ideas about conceptual structures like grammars.
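The kind of rule-governed stimulus environment at issue can be made concrete with a small sketch. The transition table below is invented for illustration - it is not the grammar used in these experiments - but it has the general form of such finite-state systems: one routine generates well-formed letter strings, another decides well-formedness.

```python
# A minimal sketch of a finite-state ("synthetic") grammar. The transitions
# here are illustrative only, not those of Reber and Allen's studies.
import random

# Each state maps to the (letter, next_state) moves the grammar allows.
GRAMMAR = {
    0: [("T", 1), ("V", 2)],
    1: [("P", 1), ("T", 3)],
    2: [("X", 2), ("V", 3)],
    3: [("S", None), ("X", 1)],   # None marks a legal exit from the grammar
}

def generate():
    """Generate one well-formed letter string by a random walk through the grammar."""
    state, letters = 0, []
    while state is not None:
        letter, state = random.choice(GRAMMAR[state])
        letters.append(letter)
    return "".join(letters)

def well_formed(string):
    """Decide grammaticality by searching the transition table."""
    def step(state, rest):
        if state is None:
            return rest == ""
        return rest != "" and any(
            step(nxt, rest[1:]) for (ltr, nxt) in GRAMMAR[state] if ltr == rest[0]
        )
    return step(0, string)

print(generate())            # e.g. 'TPTS'
print(well_formed("TPTS"))   # True
print(well_formed("TSPX"))   # False
```

On the present account, an observation subject exposed to many outputs of generate() comes to approximate the behavior of well_formed() without ever being able to state the transition table.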
References
Baron, J. (1977) What we might know about orthographic rules. In S. Dornic (Ed.), Attention and Performance VI. Hillsdale, N.J.: Lawrence Erlbaum.
Bever, T. G. (1970) The cognitive basis for linguistic structures. In J. R. Hayes (Ed.), Cognition and the Development of Language. N.Y.: Wiley.
Brooks, L. R. (1974) Implicit learning and rule statements in rule-learning experiments. Paper presented at the Meetings of the Psychonomic Society, Boston.
Brooks, L. R. (1977) Visual pattern in fluent word identification. In A. S. Reber and D. L. Scarborough (Eds.), Toward a Psychology of Reading: The Proceedings of the CUNY Conferences. Hillsdale, N.J.: Lawrence Erlbaum.
Brooks, L. R. (1978) Non-analytic concept formation and memory for instances. In E. Rosch (Ed.), Cognition and Concepts. Hillsdale, N.J.: Lawrence Erlbaum.
Chomsky, N. and Miller, G. A. (1958) Finite state languages. Information and Control, 1, 91-112.
Gibson, J. J. (1966) The Senses Considered as Perceptual Systems. Boston: Houghton-Mifflin.
Gibson, J. J. An Ecological Approach to Visual Perception. To be published by Houghton-Mifflin.
Gleitman, L. R. and Gleitman, H. (1970) Phrase and Paraphrase: Some Innovative Uses of Language. N.Y.: W. W. Norton Co.
Kantowitz, B. H. (1971) Information versus structure as determinants of pattern conception. J. exper. Psychol., 89, 282-292.
Kassin, S. M. and Reber, A. S. Locus of control and the learning of an artificial language. J. Res. Personal., in press.
Lewis, S. (1975) Implicit and explicit learning of an artificial language. Unpublished PhD Dissertation, City University of New York.
Miller, G. A. (1967) Project grammarama. In The Psychology of Communication: Seven Essays. N.Y.: Basic Books.
Millward, R. B. Models of concept formation. In preparation.
Millward, R. B. and Reber, A. S. (1972) Probability learning: Contingent-event sequences with lags. Amer. J. Psychol., 85, 81-98.
Piaget, J. (1929) The Child's Conception of the World. N.Y.: Harcourt Brace.
Polanyi, M. (1966) The Tacit Dimension. Garden City, N.Y.: Doubleday.
Posner, M. I. (1969) Abstraction and the process of recognition. In G. H. Bower and J. T. Spence (Eds.), The Psychology of Learning and Motivation, Vol. 3. N.Y.: Academic Press.
Pylyshyn, Z. W. (1973) What the mind's eye tells the mind's brain: A critique of mental imagery. Psychol. Bull., 80, 1-24.
Reber, A. S. (1967) Implicit learning of artificial grammars. J. verb. Learn. verb. Behav., 6, 855-863.
Reber, A. S. (1969) Transfer of syntactic structure in synthetic languages. J. exper. Psychol., 81, 115-119.
Reber, A. S. (1976) Implicit learning of synthetic languages: The role of instructional set. J. exper. Psychol.: Hum. Mem. Learn., 3, 88-94.
Reber, A. S. and Lewis, S. (1977) Implicit learning: An analysis of the form and structure of a body of tacit knowledge. Cog., 5, 333-361.
Reber, A. S., Lewis, S. and Kassin, S. M. Implicit learning: Structural salience and instructions to learn interact. In preparation.
Reber, A. S. and Millward, R. B. (1968) Event observation in probability learning. J. exper. Psychol., 77, 317-327.
Reber, A. S. and Millward, R. B. (1971) Event tracking in probability learning. Amer. J. Psychol., 84, 85-99.
Ryle, G. (1949) The Concept of Mind. London: Hutchinson.
Shaw, R. and Pittenger, J. (1977) Perceiving the face of change in changing faces: Implications for a theory of object perception. In R. Shaw and J. Bransford (Eds.), Perceiving, Acting, and Knowing. Hillsdale, N.J.: Lawrence Erlbaum.
Verbrugge, R. R. (1977) Resemblances in language and perception. In R. Shaw and J. Bransford (Eds.), Perceiving, Acting, and Knowing. Hillsdale, N.J.: Lawrence Erlbaum.
Wickens, D. D. (1972) Characteristics of word encoding. In A. W. Melton and E. Martin (Eds.), Coding Processes in Human Memory. N.Y.: Wiley.
Résumé
Subjects had to learn artificial grammars under two modes of acquisition: paired-associate learning or the observation of exemplars.
The PA procedure is quite clearly associated with the memorization of specific items and with the use of an analogic strategy when decisions must be made about novel stimuli. The observation procedure leads to the induction of an abstract representation of the grammatical rules and to the use of a correspondence strategy in decision making. Moreover, this procedure yields more durable knowledge and better performance. Analyses of exemplars, of objective responses and of subjective introspections provide data supporting this distinction. The relations between mode of acquisition and cognitive strategy are discussed from a functionalist point of view.
Cognition, 6 (1978) 223-228
On a conceptual hierarchy of time, space, and other dimensions*¹
DAVID NAVON
University of Haifa
Abstract

Several observations about the way humans conceive of attributes, changes and covariation of stimuli are presented as indications for the existence of a conceptual hierarchy of dimensions in which time dominates space, and space dominates every other dimension.

Consider the issue of how we apprehend stimuli that vary along several dimensions. We may often overlook some aspects, and sometimes be unable to ignore others. But even within the aspects to which we do attend, some may be more salient or dominant or "psychologically present" than others. Various empirical phenomena may be interpreted to indicate that such dominance relationships exist. For example, Garner (1976) suggests that if the speed of sorting a set of stimuli by one dimension is affected by variation on the other dimension but not vice versa, the second one dominates in some sense the first one. One may interpret this phenomenon as indicating some sort of hierarchy in processing or representation of different dimensions. That is to say, the apparent asymmetry of effect may arise from temporal order of processing or from depth of processing (see Craik and Lockhart, 1972) or from some sort of lexicographic system of representation. To illustrate, if size of visual stimuli has priority over brightness it might be because information about size reaches our awareness earlier than brightness information, or because more processing effort goes into evaluation of size than into processing of brightness, or because size is more major than brightness as a classification index in our mental image of the world. Of course, the meaning of a hierarchy depends on the operational procedure employed to establish its existence. Different procedures may reveal different hierarchies, although the hope is that they will converge on just one.
*Requests for proofs should be sent to: Dr. David Navon, Department of Psychology, University of Haifa, Haifa, Israel.
¹I am indebted to comments made by Gita Ben-Dov, Chasida Ben-Tzur, James Levin, Benny Shanon and two anonymous referees.
0010-0277/78/0006-0223$2.25 © Elsevier Sequoia S.A., Lausanne. Printed in the Netherlands
I suggest that certain dominance criteria proposed here can serve to identify a psychological hierarchy of three levels: (a) the time dimension, (b) the spatial dimensions, and (c) all other dimensions.

Orderliness

Consider what the partial order among dimensions imposed by the mind may entail about order of stimuli within dimensions. When dimensions are such that stimuli can be completely ordered by each of them, then for any particular set of stimuli, any two dimensions may or may not correlate. We tend to have a feeling of "order" or "lawfulness" in a set of two-dimensional stimuli (or more precisely, a set of multidimensional stimuli that vary on just two dimensions) whenever the two dimensions happen to be fairly correlated. The lower the correlations among the dimensions are, the more "random" or "unordered" the set looks to us. Had all dimensions had equal status, we would probably have been unable to pin down the locus of that unlawfulness; after all, why say that it is y which fails to correlate with x rather than the reverse? In that case stimuli will be perceived as a collection of unordered points in the psychological plane spanned by the dimensions x and y (Fig. 1A). For example, the perception of a set of identical circles scattered randomly on the plane of a blackboard appears to be of that sort. However, we often tend to take for granted the order of stimuli along one dimension and to attribute randomness or orderliness to the other one. Figure 1B represents the case in which stimuli appear unordered just with respect to the dimension y. Figure 1C represents the case in which the "unordered" dimension is x. That phenomenon may result from a difference in the status of the two dimensions in the hierarchy: either x is more major than y as in Figure 1B, or vice versa as in Figure 1C.
Consider, for example, a line of circles of various diameters uncorrelated with their position on the horizontal axis. If someone were to describe in everyday terms an array like this one, we would accept a statement like "The size of the circles is random" or "The circles are not ordered according to size" as intelligible though not fully precise. In contrast, we would probably have a hard time understanding a statement such as "The position of the circles is random" or the like. Thus, the aligned circles illustrate the case represented in Figure 1B where x stands for horizontal position and y stands for diameter. The size dimension is blamed exclusively for the apparent "disorder". This suggests that horizontal position precedes size in the hierarchy of dimensions, at least in the sense discussed here.
When stimuli do correlate, the correlation may be interpreted as a function relating the minor dimension to the major one. This interpretation is reflected
Figure 1. Three possible interpretations of a set of stimuli varying on two uncorrelated dimensions. See text for explanation. [Figure: three scatter-plot panels, A, B and C, each plotting the stimuli in the x-y plane.]
by the fact that we choose to describe such stimuli as ordered by the minor dimension, as if it were the only source of order. For example, if the size of the circles is correlated with their horizontal position we would describe them as ordered by size rather than by horizontal position.
In a similar fashion it can be demonstrated that spatial position dominates almost every other dimension. But why just almost? Because the spatial dimensions appear to be dominated by the temporal dimension. For example, a sequence of lights flashing successively at various locations whose horizontal coordinates are uncorrelated with time is always viewed as random with respect to spatial position along the horizontal axis. We never conceive of the possibility of taking space as the ultimate standard for order, and of inferring the "degree of order" of time variation from its correlation with spatial variation. Since time precedes location, and location seems to precede any other dimension except time, it follows by transitivity that time precedes every other dimension as well. And, indeed, imagine that lights flash in sequence at the same location but with variable intensity. If intensity is uncorrelated with time, it is intensity which will always be judged as the haphazard dimension.
Thus, if we conceive of "orderliness" as an asymmetric interpretation of covariation exhibited by a number of points in the psychological multidimensional space, and diagnose asymmetry of dimensions by the tendency to attribute orderliness or randomness to just one of them, then the temporal dimension appears to dominate every other dimension and the spatial dimensions dominate every other dimension except time.
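The asymmetry can be given a rough operational form. In the sketch below (the data and dimension names are invented for illustration), the stimuli are first ordered by the dominant dimension, horizontal position, and orderliness is then assessed - and randomness attributed - only with respect to the minor dimension, size; we never run the test the other way round.

```python
# A small sketch of the "orderliness" criterion: order the stimuli by the
# dominant dimension and ask whether the minor dimension is monotone.
# Data values are invented for illustration.

circles = [(1.0, 5), (2.0, 2), (3.0, 9), (4.0, 1)]  # (position, diameter)

by_position = sorted(circles)                 # position is taken for granted
diameters = [d for (_, d) in by_position]
ordered = all(a <= b for a, b in zip(diameters, diameters[1:]))

# Any disorder is attributed entirely to the minor dimension:
print("ordered by size" if ordered else "size of the circles is random")
```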
Change
In the same way that covariation exhibited by a collection of points in the multidimensional psychological space may be interpreted asymmetrically, so may a single connected region corresponding to a single object. There is nothing in any multidimensional solid in itself to suggest that the dimensions spanning the space in which it resides have different status in any sense. Yet the description of such a solid in terms of changes implies that one dimension is viewed as a function of another one (e.g., altitude as a function of position, position as a function of time). It seems unlikely that this is just a matter of convenience of description, because the pattern of dimension asymmetry exhibited by human usage of words of change is quite consistent, and furthermore, is congruent with the pattern observed in judgments of orderliness.
The priority of time over space is reflected by the fact that language gives time-relations greater scope than that of location markers in sentences which describe variation over time and space; e.g., we might say "The ball was at the right corner after it was at the left corner" but never "The ball was at t2 to-the-right of where it was at t1". In other words, we tend to describe motion or displacement as a temporal sequence of locations rather than as a spatial string of moments. This asymmetric interpretation of variation in time and space is also reflected in our understanding of movement verbs. For example, Webster's New Collegiate Dictionary (1973) defines "to move" as "to... pass from one place to another...". In a similar way, numerous verbs denote covariation of some attribute with time (e.g., to grow, to mature, to fade, etc.); however they are always interpreted as describing variation of that attribute over time. To illustrate, all the definitions of "to fade" in Webster's Dictionary imply this interpretation by not even mentioning time (e.g., "to lose freshness or brilliance of color"). Fewer verbs stand for changes of some attributes as a function of space (e.g., to shade, to slope). But I have not found any verb that could be interpreted to denote changes of time or space as a function of some other dimension.
Scope of markers

The way we describe attributes of objects also follows the same dimension asymmetries presented above. The truth of a statement about the value of an object on a certain dimension may be stipulated on values on other dimensions. When that occurs, it is always the value of the more minor dimension which is stated to be stipulated on the values of the more major ones. Obviously, statements about properties of objects, say color, may be restricted by spatial and temporal markers (e.g., "The sea is turquoise near the
island of Hydra"), whereas the reverse phrasing makes no sense ("The sea is near the island of Hydra when it is turquoise"). Statements about location of objects may be marked for time (e.g., "The sun is at zenith at noon"), whereas we do not state the time of objects marked for location ("The sun is at noon when at zenith"). By the same token, we say "Only one object can occupy a position at a certain time" rather than "Only one object can occupy a certain time-point at a given position", although the latter sentence would have been equally logical had we not viewed the dimensions of the world in an asymmetrical manner.
A related fact is the observation that whereas our concept of "now" seems to elude any attempt at a definition which is not circular, our concept of "here" can be derived from the concept of "now": "here" may be defined as the location a certain distinguished object (viz. the ego) occupies now. Note that had we tried to reverse our view of the world and define "now" as "the time-point the ego occupies here", the present would not have been specified uniquely, since the ego may be "here" at more than one point in time.
Conclusion
Several observations suggest that our conception of the world (or of stimuli in the world) is not a multidimensional space in which all dimensions have equal status, but rather a hierarchy of dimensions, in which time occupies the first level and spatial dimensions occupy the second one. To recapitulate, the criteria used to establish this hierarchy were the following:
(a) If the order of stimuli covarying on two dimensions is attributed to just one dimension, this dimension is regarded as dominated by the other one.
(b) If objects are said to undergo change of one dimension over a second one but not vice versa, the first is viewed as dominated by the second.
(c) If objects are said to assume values of one dimension at a given value of a second one but not vice versa, the first one is viewed as dominated by the second.
Note that the order established by these criteria need not be congruent with the order of importance or memorizability for any particular set of stimuli. For example, there are many cases in which information about time of appearance of stimuli is useless compared with other properties, so it is unlikely to be processed thoroughly, retained, or retrieved. However, whenever all information is available to our mind, it will probably be represented in the hierarchical fashion described here. Thus, the hierarchy discussed here is a conceptual one. To what extent it correlates with dimensional priority in attention and memory is yet to be determined.
References
Craik, F. I. M. and Lockhart, R. S. (1972) Levels of processing: A framework for memory research. J. verb. Learn. verb. Behav., 11, 671-684.
Garner, W. R. (1976) Interaction of stimulus dimensions in concept and choice processes. Cog. Psychol., 8, 98-123.
Webster's New Collegiate Dictionary (1973) Springfield, Mass.: G. & C. Merriam Company.
Résumé
A number of observations on the way humans conceive of the attributes, changes and covariations of stimuli provide indications of the existence of a conceptual hierarchy of dimensions in which time dominates space, which in turn dominates all the other dimensions.
Cognition, 6 (1978) 229-247
Discussions
Tom Swift and his procedural grandmother*
J. A. FODOR
Massachusetts Institute of Technology
1. Introduction

Rumor has it that, in semantics, AI is where the action is. We hear not only that computational (hereafter 'procedural') semantics offers an alternative to the classical semantics of truth, reference and modality¹, but that it provides what its predecessor so notably lacked: clear implications for psychological models of the speaker/hearer. Procedural semantics is said to be 'the psychologist's' theory of meaning, just as classical semantics was 'the logician's'. What's bruited in the by-ways is thus nothing less than a synthesis of the theory of meaning with the theory of mind. Glad tidings these, and widely credited. But, alas, unreliable. I shall argue that, soberly considered:
(a) The computer models provide no semantic theory at all, if what you mean by a semantic theory is an account of the relation between language and the world. In particular, procedural semantics doesn't supplant classical semantics, it merely begs the questions that classical semanticists set out to answer. The begging of these questions is, of course, quite inadvertent; we shall consider at length how it comes about.
(b) Procedural semantics does provide a theory about what it is to know the meaning of a word. But it's not a brave new theory. On the contrary, it's just an archaic and wildly implausible form of verificationism. Since practically nobody except procedural semanticists takes verificationism seriously any more, it will be of some interest to trace the sources of their adherence to the doctrine.
(c) It's the verificationism which connects the procedural theory of language to the procedural theory of perception. The consequence is a view of the relation between language and mind which is not significantly different from that of Locke or Hume. The recidivism of PS theorizing is among the most striking of the ironies now to be explored.
*Requests for reprints should be addressed to J. A. Fodor, Department of Psychology, MIT, E10-034, Cambridge, Mass. 02139, U.S.A.
¹By 'classical semantics' I'll refer to the tradition of Frege, Tarski, Carnap and contemporary model theory. What this picks out is at best a loose consensus, but it will do for the purposes at hand.
0010-0277/78/0006-0229$2.25 © Elsevier Sequoia S.A., Lausanne. Printed in the Netherlands
2. Use and mention

Perhaps the basic idea of PS is this: providing a compiler for a language is tantamount to specifying a semantic theory for that language. As Johnson-Laird (1977) puts it, the "...artificial languages which are used to communicate programs of instructions to computers, have both a syntax and a semantics. Their syntax consists of rules for writing well-formed programs that a computer can interpret and execute. The semantics consists of the procedures that the computer is instructed to execute (page 189)". Johnson-Laird takes this analogy between compiling and semantic interpretation quite seriously: "...we might speak of the intension of a program as the procedure that is executed when the program is run... (page 192)". And Johnson-Laird is not alone. According to Winograd (1971) "the program [which mediates between an input sentence and its procedural translation] operates on a sentence to produce a representation of its meaning in some internal language... (page 409; my emphasis)".
Now, there is no principled reason why this analogy between compiling and semantic interpretation should not be extended from artificial languages (for which compilers actually exist) to natural languages (for which semantic theories are so badly wanted). If compilers really are species of semantic theories, then we could provide a semantics for English if only we could learn to compile it. That's the PS strategy in a nutshell. Since the strategy rests upon the assumption that the compiled (e.g. Machine Language)² representation of a sentence is eo ipso a representation of its meaning, it will pay to consider this assumption in detail. It will turn out that there's a sense in which it's true and a sense in which it's not, and that the appearance that PS offers a semantic theory depends largely upon sliding back and forth between the two.
To begin cheerfully: if we take programming languages to be the ones that we are able to compile (thereby excluding natural languages de facto but not in principle), then there is a sense in which programming languages are ipso facto semantically interpreted. This is because compiling is translation into
²Compiling doesn't, normally, take you directly into ML; normally there are inter-levels of representation between the language in which you talk to the machine and the language in which the machine computes. This doesn't matter for our purposes, however, since compiling won't be semantic interpretation unless each of the representations in the series from the input language to ML translates the one immediately previous; and translation is a transitive relation. That is: if we can represent English in a high-level language (like PLANNER, say) and if we can represent PLANNER in ML, then we can represent English in ML. Conversely, if our procedural representation of an English sentence eventuates in ML (as it must do if the machine is to operate upon the representation) then we gain nothing in point of semantic interpretation in virtue of having passed through the intermediate representations; though, of course, we may have gained much in point of convenience to the programmer.
ML and ML is itself an interpreted language. If, then, we think of a compiler for PLi as a semantic theory for PLi, we are thinking of semantic interpretation as consisting of translation into a semantically interpreted language. Now, this needn't be as circular as it sounds, since there might be (indeed, there clearly are) two different notions of semantic interpretation in play. When we speak of ML as an interpreted language, we have something like the classical notion of interpretation in mind. In this sense of the term, an interpretation specifies denotata for the referring expressions, truth conditions for the sentences (or, better, 'compliance conditions', since many of the sentences of ML are imperative) etc. Whereas, when we speak of interpretation for PLi, we have in mind (not classical interpretation but) translation into a language that is itself classically interpreted (e.g. translation into ML). It's entirely possible that this is the right notion of interpretation for a natural language; that, for one reason or another, it would be a bad research strategy to attempt to classically interpret a natural language 'directly' and a good research strategy to attempt to interpret it 'mediately' (viz. via its translation into some other, classically interpreted, formalism). This idea is quite widespread across the disciplines, the mediating language being called 'logical syntax' in philosophy, 'internal representation' in psychology, and 'ML' in AI. I'm inclined to endorse this strategy, and I'll return to it towards the end of this paper. But what's important for present purposes is this: if we see the PS strategy as an attempt to interpret English by translating it into a classically interpreted ML, then we see straight off that PS isn't an alternative to classical semantics. Rather, PS is parasitic on CS. Translation into ML is semantic interpretation only if ML is itself semantically interpreted, and when we say of ML that it is semantically interpreted, we mean not that ML is translated into a semantically interpreted language, but that ML is classically semantically interpreted. Translation has got to stop somewhere. So, there's a sense in which compiling is (or might be) semantic interpretation. Hence, there's a sense in which a programming language 'has a semantics' if it is compiled. That's what is right about the PS approach. But there is also a sense in which compilable languages are not ipso facto interpreted; a sense in which compiling need not - and normally does not - give the 'intension' of a program. The point is that the interpretation assigned a PL sentence by the compiler is not, normally, its 'intended interpretation'; it's not, normally, the interpretation that specifies what the sentence means. One might put it even stronger: machines typically don't know (or care) what the programs that they run are about; all they know (or care about) is how to run their programs. This may sound cryptical or even mystical. It's not. It's merely banal.
You can see the point at issue by considering an example (suggested to me by Prof. Georges Rey, in conversation). Imagine two programs, one of which is a simulation of the Six Day War (so the referring expressions designate, e.g., tank divisions, jet planes, Egyptian soldiers, Moshe Dayan, etc. and the relational terms express bombing, surrounding, commanding, capturing, etc.); and the other of which simulates a chess game (so the referring expressions designate knights, bishops, pawns, etc. and the relational terms express threatening, checking, controlling, taking, etc.). It's a possible (though, of course, unlikely) accident that these programs should be indistinguishable when compiled; viz. that the ML counterparts of these programs should be identical, so that the internal career of a machine running one program would be identical, step by step, to that of a machine running the other. Suppose that these programs were written in English. Then it's possible that the sentence "Israeli infantry surround a tank corps" and the sentence "pawns attack knight" should receive the same ML 'translations'. Yet, of course, the sentences don't mean anything like the same; the compiled versions of the sentences don't, therefore, specify their intensions. Or, if you're dubious about intensions, then: the sentences don't have the same compliance conditions (if they're imperatives) or truth conditions (if they're declaratives); and there is nothing in common between the entities that their referring expressions designate or the relations that their predicates express.³
In the sense of "interpretation" we are likely to have in mind when we speak pretheoretically of a semantic theory as interpreting English sentences, these sentences are not interpreted when they are compiled. Equivalently: the programmer knows the interpretation; he knows which interpretation he intends, and, probably, he cares about the program largely because he cares about the (intended) interpretation. But the machine doesn't know the intended interpretation. Nor does it care what the intended interpretation is, so long as it knows the interpretation in ML.
How could it be that a compiled sentence is interpreted in one sense yet not interpreted in the other? The answer has partly to do with the nature of computers, and partly to do with the point of compiling. Computers are devices for doing jobs we set; in particular, devices for following orders. What we need for instructing a computer is a language in which we can tell it what jobs we want it to do. What the computer needs to do its job is instructions in a language that it can understand.
³Well, of course, there has to be something in common; the two programs have to be congruent in some sense; else why should they go over into the same ML representation? But what is common surely isn't the intended interpretation of the sentences qua sentences of English. It isn't what they mean.
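The war/chess example can be put schematically in a few lines. Everything in the sketch below is invented - the 'machine-language' tokens, the lexicons, the compilation rule - but it displays the logical point: two sentences about entirely different subject matters can receive identical compiled forms, so the compiled form cannot be what either sentence means.

```python
# Toy illustration only (all names invented): two "programs" about different
# subject matters can compile to identical machine-level token sequences.

# Surface vocabularies from two domains, each mapped to machine-level tokens.
WAR_LEXICON   = {"infantry": "R1", "surround": "OP7", "tank_corps": "R2"}
CHESS_LEXICON = {"pawns": "R1", "attack": "OP7", "knight": "R2"}

def compile_sentence(words, lexicon):
    """'Compile' a sentence by replacing each word with its machine token."""
    return tuple(lexicon[w] for w in words)

war_ml   = compile_sentence(["infantry", "surround", "tank_corps"], WAR_LEXICON)
chess_ml = compile_sentence(["pawns", "attack", "knight"], CHESS_LEXICON)

# Identical machine-level forms, entirely different meanings:
assert war_ml == chess_ml == ("R1", "OP7", "R2")
```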
Machine Language satisfies both these conditions. In one way of looking at things, it is a very powerful language since, in a useful sense of "precisely", any job that can be specified precisely can be specified in ML (assuming, as I shall do throughout, that ML is at least rich enough to specify a Turing machine). But, looked at another way - and this is the essential point - it is a very weak language. For, the things you can talk about in ML are just the states and processes of the machine (viz. the states and processes that need to be mentioned in specifying its jobs). So, the referring expressions of ML typically name, for example, addresses of the machine, formulae stored or storable at addresses, etc. And the relational terms typically express operations that the machine can perform (e.g. writing down symbols, erasing symbols, comparing symbols, moving symbols from address to address, etc.). Whereas, prima facie, the semantic apparatus of a natural language is incomparably richer. For, by using a natural language, we can refer not just to symbols and operations on them, but also to Moshe Dayan and tank divisions, cabbages, kings, harvestings, and inaugurations. Of course, it's conceptually possible that anything that can be said in a prima facie rich language like English can be translated into (not just paired with) formulae of ML. But, patently, from the mere assertion that English might be compiled in ML and that compiled languages are ipso facto interpreted, it does not follow that the meaning of English sentences can be expressed in ML. A fortiori it does not follow that a compiler for English would ipso facto capture (what I'm calling) the intended semantic interpretations of the sentences of English.
Why, then, do procedural semanticists think that compiling is an interesting candidate for semantic interpretation? I think part of the answer is that, when procedural semanticists talk informally about the compiled representation of English sentences, they allow themselves to forget that ML is interpreted solely for machine states, processes, etc. and speak as though we knew how to interpret ML for much richer semantic domains; in fact, for the world at large. That is, they talk as though the semantics of ML constituted a theory of language-and-the-world, whereas, in fact, it provides only a theory of language-and-the-insides-of-the-machine. This equivocation is very widespread indeed in the PS literature. Thus, for example, Winograd remarks that (in PLANNER) "...we can represent such facts as 'Boise is a city'. and 'Noah was the father of Jafeth'. as: (CITY BOISE) (FATHER-OF NOAH JAFETH). Here, BOISE, NOAH and JAFETH are specific objects, CITY is a property which objects can have, and FATHER-OF is a relation" (page 207).⁴ To which one wants to reply: 'who says they are?' In particular, neither Winograd (nor PS at large) supply anything remotely
⁴I have left Winograd's citation conventions as I found them.
resembling an account of the relation between a symbol (say, 'BOISE') and a city (say, Boise) such that the former designates the latter in virtue of their standing in that relation. Nor, of course, does the eventual ML translation of 'BOISE IS A CITY' provide the explication that's required. On the contrary, ML has no resources for referring to Boise (or to cities) at all. What it can do (and all that it can do that's relevant to the present purposes) is refer to the expression 'BOISE' and say of that expression such things as that it appears at some address in the machine (e.g. at the address labelled by the expression 'CITY'). But, of course, the sentence (capturable in ML) "the expression 'BOISE' appears at the address CITIES" is not remotely a translation of the sentence "Boise is a city". To suppose that it is is to commit a notably unsubtle version of the use/mention fallacy.
Or, consider Miller and Johnson-Laird (1976). They provide an extended example (circa page 174) of how a procedural system might cope with the English question: 'Did Lucy bring the dessert?' The basic idea is that the question is "translated" into instructions in accordance with which "[episodic memory] is searched for memories of Lucy..." The memories thus recovered are sorted through for reference to (e.g.) chocolate cakes, and how the device answers the question is determined by which such memories it finds. The details are complicated, but they needn't concern us; our present point is that the authors simply have no right to this loose talk of memories of Lucy and internal representations of chocolate cakes. This is because nothing is available in their theory to reconstruct such (classical) semantic relations as the one that holds between 'Lucy' and Lucy, or between 'chocolate cake' and chocolate cakes. Strictly speaking, all you can handle in the theory is what you can represent in ML. And all you can represent in ML is an instruction to go to the address labelled 'Lucy' (but equally well - and perhaps less misleadingly - labelled, say, '#959' or 'BOISE' or 'The Last Rose of Summer') and see if you find at that address a certain formula, viz. the one which is paired, under compiling, with the English predicate 'brought a cake'. Of course, the theorist is welcome - if he's so inclined - to construe the formulae he finds at Lucy (the address) as information about Lucy (the person); e.g. as the information that Lucy brought a cake. But PS provides no account of this aboutness relation, and the machine which realizes the Lucy-program has no access to this construal. Remember that, though ML is, in the strict sense, an interpreted language,⁵ the interpretation of 'Lucy' (the ML expression) yields as denotatum not Lucy-the-girl but only Lucy-the-address (viz. the address that the machine goes to when it is told to go to Lucy and it does what it is told). In short, the semantic relations that we care about, the one between the name and the person and the one between the predicate 'bring the dessert' and the property
of being a dessert bringer, just aren't reconstructed at all when we represent 'Did Lucy bring dessert?' as a procedure for going-to-Lucy-and-looking-for-formulae. To reconstruct those relations, we need a classical semantic theory of names and predicates, which PS doesn't give us and which, to quote the poet, in our case we have not got.
In effect, then, a machine can compile 'Did Lucy bring dessert?' and have not the foggiest idea that the sentence asks about whether Lucy brought dessert. For, the ML "translation" of that sentence - the formula which the machine does, as it were, understand - isn't about whether Lucy brought dessert. It's about whether a certain formula appears at a certain address. We get a clearer idea of what has happened if we forget about computers entirely; the brilliance of the technology tends to dazzle the eye. So, suppose somebody said: 'Breakthrough! The semantic interpretation of "Did Napoleon win at Waterloo?" is: find out whether the sentence "Napoleon won at Waterloo" occurs in the volume with Dewey decimal number XXX,XXX in the 42nd St. branch of the New York City Public Library'. So far as I can see, the analogy is exact, except that libraries use a rather more primitive storage system than computers do. "'But', giggled Aunt Martha, 'if that was what "Did Napoleon win at Waterloo?" meant, it wouldn't even be a question about Napoleon'. 'Aw, shucks', replied Tom Swift".

3. Verificationism

I have stressed the use/mention issue, not because I think that use/mention confusions are what PS is all about, but because, unless one avoids them, one can't get a clear view of what the problems are. However, I doubt that anybody really thinks that you can translate English into ML, taking the latter to be a language just rich enough to specify sequences of machine operations. If you put it that way (and that's the way you should put it), no procedural semanticist would endorse the program. What is, I believe, widely supposed is that you can translate English into an enriched ML; that there is some ML rich enough to say whatever you can say in English, and some machine complicated enough to use that language.
⁵In what sense is ML 'strictly interpreted'? It's not just that there exists (as it were Platonically) a consistent assignment of its formulae to machine states, processes, operations, etc., but that the machine is so constructed as to respect that assignment. So, for example, there is a consistent interpretation of ML under which the formula 'move the tape' is associated with the compliance condition moving the tape; and, moreover, it is a fact about the way that the machine is engineered that it does indeed move the tape when it encounters that formula. This parallelism between the causal structure of the machine and the semantics of ML under its intended interpretation is what makes it possible to 'read' the machine's changes of physical state as computations. Looked at the other way round, it's what chooses the intended interpretations of ML from among the merely consistent interpretations of ML.
So reconstructed, the PS research strategy is to find the language and build the machine. Of course, you don't want the language to be too rich; because (a) it's no news (and no help) that you can translate English into very rich languages like (e.g.) French (or, for that matter, English); and (b) translation isn't interpretation unless it's translation into a (classically) interpreted language, and if ML gets too rich, we won't know how to interpret it any more than we now know how to interpret English. That is, to repeat, the very heart of the problem. A compiler would be a semantic theory only if it took us into a very rich ML; i.e., into a language rich enough to paraphrase the sentences of English. But we do not know how to interpret very rich languages, and we do not have anything approaching a clue about what it is to be able to use a rich, interpreted language (e.g. what it is to be able to use 'Lucy' to refer to Lucy or what it is to be able to use the internal representation of chairs to think about chairs). PS provides us with no insight at all into either of these questions; indeed, precious few of its practitioners seem to understand that these are questions where insight is needed.
Nevertheless, I think that many procedural semanticists think that they have found a middle road. Suppose, in particular, that we equip our machine with sensory transducers and correspondingly enrich ML with names of the transducer input and output states. This enriched ML (which I'll call MLT) still talks about only machine states and processes, but now they are states and processes of something more than a computer: at best a robot and at worst a sort of creepy-feely. Let's assume, also, that MLT has linguistic mechanisms for introducing definitions; e.g. it has a list of dummy names and rules of elimination which allow it to replace complex expressions (including, of course, expressions which contain names of states of the transducers) by single expressions consisting of dummy names. The question is whether the machine language of this device is rich enough to translate English. It seems to me that many, many researchers in AI believe that the answer to this question is 'yes'. Indeed, I think that they had better think that since (a) compiling is translating only when the target language is classically interpreted; and (b) so far, there is no suggestion from PS about how to interpret a target language which is enriched beyond what you get by taking standard ML and adding the names of transducer states (together with the syntactic mechanisms of eliminative definition). So, if the procedural semanticists don't think that English can be paraphrased in MLT, they owe us some alternative account of how the target language is to be classically interpreted. And if they don't think the target language can be classically interpreted, they lose their rationale for taking compilers to be semantic theories in any sense of that notion.
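The definitional machinery just described - dummy names plus rules of elimination that bottom out in transducer vocabulary - can be sketched as follows. All of the names are invented for illustration; no actual PS system is being reproduced.

```python
# A toy version of the MLT definitional machinery: dummy names abbreviate
# complex expressions, and rules of elimination expand them until only
# transducer-state names remain. All names are invented.

TRANSDUCER_STATES = {"EDGE", "PLANE", "CONTACT"}   # primitive sensory vocabulary

DEFINITIONS = {                                     # dummy name -> definiens
    "SEAT":  ["PLANE", "CONTACT"],
    "LEG":   ["EDGE", "CONTACT"],
    "CHAIR": ["SEAT", "LEG", "LEG", "LEG", "LEG"],
}

def eliminate(expr):
    """Expand dummy names until the expression is pure transducer vocabulary."""
    out = []
    for term in expr:
        if term in TRANSDUCER_STATES:
            out.append(term)
        else:
            out.extend(eliminate(DEFINITIONS[term]))  # apply a rule of elimination
    return out

print(eliminate(["CHAIR"]))
# ['PLANE', 'CONTACT', 'EDGE', 'CONTACT', 'EDGE', 'CONTACT',
#  'EDGE', 'CONTACT', 'EDGE', 'CONTACT']
# The Empiricist bet is that every non-logical English word bottoms out this way.
```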
Be that as it may, the historical affinities of this program are all too clear. They lie, of course, with the Empiricist assumption that every non-logical concept is reducible to sensation concepts (or, in trendier versions, to sensation plus motor concepts) via coordinating definitions. I do not want to comment on this program besides saying that, Heaven knows, the Empiricists tried. Several hundred years of them tried to do without 'thing' language by getting reductions to 'sense datum' language. Several decades of verificationists tried (in very similar spirit) to do without 'theoretical' terms by getting reductions to 'observation' terms. The near-universal consensus was that they failed and that the failure was principled; they failed because it can't be done. (For an illuminating discussion of why it can't be done, see Chisholm (1957)).
My own view, for what it's worth, is that the permissible moral is a good deal stronger: not only that you can't reduce natural language to ML-plus-transducer language, but really, that you can't hardly reduce it at all. I think, that is, that the vocabulary of a language like English is probably not much larger than it needs to be given the expressive power of the language. I think that's probably why, after all those years of work, there are so few good examples of definitions of English words (in sensation language or otherwise). After we say that 'bachelor' means 'unmarried man' (which perhaps it does) and that 'kill' means 'cause to die' (which it certainly does not), where are the other discoveries to which reductionist semantics has led? Surely it should worry us when we find so few cases for our theories to apply to? Surely the world is trying to tell us something? (Of course, there are lots of bad examples of definitions; but it's uninteresting that one can get pairings of English words with formulae to which they are not synonymous. To get some idea of how hard it is to do the job right, see J. L. Austin's "Three Ways of Spilling Ink", a lucid case study in the non-interdefinability of even semantically closely related terms.)
I want to be fairly clear about what claims are at issue. For the reductionist version of the PS strategy to work, it must be true not only that (a) many words of English are definable, but also that (b) they are definable in a primitive basis consisting of ML plus transducer vocabulary; in effect, in sensation language. (a) is dubious but not beyond rational belief. But (b) requires not just that there be definitions, but also that there be an epistemologically interesting direction of definition; viz. that the more definitions we apply, the closer we get to MLT. There is, however, no reason at all to believe that this is true, and the bankruptcy of the Empiricist program provides considerable prima facie evidence that it is not.
Semantic reduction is the typical PS strategy for interpreting the nonsyntactic vocabulary of a rich programming language; viz. semantic reduction
to formulae in MLT. In this respect, there is simply no difference between the typical PS view and that of, say, John Locke. However, as I remarked above, in the standard PS analysis, compiled programs tend to emerge as sequences of instructions, and this fact puts a characteristic twist on PS versions of Empiricism. In particular, it burdens PS with a form of verificationism. For, if 'that's a chair' goes over into instructions, it must be instructions to do something. Sometimes procedural semanticists suggest that the instruction is to add (the ML translation of) 'that's a chair' to memory. But it's closer to the spirit of the movement to take it to be an instruction to confirm 'that's a chair', viz. by checking whether that (whatever it is) has the features in terms of which 'chair' is procedurally defined. Indeed, one just about has to take the latter route if one is going to argue that procedures capture intensions, for nobody could suppose that 'that's a chair' means 'remember that that's a chair'; whereas it's at least possible to believe that 'that's a chair' means 'that's something that has the observable features F' where 'F' operationally defines 'chair'. So, once again, it's the verificationism that turns out to be critical to the claim that compiling captures meaning; if 'chair' can't be defined in operational terms, then we have no way of interpreting MLT for chairs. And if we can't interpret MLT for chairs, then we have no way of capturing the meaning of 'that's a chair' in MLT.
Some procedural semanticists admit to being verificationists and some do not. Miller and Johnson-Laird (op. cit.), for example, deny the charge both frequently and vehemently, but the grounds of their denial are obscure. Interpreting liberally, I take it that their point is this: to give procedural reductions of 'chair', 'grandmother', 'dessert', or whatever, you usually need not only transducer vocabulary but also primitive terms which express quite abstract notions like 'is possible', 'intends', 'causes' and heaven knows what more. Miller and Johnson-Laird (quite correctly) doubt that such abstract notions can be reconstructed in MLT. Hence they hold that there is no operational analysis for such English sentences as contain words defined in terms of 'intends', 'causes' and the rest. Hence they claim not to be verificationists.
Given a dilemma, you get to choose your horn. Miller and Johnson-Laird do, I think, avoid verificationism, but at an exorbitant price (as should be evident from the discussion in section 2). In particular, they have no semantic theory for most of the sentences of English. For, consider the sentence E, which goes over into a (compiled) representation of the form '... causes ...'. The internal expression 'causes' isn't part of ML proper since it isn't interpreted by reference to the states and processes of the machine; and it isn't part of MLT either since, by hypothesis, 'causes' isn't operationally definable.
But the domain in which MLT is semantically interpreted is exhausted by machine states and transducer states. So 'causes' is uninterpreted in the internal representation of E. So Miller and Johnson-Laird don't have a semantic theory for E.
Some procedural semanticists prefer to be impaled upon the other horn. One of the few explicit discussions of the relation between PS and CS is to be found in Woods (1975). Woods takes a classical semantic theory to be a mechanism for projecting the (roughly) truth conditions of syntactically complex sentences from a specification of the semantic properties of their constituent atomic sentences. However "they [classical semantic theories] fall down on the specification of the semantics of the basic 'atomic' propositions (sic; surely Woods means 'atomic sentences') (page 39)." Enter procedural semantics: "In order for an intelligent entity to know the meaning of such sentences it must be the case that it has stored somehow an effective set of criteria for deciding in a given possible world whether such a sentence is true or false (Ibid)." It's a bit unclear how Woods wants his quantifiers ordered here; whether he's claiming that there exists a procedure such that for every possible world..., or just that for every possible world there exists a procedure such that.... Even on the latter reading, however, this must be about the strongest form of verificationism that anybody has ever endorsed anywhere. Consider, for example, such sentences as: 'God exists,' 'positrons are made of quarks,' 'Aristotle liked onions,' 'I will marry an Armenian,' 'the set of English sentences is RE,' 'Procedural semantics is true,' 'there are no secondary qualities,' 'Nixon is a crook,' 'space is four dimensional,' etc. According to Woods, I am, at this very moment and merely in virtue of having learned English, in possession of an algorithm ("an effective set of criteria", yet!) for determining the truth value of each of these sentences. Whereas, or so one would have thought, one can know English and not know how to tell whether God exists or what positrons are made of. Good grief, Tom Swift: if all us English speakers know how to tell whether positrons are made of quarks, why doesn't somebody get a grant and find out?⁶ It is of some interest that, having announced that enunciating "such procedures for determining truth or falsity [of atomic sentences]" is the goal of procedural semantics, what Woods actually discusses in the body of his (very interesting) paper is something quite else.
⁶Classical verificationism claimed only that, for a sentence to be meaningful, there must be (Platonically) a method of verification. This is, of course, much weaker than the present claim that to understand a sentence is to know what the method of verification is.
4. PS as perceptual psychology
Thus far, I've argued as follows: (a) Compilers won't be semantic theories unless they take us into an interpreted target language. (b) It's patent that the intended interpretation of English sentences can't be captured in ML proper. (c) The proposal that we should compile into MLT is the only recommendation that procedural semanticists have made for coping with the semantic poverty of ML. (d) It's adherence to that suggestion which gives PS its characteristic Empiricist-verificationist cast.
Essentially similar remarks apply to PS treatments of perception; having gone this far, the major theoretical options are, to all intents and purposes, forced. Suppose that F is a formula in MLT such that F expresses the meaning of 'chair'. It presumably follows that determining that 'a is F' is true is a sufficient condition for determining that a is a chair. But now, the (non-syntactic) vocabulary of F is, by hypothesis, exclusively transducer (viz. sensory) vocabulary; so that the normal way of determining that 'a is F' is true would be by reference to the sensory properties of a. However, determining that something is a chair by reference to its sensory properties is a plausible candidate for perceiving that it's a chair.⁷ So we have an easy bridge from an atomistic view of the semantics of the lexicon ('chair' is procedurally definable in MLT) to an atomistic view of perception (the 'percept' chair is constructed from sensations; constructed, indeed, by precisely the operations that procedural definitions specify). Semantic reductions thus emerge as perception recipes.
In saying that the options are forced, I don't mean to imply that procedural semanticists generally resist this line of thought; on the contrary, they generally rush to embrace it. The advertising for PS standardly includes the claim that it provides a theory of the interface between language and perception. So far as I can tell, the theory proposed always runs along the lines just sketched: semantic decomposition of the lexicon parallels sensory decomposition of percepts; the 'translation' of 'chair' into semantic features corresponds to the analysis of chairs into perceptual features.
⁷Of course, not every case of determining that something is a chair by reference to its sensory properties counts as perceiving it; consider the case where somebody tells you what its sensory properties are and you infer from the description that it's a chair. A serious discussion would have to beef up the condition, perhaps by adding that the determination of the sensory properties must involve an appropriate kind of causal excitation of the transducers by a. I'm not, however, attempting to construct an account of perception here; just to indicate how the PS theory arises naturally from the PS treatment of language.
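The 'perception recipe' reading can be made concrete with a deliberately crude sketch. The feature names below are invented; the point is only to display the verificationist move - satisfaction of a transducer-level checklist is treated as what 'chair' means, and hence as what perceiving a chair consists in.

```python
# A crude sketch of a "procedural definition" of 'chair' of the kind the
# paper attributes to verificationist PS. Feature names are invented;
# no actual PS system is being quoted.

CHAIR_CHECKLIST = {"has_legs", "has_seat", "has_back"}

def verify_chair(transducer_features):
    """Return True iff every feature on the checklist is detected.

    This is exactly the move under criticism: satisfying the checklist is
    treated as what 'that's a chair' means, so anything that fails the
    list falls outside the concept by definition.
    """
    return CHAIR_CHECKLIST <= set(transducer_features)

print(verify_chair({"has_legs", "has_seat", "has_back", "is_red"}))  # True
print(verify_chair({"has_seat", "is_soft"}))  # False, though it may be a chair
```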
This isn't the place for a diatribe against atomistic views of perception (though I admit to being tempted). Suffice it to make three points.
In the first place, it's notable that - whether or not such views are right - they're remarkably old news. Once again, PS offers no theoretical advance over Locke and Hume (except that, whereas the Empiricists had to use association to stick sensations together, the procedural semanticists can avail themselves of a rather more powerful combinatorial apparatus borrowed from list processing or set theory). I think this point to be worth emphasizing since the flossy technology of computer implementation may make it look as though PS offers an approach to the language-and-perception problem that's importantly novel in kind. To the contrary: if there are reasons for believing the Locke-Hume program was fundamentally wrong about perception, there are the same reasons for believing that the PS program is fundamentally wrong about perception; it's the same program. Nothing has changed except the gadgets.
The second point is that, so far as I can tell, no new arguments for the Locke-Hume program have been forthcoming from the PS camp.⁸ This is unsurprising if, as I've been arguing, it's not facts about perception but facts about computers that are forcing the procedural semanticists' hand. Data processes specified in ML won't reconstruct perception unless ML is an interpreted language. And the only interpretation that we currently know how to provide for ML embraces assignments to its terms of (strictly) computational states and relations, together with what can be defined over the names of the transducer states. The perception we can simulate is, therefore, the reduction of complex percepts to transducer outputs. There is, however, no reason for believing that percepts are reducible to transducer outputs except that, if they're not, the PS theory of perception won't be true. The fact is that, for some three hundred years, Empiricists have been playing 'heads I win, tails you lose' with the principle that perceptual states reduce to sensory states.⁹ The proof that the reductions are possible consists of an appeal to the principle. The proof of the principle is that, were it false, the reductions wouldn't be possible. No workable examples are ever given. PS is playing exactly the same game, only it started later.
Now, this last remark may strike you as a smidgen unfair. For - you might ask - doesn't the Winograd program (for example) provide precisely
⁸I say that no new arguments have been forthcoming; but there's an old argument (in fact, Locke's) which is very much in the air. Viz.: if percepts aren't reducible to sensations, how could concepts be learned? A more extensive treatment than this one might have fun tracing the dire consequences of anti-nativism in both classical and PS versions of the Empiricist program (and in much of current cognitive psychology, for that matter).
⁹See the discussion of Hume's handling of the Empiricist principle in Flew (1964).
242
J. A. Fodor
the kind of illustrations of successful reductionistic analyses that are here said not to exist’? Answer: no. The whole point about the Winograd program, the trick, as it were, that makes it work, is that the block world that SHRDLU (nominally) lives in is constructed preciseI?>, so as to satisfy the epistemological and ontological requirements of verificationism; in particular, each object is identifiable with a set of features, each feature is either elementary or a construct out of the elementary ones, and the elementary features arc (by assumption) transducer-detectible. What the Winograd program shows, then is that verificationism is logically possible; there are possible worlds. and possible languages, such that verificationism would be a good semantics for those languages in those worlds (and, mutatis mutandis, such that reductionism would be a good theory of the way that percepts are related to scnsations in those worlds). The problem, however, is that nobody has ever doubted that verificationism is consistent; what there is considerable reason to doubt is that it’s true. The Winograd program would bear on this latter question if somebody could figure out how to generalize a verificationist semantics for Block World into a verificationist semantics for English. So far, to the best of my knowledge, there are no bids. The final point is that there arc alternatives to reductionistic theories of perception, though you’d hardly know it from the PS literature. In particular, it doesn’t follow from the fact that we are able to recognize (some) chairs (sometimes) that there must be procedures for recognizing chairs, either in the Empiricist sense of sensory check-lists, or for that matter, in any sense at all. To see why this is so will require a (very brief) excursis into philosophy of science. The conventional wisdom in the philosophy of science these days is that there is a principled reason why verificationism won’t work. It’s the following. The question whether that thing over there is an electron isn’t settled by crucial experiment, but by adjudication between the demands that observation and theoretical conservatism jointly impose upon rational belief. WC can’t give a feature list for ‘electron’ because, given enough pull from simplicity and conservatism of theory. we may have to decide that it’s an electron even if it fails our feature lists, and we may have to decide that it’s not one even if it satisfies them. Notice that this is Gestaltism in one sense but not another; there is an emphasis on the influence of our n~lrole science on the determination of truth value for any given sentence that our science contains. But, of course, it doesn’t follow that there arc no tests relevant to deciding whether something is an electron. On the contrary, this (roughly Quinean) account explains, really for the first time in the philosophy of science, why good experiments are so intellectually satisfying and so hard to
devise. If there were feature lists semantically associated with theoretical terms, then verifying theories would be a relatively dull matter of finding out whether the features are satisfied. Whereas, the present account suggests that anything we know (or think we know) could, in principle and given sufficient experimental ingenuity, be brought to bear on the confirmation of any bit of our science. A sufficiently clever experimenter might be able to connect the question whether that’s an electron with the annual rainfall in Philadelphia (via, of course, an enormous amount of mediating theory). In which case, more power to him! If the experiment works, we’ll have learned something important about electrons (and about Philadelphia). Now that, of course, is ‘philosophy’, not ‘psychology’. Unless, however, psychology is that way too. It may be that there is no procedural analysis of the concept chair precisely because perceptual recognition is fundamentally like scientific problem solving (they are, after all, both means to the fixation of belief). On this view, in the limiting case, perception would involve bringing to bear our whole cognitive resources on the determination of how the world is (what ‘perceptual categories’ it instantiates). And, of course, every intermediate position is also open. It may be that some (but not all) of our cognitive resources are available for perceptual integration: that perceptual integration can be distinguished from problem solving at some interesting point that we don’t yet know how to specify. (I think it’s likely that this is the case; else how explain the persistence of perceptual illusions even when the subject knows that the percept is illusory?) My present point is just that there is a vast range of empirically respectable alternatives to PS atomism. These alternatives suggest the possibility (in some unimaginable and euphoric future) of an integration of the philosophy of science with the psychology of perception, so that ‘the logician’ can lie down with ‘the psychologist’ at last. It is, in any event, a reason for worrying about PS that if it were philosophy of science, it would be bad philosophy of science.

⁸I say that no new arguments have been forthcoming; but there’s an old argument (in fact, Locke’s) which is very much in the air. Viz.: if percepts aren’t reducible to sensations, how could concepts be learned? A more extensive treatment than this one might have fun tracing the dire consequences of anti-nativism in both classical and PS versions of the Empiricist program (and in much of current cognitive psychology, for that matter).

⁹See the discussion of Hume’s handling of the Empiricist principle in Flew (1964).
5. What is PS really about?

I’ve thus far argued that PS suffers from: verificationism, operationalism, Empiricism, reductionism, recidivism, atomism, compound fractures of the use/mention distinction, hybris, and a serious misunderstanding of how computers work. What, then, is wrong with PS? Basically, that it confuses semantic theories with theories of sentence comprehension. There is a philosophical insight - which goes back at least to Russell and Frege, and, in a certain way of looking at things, perhaps to Aristotle - that can be put like this: the surface form of a sentence is a bad guide to many of
the theoretically interesting aspects of its behavior. If, therefore, you want a theory whose principles formally determine the behavior of a sentence, your best strategy is (a) to pair the sentence with some representation more perspicuous than its surface form; and then (b) to specify the theoretical principles over this more perspicuous representation. So, for example, if you want a theory that formally determines the way a sentence behaves in (valid) arguments, the way to proceed is (a') to pair the sentence with a representation of its ‘logical form’ (e.g. a paraphrase in a canonical logical notation) and then (b') specify the valid transformations of the sentence over that representation. For reasons too complicated to rehearse here, philosophers have usually taken it that representations of logical form would be appropriate candidates for the domain of rules of (classical) semantic interpretation. In any event, recent work in the ‘cognitive sciences’ has added three wrinkles to the basic idea of canonical paraphrase. First, that there’s no particular reason why the canonical representation of a sentence should be sensitive only to its logico-semantic properties. In principle, we might want a representation of a sentence which captures not just its logical form (in the proprietary sense sketched above) but which also provides an appropriate domain for principles which govern syntactic transformation, or memory storage, or interaction with perceptual information, or learning, or whatever. The more we think of the construction of a theory of canonical paraphrase as a strategy in psychology, the more we shall want to extend and elaborate such constraints. There’s no a priori argument that they can be satisfied simultaneously, but there’s also no a priori argument that they can’t. Trade-offs, moreover, are not out of the question; we are open to negotiation. The second important recent idea about canonical paraphrase is that it might be effected more or less algorithmically: there might be computational principles which associate each sentence with its appropriate canonical representation (or representations, if that’s the way things turn out). This is quite different from what Russell and Aristotle had in mind; they were content to leave the mapping from a sentence to its logical form relatively inexplicit, so long as the logico-semantic behavior of the sentence was formally determined given its canonical representation. The third idea is that the theory which maps sentences onto their canonical paraphrases (which, as it were, compiles them) might be construed as a model - more or less realistic, and more or less real-time - for what the speaker/hearer does when he understands a sentence. That is, we might think of a speaker/hearer as, to all intents and purposes, a function from canonical paraphrases (taken, now, as mental representations) onto forms of utterance. Patently, the more kinds of constraints internal representations can be made
to satisfy qua domains for mental operations, and the more real-time-like the function from canonical paraphrases to surface forms can be shown to be, the more reason we shall have to take this sort of model of speaker/hearers as empirically plausible. I take it that this general approach is common to all current work on cognition, barring only Gibsonians and eccentrics.¹⁰ That is, it’s common ground to AI, PS, linguistics, cognitive psychology, psycholinguistics, and, for that matter, me. What’s special about PS is only a bundle of proclivities within this strategic commitment: procedural semanticists tend to emphasize that canonical paraphrases should be constrained to provide domains for principles which specify the interactions between sentential and contextual information; they tend to emphasize real-time constraints on the recovery and coding of canonical representations; they tend to countenance trade-offs which buy feasibility at the price of generality; they tend to be relatively uninterested in constraining canonical representations by considerations of linguistic plausibility. Such proclivities amount, in effect, to an implicit research strategy within the general cognitivist camp. Like any such strategy, it is to be judged by its pay-off. My own view, for what it’s worth, is that the pay-off has, thus far, been negligible; that, so far, practically nothing has been learned about language processes from the PS models, whereas quite a lot has been learned by investigators who have taken opposed views of how a theory of internal representation might best be constrained. That, however, is as it may be. My present point is two-fold. First, if I’m right about what PS and the rest of us are up to, then there’s no new account of language (or of language-and-the-mind) explicit or implicit in PS; there’s simply a set of recommended tactics for approaching the problem we all take ourselves to be dealing with: how should we best model the speaker/hearer, given the assumption that the speaker/hearer is to be viewed as a function from utterances onto internal representations. That is, what we all are doing is trying to provide a model of sentence comprehension (/production); a model which says (a) what the speaker/hearer has in his head insofar as having that in his head constitutes understanding a sentence; and (b) which explains how whatever he has in his head when he understands a sentence manages to get there. The second, and final, point is that what none of us are doing (including, NB, PS) is providing a semantics for a natural (or any other) language: a theory of language-and-the-world. What we’re all doing is really a kind of logical syntax (only psychologized); and we all very much hope that when we’ve got a reasonable internal language (a formalism for writing down canonical representations) someone very nice and very clever will turn up and show us how to interpret it; how to provide it with a semantics. What has disguised this fact from the procedural semanticists is that everybody else has given up supposing that the semantics will be verificationist. This difference makes a difference, as we’ve seen. In particular, it has allowed the procedural semanticists to suppose that a theory of how you understand a sentence can do double duty as a theory of what the sentence means; to confuse compiling with semantic interpretation, in short. Whereas, because the rest of us are not verificationists, we can live with the fact that ‘chair’ refers to chairs; we don’t have to go around supposing that ‘chair’ refers to bundles of sense data. It is, of course, not very interesting to say that ‘chair’ refers to chairs, since we have no theory of reference and we have no mechanism to realize the theory.¹¹ A fortiori, we don’t know how to build a robot which can use ‘chair’ to refer to chairs. But, though “‘chair’ refers to chairs” isn’t interesting, we don’t mind that so much since reference isn’t what we’re working on. We’re working on logical syntax (psychologized). Moreover, “‘chair’ refers to chairs” has one striking advantage over “chairs are made of sense data”: it’s not interesting, but at least it’s true.

¹⁰For a development of this theme, see Fodor (1975).
So, Tom Swift, here is how things stand: understanding a natural language sentence may be sort of like compiling. That is, it may be a matter of translating from the natural language into a paraphrase in some canonical target language. If so, the target language must have a very rich semantics (nothing like MLT) and must be syntactically perspicuous in a way that natural languages are not. Nobody has the foggiest idea of how to connect this system to the world (how to do the semantics of internal representations), but that’s OK because there are lots of other constraints that we can impose and maybe even meet; constraints inherited from logic, linguistics, psychology and the theory of real-time computation. It might be instructive to try to build a machine which meets some of these constraints. Qua computer, such a machine would carry out sequences of operations upon the (de facto uninterpreted) formulae of some canonical language. Qua simulation, it would provide a psychological model insofar as mental processes can be construed as sequences of formal operations upon mental representations. But do not, Tom Swift, mistake this machine for a semantic theory. A fortiori, DO NOT MISTAKE IT FOR YOUR GRANDMOTHER. Right, Tom Swift: back to the drawing board.¹²

¹¹I am distinguishing between (a) a ‘theory of reference’ (and, more generally, a classical semantic theory) which consists of a function from the expressions of a language onto the objects which interpret them; and (b) a theory of the mechanism which realizes the semantics, viz. the kind of psychological theory which answers the question: ‘what about a given language, or about the way that a given organism uses it, makes one or another semantic interpretation the right one for that language?’ (a) and (b) both differ from (c) theories that operate in the area that I’ve called psychologized logical syntax. Theories of type (a) are familiar from work in classical semantics. Theories of type (c) are what people generally have in mind when they talk about “internal” (“canonical”, “mental”) representation (hence all varieties of PS, properly construed, belong to type (c), as does practically all of modern lingui[stics] ... stimulus for which the utterance is a discriminated response. Perhaps it goes without saying that the desirable situation is the one where the formal semantics (type a), the account of the logical syntax of the vehicles of representation (type c) and the psychology of reference (type b) all fit together.
References

Austin, J. L. (1966) “Three Ways of Spilling Ink.” Philosophical Review, 75(4).
Bobrow, D. G. and Collins, A. (eds.) (1975) Representation and Understanding, Academic Press, New York.
Chisholm, R. (1957) Perceiving, Cornell University Press, Ithaca.
Flew, A. (1964) “Hume,” in O’Connor (1964).
Fodor, J. A. (1975) The Language of Thought, Thomas Y. Crowell Co., New York.
Johnson-Laird, P. (1977) “Procedural Semantics,” Cog., 5(3), 189-214.
Miller, G. and Johnson-Laird, P. (1976) Language and Perception, Harvard University Press, Cambridge.
O’Connor, D. J. (ed.) (1964) A Critical History of Western Philosophy, The Free Press of Glencoe, London.
Winograd, T. (1971) Procedures as a Representation for Data in a Computer Program for Understanding Natural Language, M.I.T. Project MAC, Cambridge.
Woods, W. (1975) “What’s in a Link,” in Bobrow and Collins (1975).
¹²I wish to thank the many members of the AI community who read the manuscript and offered good advice (like “My God, you can’t publish that!”). I’m especially indebted to Steven Isard, Philip Johnson-Laird and Zenon Pylyshyn for illuminating comments on an earlier draft.
Cognition, 6 (1978) 249-261

Discussions

What’s wrong with Grandma’s guide to procedural semantics: A reply to Jerry Fodor*¹

P. N. JOHNSON-LAIRD
Laboratory of Experimental Psychology, University of Sussex

*Requests for reprints should be sent to: P. N. Johnson-Laird, Centre for Research on Perception and Cognition, Laboratory of Experimental Psychology, University of Sussex, Brighton, BN1 9QG, England.

¹I am indebted to Jerry Fodor for sending me successive versions of his critique of procedural semantics, and for at all times conducting himself according to the Marquis of Queensbury Rules. I am also grateful to Jacques Mehler for his dispassionate refereeing. Likewise, I must acknowledge with thanks the untiring efforts of those who have tried to coach me in the intricacies of procedural and model-theoretic semantics: Steve Isard, Christopher Longuet-Higgins, George Miller, Mark Steedman, and Bill Woods. Steve Isard very kindly made available to me his own unpublished reply to an earlier version of Jerry Fodor’s paper, from which I have borrowed more than I can acknowledge. Finally, I am most grateful to Stuart Sutherland, my ‘second’ in this matter, who has striven to ensure that I strike my blows in good English. My research is supported by a grant from the Social Science Research Council (Great Britain).
“Procedural semantics” is a label for a loose confederation of theories of meaning that rely on an analogy between ordinary language and high-level programming languages: compiling and executing a program correspond rather naturally to stages in a person’s comprehension of an utterance. The analogy has been most strikingly exploited by workers in artificial intelligence, who have devised a variety of programs that manipulate natural language (see, e.g., Winograd, 1971; Woods, 1973; Schank, 1972; Longuet-Higgins, 1972; Davies and Isard, 1972). But Miller and Johnson-Laird (1976) have also adopted a procedural approach to the study of the mental lexicon, and have argued that it seems particularly suitable for developing psychological theories of the comprehension and production of discourse (see Johnson-Laird, 1977a). The whole enterprise is attacked by Jerry Fodor (1978) in “Tom Swift and his Procedural Grandmother”, a critique that is a volatile mixture of the theoretical, the rhetorical, and the hobby-horsical. The theoretical remarks are disputable. The rhetoric is amusingly disputatious. There is no disputing against hobby-horses. The gist of Fodor’s case runs as follows: Procedural semantics is parasitic on the classical model-theoretic semantics of truth, reference and modality. Procedural semantics attempts to interpret English by translating (i.e., compiling) it into the machine language of a computer; but for this operation to provide a true semantic theory, the machine language has to have a classical
interpretation. Moreover, the interpretation assigned a programming language sentence by the compiler is not, normally, its intended interpretation: computers do not know or care what the programs they run are about. And, because machine language is interpreted solely for machine states and processes, there is nothing available in procedural semantics to reconstruct the classical relation of reference that holds between, say, the term ‘Lucy’ and the individual, Lucy. Nevertheless, proceduralists widely suppose that English can be translated into a machine language provided that it is enriched with the names for the states of sensory transducers. This assumption resurrects the discredited empiricist thesis that concepts (other than logical ones) can be reduced by definitions to expressions in a language of sensations, and that percepts are likewise constructed from check-lists of sensory features. It also leads directly to a form of verificationism, the equally discredited doctrine that only logical truths and empirically testable sentences are meaningful. Proceduralists either subscribe to verificationism or else reject it at the cost of having an incomplete semantic theory. In fact, a natural language such as English cannot be reduced at all: its vocabulary is probably not much larger than it needs to be given the expressive power of the language. Finally, procedural semantics confuses semantic theories with theories of sentence comprehension, but even here its pay-off has been negligible.

I intend to show that each of these assertions is either false or else irrelevant to the proper evaluation of procedural semantics. But, since they are largely of a philosophical nature, and almost entirely innocent of empirical consequences, my main aim in this reply is, not to reinterpret psychological phenomena, but to locate and to elucidate the errors in Fodor’s Guide to Procedural Semantics.

Procedural semantics is parasitic on the classical model-theoretic semantics of truth, reference and modality. A major thrust of Fodor’s paper is that procedural semantics is intended as an alternative to classical semantics, the tradition initiated by Frege and brought to fruition in model-theoretic accounts of meaning. Such a semantics for a language is set up by replacing reality with a model, and by postulating an interpretation that connects the language to the model. An interpretation is essentially a function that for each individual constant in the language picks out the corresponding individual in the model, and for each predicate in the language, picks out the set of individuals in the model (or set of ordered sets of individuals where the predicate takes several arguments) that satisfy it. The interpretation function also operates recursively to define the truth or falsity of sentences in terms of such interpretations of their constituents. A model-theoretic semantics for a (fragment of) natural language usually arranges for this recursive machinery to work in parallel with the rules of syntax for the language, and makes use
of a model containing a set of “possible worlds” (i.e., possible states of affairs) in order to interpret modal sentences, and other sentences of a similar sort. The meaning (or intension) of a sentence is accordingly a function that maps the possible worlds onto a truth value. The reference (or extension) of the sentence is its truth value in the particular world that obtains. There can be many different models for a given language: logicians are primarily concerned not with which is the right model but with principles that hold over all models. Despite Fodor’s claim, procedural semantics is not intended to supplant classical model-theoretic semantics. Such a proposal would be misguided, as can be shown by a simple example. Suppose a psychologist proposes a procedural model of how people reason with propositions. Fodor arrives on the scene and points out that the theory is parasitic upon the model-theoretic semantics for the propositional calculus, that is, the classical apparatus of truth tables. The claim may well be true, depending on what he means by “parasitic”; but it is irrelevant. The classical theory has no implications for the mental processes by which people reason: it simply specifies what counts as a valid deduction. Since the psychologist is interested in the particular system of cognitive operations that people employ to make valid deductions, and since it is an empirical task to discover what that system is, model-theoretic approaches offer no solutions to his problems (Johnson-Laird, 1977a, p. 193). The two sorts of theory are not in competition.

Procedural semantics attempts to interpret English by translating it into a classically interpreted machine language. The only
tactful answer to this assertion is to whistle half a dozen bars of Lillabullero. It is simply not true. But, since the misapprehension lies at the heart of Fodor’s conception of procedural semantics, it should be instructive to examine it in more detail. Fodor starts from three quotations (to which I have restored some of the material that he omits):

These artificial languages, which are used to communicate programs of instructions to computers, have both a syntax and a semantics. Their syntax consists of rules for writing well-formed programs that a computer can interpret and execute. The semantics consists of the procedures that the computer is instructed to execute. If, for example, a programming language permits an instruction like: x and y →, it might mean that the computer is to add the values of x and y, and to print the result. ...we might speak of the intension of a program as the procedure that is executed when the program is run, and of the extension of a program as the result the program returns when it has been executed. (Johnson-Laird, 1977)

We can call the model of semantics used in our system the “procedure model”. The primary organisation of knowledge is in a deductive program with the power to combine information about the parsing of the sentence, the dictionary meanings of its words, and non-linguistic facts about the subject being discussed. Any relevant bit of knowledge can itself be in the form of a program or procedure to be activated at an appropriate time in the process of understanding. The program operates on a sentence to produce a representation of its meaning in some internal language, in our case PLANNER. (Winograd, 1971, p. 409)
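Before turning to what these passages do and do not claim, it may help to fix the analogy with a minimal sketch (mine, not Winograd’s or Johnson-Laird’s; the one-instruction source language is invented): the intension of a program is the procedure obtained by compiling it, and its extension is the result of running that procedure.

```python
# A minimal sketch of the compiling analogy in the quoted passages.
# The toy instruction language is invented for illustration.

def compile_instruction(source):
    """Map the toy instruction 'x and y ->' onto a procedure. On the
    quoted view, the procedure is the program's intension."""
    if source == "x and y ->":
        def procedure(env):
            result = env["x"] + env["y"]  # add the values of x and y...
            print(result)                 # ...and print the result
            return result
        return procedure
    raise SyntaxError("not a well-formed program in the toy language")

procedure = compile_instruction("x and y ->")  # intension: a procedure
extension = procedure({"x": 2, "y": 3})        # extension: the result, 5
```

Note that it is the procedure, not any particular machine-language expansion of it, that the quoted passages call the semantics.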
With no further justification whatsoever, Fodor takes for granted that proceduralists propose that the meaning of a sentence is its representation in machine language. The compiler, as Fodor puts it, is the semantic theory, and semantic interpretation consists in translating a sentence into machine language, which in turn should receive a classical semantic interpretation. The equivocation here is subtle, but disastrous. Neither Winograd nor Johnson-Laird spoke of expressions in machine language as representing meanings. Indeed, as Steve Isard has pointed out in an unpublished reply to Fodor, to define the semantics of a high-level language by its compiler is analogous to defining the semantics of English in terms of neural activity. It is blatant reductionism of the sort that elsewhere receives Fodor’s justified scorn. One of the great virtues of a digital computer is precisely that it is a working illustration of the futility of reductionism: a program in a high-level language such as PLANNER concerns goals, objects, and properties, not patterns of bits and storage locations. The organisation of the program has a functional autonomy that is totally independent both of the particular machine language into which it is ultimately compiled and of the physics of the particular computer that runs the program. It is not my aim to teach Fodor’s procedural grandmother to suck eggs, or to tell Fodor something that he already knows, but such arguments are hardly novel (cf., Fodor, 1968; Putnam, 1975). Fodor argues that if proceduralists do not think English can be paraphrased in some sort of machine language, then they owe the world some alternative account of how a programming language can be classically interpreted. In fact, this alternative exists, and a brief inquiry into it delivers the coup de grâce to Fodor’s debilitated notion of procedural semantics. The pioneers of model-theoretic semantics for programming languages, Scott and Strachey (1971), write as follows: “Compilers of high-level languages are generally constructed to give the complete translation of the programs into machine language. As machines merely juggle bit patterns, the concepts of the original language may be lost or at least obscured during this passage. The purpose of mathematical [i.e., model-theoretic] semantics is to give a correct and meaningful correspondence between programs and mathematical entities in a way
that is entirely independent of an implementation”. To take a specific example, a list-processing language will contain an instruction that returns the head of a list, say, HD(x), which, if x is the list [Tom Dick Harry], returns “Tom” as its value when it is executed. If you wish to characterise the meaning of this instruction in terms of a model-theoretic semantics, then you would certainly not do so by relating HD(x) first to an expression in machine language and then interpreting this expression. Such a tactic would plainly necessitate a different semantics for each of the many different machine languages into which the list-processing language could be translated. What you would do, following in the steps of Scott and Strachey, would be to set up a direct interpretation of the list-processing language, relating it to an abstract model containing lists, their elements, and various functions. It does not matter how a machine actually represents ‘HD(x)’, lists, or their constituents, provided that it does so in a way that is in accordance with their semantics. Unfortunately, Fodor’s error here is so egregious that it largely wrecks the rest of his paper. But, since he does raise some other interesting points, I shall refrain from the argumentum fistulatorium and persevere with my reply.
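The Scott and Strachey point can be put concretely. In the sketch below (an executable rendering of mine; their semantics is stated mathematically), the meaning of HD is specified directly over an abstract domain of lists, and any machine representation whatever counts as correct just in case it accords with that specification:

```python
# Direct, implementation-independent semantics for HD: it denotes the
# function taking a non-empty list to its first element.
def HD_meaning(xs):
    if not xs:
        raise ValueError("HD is undefined on the empty list")
    return xs[0]

print(HD_meaning(["Tom", "Dick", "Harry"]))  # Tom

# Two quite different "machine representations" of the same list:
def HD_linked(cell):          # lists as chained (head, tail) pairs
    head, _tail = cell
    return head

def HD_array(arr):            # lists as contiguous arrays
    return arr[0]

# Each implementation is correct just in case it agrees with HD_meaning;
# no machine-language expansion of either one is the meaning of HD.
linked = ("Tom", ("Dick", ("Harry", None)))
assert HD_linked(linked) == HD_meaning(["Tom", "Dick", "Harry"])
assert HD_array(["Tom", "Dick", "Harry"]) == "Tom"
```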
The interpretation assigned a programming language sentence by the compiler is not, normally, its intended interpretation. What Fodor has in mind here is that “machines typically don’t know (or care) what the programs that they run are about; all they know (or care about) is how to run their programs”. Fodor illustrates the point by considering two programs: one simulates the Six Day War and the other simulates a game of chess, and they just happen to be indistinguishable when they are compiled. Of course, exactly the same example can be created in terms of a model-theoretic semantics, and, in fact, the same mathematical language is often interpreted using different models. Indeed, the case can even arise in natural language. Consider a description of the Six Day War in which the various deployments of forces are identified by different codewords, so that “pawns attack knight” actually means that Israeli infantry surround a tank corps. Now, it just so happens (by parity with Fodor’s example) that a description of the Six Day War constitutes a description of a game of chess. What moral should we draw? That a speaker of English could find out what was being referred to, but a computer could not? I see no reason to suppose that computers cannot in principle be programmed to deal with such ambiguities. What Fodor seems to have lost sight of in his example is the distinction between a theory and what the theory is about. All theories are abstractions. It would be silly to criticize a theory of X-rays on the grounds that it, the theory, was not radio-active. Likewise, it seems silly to criticize a computer program that simulates the Six Day War on the grounds that the machine does not know its intended interpretation, and silly to criticize a procedural or model-theoretic semantics on the grounds that it does not know (or care about) the intended interpretations of its models.

Nothing is available in procedural semantics to reconstruct such classical semantic relations as the one that holds between ‘Lucy’ and Lucy, or between ‘chocolate cake’ and chocolate cakes. Alas, classical semantics does not reconstruct the relation of reference that holds between expressions such as ‘Lucy’ and entities such as Lucy; it merely postulates a primitive and unanalyzed function that maps terms in the language onto entities in the model. Indeed, so little is it concerned with such matters that mathematicians making use of model-theoretic semantics often do not bother to distinguish between individual constants in the language and the individuals in the model (see Robbin, 1969). Fodor’s criticism accordingly applies to semantic theories in general, as he himself concedes rather later in the paper: “it is, of course, not very interesting to say that ‘chair’ refers to chairs, since we have no theory of reference and we have no mechanism to realise the theory”.

Proceduralists suppose that English can be translated into an ENRICHED machine language. After strenuously arguing against the notion that translation into machine language provides a satisfactory semantics, a notion that he has unjustifiably attributed to proceduralists, Fodor finally concedes that no one was ever likely to have held such a view in the first place. What is widely supposed, he suggests, is that English can be translated into an enriched machine language. He has in mind equipping a computer with sensory transducers and a machine language with names for their input (sic) and output states. Many researchers in Artificial Intelligence believe (or had better believe), he says, that English can be translated into this MLT, or Machine Language enriched with nourishing Transducer-state names. But, this doctrine is nothing other than a resurrected version of the empiricist principle that “every non-logical concept is reducible to sensation concepts (or, in trendier versions, to sensation plus motor concepts) via coordinating definitions”. Fodor erred in assuming that procedural semanticists aim to translate English into machine language (enriched or otherwise). His thesis that many workers in AI are committed to a sensation language of the sort envisaged by John Locke is an extraordinary faux pas. If the work on scene analysis has a patron philosopher, it is undoubtedly Immanuel Kant. Here is not the place to review these studies, but it is relevant to point out that one of the clearest morals to have emerged from them is that any simple empiricist program that attempts to build up percepts from the properties of the sensory input alone is unworkable. Programs require a knowledge of a variety of domains, from the projective geometry of three-dimensional objects (e.g., Waltz, 1975) to the prototypical shapes of certain sorts of object (e.g., Marr and Nishihara,
1976). Moreover, there is no good reason to suppose that the language of sensations corresponds to the output of sensory transducers: “sensations are not psychic atoms in perceptual compounds; they are abstracted from percepts by a highly skilled act of attention” (Miller and Johnson-Laird, 1976, p. 29). Only a theorist committed to a one-to-one relation between mental language and natural language is likely to identify the output of sensory transducers with the language of sensations. Fodor’s misapprehension that vision programs are exercises in naive empiricism leads him astray on what procedural semantics has to say about the relations between language and perception. In his view, proceduralists represent the meaning of a term such as ‘chair’ by relying on the sensory predicates with which they have enriched machine language. Hence, the meaning of ‘chair’ is simply a set of sensory properties: semantic decomposition of the lexicon parallels sensory decomposition of percepts. This thesis is false. Miller and Johnson-Laird (1976, Sec. 4) in fact argue that many aspects of the meaning of words have no perceptual correlates; that where there is such a link it is mediated by a complex conceptual apparatus; and that a perceptual paradigm for an object is not a set of sensory properties. Fodor’s diatribe against empiricist theories of perception - they are remarkably old news, they are reductionist, they are atomistic, and so on and on - seems otherwise correct, but irrelevant to procedural semantics.

Proceduralists either subscribe to verificationism or else reject it at the cost of having an incomplete semantic theory. Fodor’s arguments about empiricism are perhaps tendentious in that they plainly lead up to the heart of his criticism, namely, that procedural semantics is in a dilemma about verificationism. (In Section 1 of his paper, he claims that procedural semantics is “an archaic and wildly implausible form of verificationism”. Later, he relents and allows that some proceduralists are not verificationists. I shall deal with this second line of argument on the grounds that it is nearer to the truth.) The basic doctrine of verificationism is that a sentence is meaningful, as opposed to meaningless, only if its truth or falsity (or probability to some degree) can in principle be established empirically. This view is primarily associated with the Logical Positivists, who sometimes went even further and identified the meaning of a sentence with its method of verification, a central tenet of operationalism. Obviously, the doctrine was not intended to apply to analytic or necessarily true sentences. The dilemma that Fodor poses for procedural semantics is whether or not to embrace verificationism. He quotes Woods (1975, p. 39):

In order for an intelligent entity to know the meaning of such sentences [as “Snow is white”] it must be the case that it has stored somehow an effective set of criteria for deciding in a given possible world whether such a sentence is true or false.
And this view, he says, is about the strongest form of verificationism that anyone has ever endorsed: it implies that merely in virtue of having learned English, a speaker possesses an algorithm for determining the truth value of such sentences as: “God exists”, “positrons are made of quarks”, “Aristotle liked onions”. Unfortunately, Fodor has overlooked a qualification that Woods (1975, p. 40) makes on the very next page:

The case presented above is a gross oversimplification of what is actually required for an adequate procedural specification of the semantics of natural language. There are strong reasons which dictate that the best one can expect to have is a partial function which assigns true in some cases, false in some cases, and fails to assign either true or false in others. There are also cases where the procedures require historical data which is not normally available and therefore cannot be directly executed.
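Woods’s qualification is easily made concrete. Here is a minimal sketch of such a partial function; the example sentences and their verdicts are invented for illustration:

```python
# A partial truth-evaluation procedure in the spirit of Woods's
# qualification: it assigns true in some cases, false in some cases,
# and fails to assign either in others. The sentences are invented.

def evaluate(sentence):
    """Return True, False, or None when no verdict can be computed."""
    checkers = {
        "Snow is white": lambda: True,
        "Snow is green": lambda: False,
        # Requires historical data not normally available, so the
        # procedure cannot be directly executed:
        "Aristotle liked onions": None,
    }
    checker = checkers.get(sentence)
    return checker() if checker is not None else None

for s in ("Snow is white", "Snow is green", "Aristotle liked onions"):
    print(s, "->", evaluate(s))  # True, False, None
```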
The reader is referred to Woods (1973, 1978) for a further discussion of his actual views. Since no proceduralist appears to be directly impaled on this point of Fodor’s arguments, let us turn to the other horn of the dilemma. According to Fodor, the grounds on which Miller and Johnson-Laird (1976) deny that they are verificationists are obscure. He accordingly invents an account on their behalf, which has the consequence that their theory cannot give an interpretation for a term such as “cause”. Since this argument rests on Fodor’s fallacy of the enriched machine language into which sentences are supposed to be interpreted, this horn is crumpled and impales no one. In fact, Miller and Johnson-Laird reject verificationism as a possible basis for a psychological theory of meaning on several grounds: verification is only one of a number of different conceptual operations that may be carried out as a consequence of understanding a sentence; and it runs into problems with the vagueness of ordinary language. The crux of their argument is that understanding a sentence is possible even when verification is not, e.g., “There’s a gorilla in that closet whenever no one is trying to find out that there is”. And, in their discussion of the process of verifying a claim such as “That is a book”, they write:

...the meaning of the sentence must be clear before you undertake to verify it; if it were not, you would not know how to proceed with its verification. Understanding is antecedent to verification, not a consequence of verification. (Ibid. p. 126.)

Their theory of the meaning of a word is encapsulated in the following quotation:
The meaning of ‘book’ is not the particular book that was designated, or a perception of that book, or the class of objects that ‘book’ can refer to, or a disposition to assent or dissent that some particular object is a book, or the speaker’s intention (whatever it may have been), or the set of environmental conditions (whatever they may have been) that caused him to use this utterance, or a mental image (if any) of some book or other, or the set of other words associated with books, or a dictionary definition of ‘book’, or the program of operations (whatever they are) that people have learned to perform in order to verify that some object is conventionally labelled a book. We will argue that the meaning of ‘book’ depends on a general concept of books; to know the meaning is to be able to construct routines that involve the concept in an appropriate way, that is, routines that take advantage of the place ‘book’ occupies in an organised system of concepts. (Ibid. pp. 127-8.)
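The shape of this proposal can be sketched schematically; everything below (the concept store, the relations, the two routines) is an invented placeholder, not Miller and Johnson-Laird’s actual machinery:

```python
# Schematic rendering of the quoted proposal: the meaning of 'book' is
# not any single verification routine, but the ability to construct
# many routines around the place 'book' occupies in a system of
# concepts. All names and relations here are illustrative only.

CONCEPTS = {
    "book":     {"is_a": "artifact", "typical_use": "reading",
                 "parts": ["pages", "cover"]},
    "artifact": {"is_a": "object"},
}

def make_routine(task, concept="book"):
    """Construct a routine that involves the concept appropriately;
    verifying is only one of the routines the concept supports."""
    c = CONCEPTS[concept]
    if task == "verify":   # e.g., checking 'That is a book'
        return lambda thing: all(p in thing.get("parts", [])
                                 for p in c["parts"])
    if task == "find":     # e.g., complying with 'Fetch me a book'
        return lambda things: [t for t in things
                               if t.get("kind") == concept]
    raise ValueError("no routine constructed for this task")

verify = make_routine("verify")
print(verify({"kind": "book", "parts": ["pages", "cover"]}))  # True
```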
I hope that these remarks have dispelled any remaining obscurity as to why proceduralists are not necessarily committed to verificationism.

The vocabulary of a language like English is probably not much larger than it needs to be given the expressive power of the language. Fodor takes the view that the meanings of English words cannot be reduced to more elementary elements in some theoretical language. This is an interesting point of controversy, but not one that directly relates to procedural semantics. There are proceduralists who believe that the meanings of words can be decomposed into semantic primitives (Schank, 1972), but there are also proceduralists who take the contrary point of view (Winograd, 1974). I have dealt elsewhere with certain aspects of the controversy (Johnson-Laird, 1977b), but let me here take up one of Fodor’s specific points without prejudice to the general issue of the viability of procedural semantics. Fodor argues that “the vocabulary of a language like English is probably not much larger than it needs to be given the expressive power of the language”, and that is why there are so few good examples of definitions of English words. (Spare a thought for Dr. Johnson, Noah Webster, and Sir James Murray, rotating like lathes in their graves.) In fact, the most immediate argument for the feasibility of semantic reduction is the existence of Basic English (Ogden, 1930): with a vocabulary of about 850 words, it is possible to say almost all of what one wants to say, and other words, if need be, can be defined in Basic English. Of course, a word accretes a variety of literary, historical, or scientific connotations, but if these expressive elements are excluded as irrelevant to truth conditions, then it is plain that English could be shorn of defenestration, sexagenarian, privateer, triturate, eleemosynary, renitent, macarize, dilucidate, stiver, defluxion, abjudge, statutable, toise, argute, tritical, tabid, periapt, covin, iracund, obstipation, stridulous, gummous, and thousands of other words, with no loss. Moreover, if you
want to know what such words mean, or are in search of good definitions, then turn to the works of Johnson, Webster, and Murray, and their successors. Indeed, the meanings of many words can often be acquired only from dictionary definitions. Some words are easy to define, other words are extremely difficult to define without falling back on synonyms, ostension, examples of usage, or vicious circles (see Johnson-Laird and Quinn, 1976). This division of vocabulary appears to be one of the central features of the lexicon, and it is naturally accounted for by theories that decompose meanings into semantic primitives. Words that are close to expressing an unadorned primitive notion are hard to define, whereas words that express some combination of primitive notions are easy to define in terms of words for them. It is difficult to resist the conclusion that English contains a larger vocabulary than it strictly needs in order to meet the criteria of classical semantics (of which Fodor is so ardent an advocate).

Procedural semantics confuses semantic theories with theories of sentence comprehension, but even here its pay-off has been negligible. As Fodor makes abundantly clear, he demands (like the philosopher Donald Davidson, 1967) that a semantic theory should relate language directly to the world. Proceduralists, claims Fodor, have made no contribution to such an account, but talk as though they had. It is perfectly true that procedural semantics is not an exercise in relating language directly to reality: what would the computer be doing but getting in the way, interposing itself between them, if it were? But, the claim that proceduralists forget they are in the business of establishing theories of internal representations strikes me as a contrived fiction - I hear the sound of coconut shells struck together as Grandmama rides off on her latest hobby-horse. In fact, many proceduralists doubt whether the Davidsonian philosophy for a semantics of natural language is feasible. They are not alone. Certain philosophers have argued that linguistic expressions do not refer except in a derivative sense: it is people who refer, and they may do so by using linguistic expressions (see, e.g., Strawson, 1950; Austin, 1962; Searle, 1969). When Fodor talks of expressions as referring to objects and criticizes procedural semantics (along with all other theories) for failing to reconstruct this relation, he is perhaps talking of a derivative relation. It is possible that no account of this relation will be forthcoming until a satisfactory theory of mental representations is developed. It is possible that such a theory of mental representations would render the derivative theory otiose. It is even possible that language cannot be usefully related to the world without an intervening mental representation. Is Fodor after all a reconstructed Behaviorist? As Steve Isard remarks in his unpublished reply, it fair takes the breath away to have one of the authors of “The structure of a semantic theory”
coolly toss off “if what you mean by a semantic theory is an account of the relation between language and the world...” as if no other possibility had ever entered his head. The reader may recall that in that paper, Katz and Fodor (1963) argued that the only possible treatment of the effects of linguistic context on the interpretation of a sentence is one in which “discourse is treated as a single sentence in isolation by regarding sentence boundaries as sentential connectives”. They made no suggestions as to how such a theory would work other than claiming that the great majority of sentence breaks could be treated as and-conjunctions. I mention this old argument, which perhaps Fodor no longer subscribes to, simply to try to rebut his charge that procedural semantics has had a negligible pay-off even as a theory of comprehension. Proceduralists have in fact shown how linguistic context affects the construction and interpretation of referring expressions, how general knowledge can be used to disambiguate a sentence in context, and how the choice of such matters as tense and connectives is affected by contextual considerations (see Johnson-Laird, 1977a, for the references). They have made progress in an area that Fodor once wrote off as impossible. A disinterested reader may yet regard this pay-off as negligible, but how can Fodor?

Conclusion

I have shown that Fodor’s critique is essentially an argument against a position that is not held. There remains only a single mystery: from what did he derive his misguided notion of procedural semantics? It is tempting to reply with a Johnsonian: “Ignorance, ma’am, pure ignorance”. But, after an exercise of some scholarship, I have located the source. It is in a work from which I now quote liberally:

The only psychological models of cognitive processes that seem even remotely plausible represent such processes as computational. But, I think, nevertheless, that the core of the empiricist theory of perception is inevitable. In particular, the following claims about the psychology of perception seem to be almost certainly true and entirely in the spirit of empiricist theorizing: (1) Perception typically involves hypothesis formation and confirmation. (2) The sensory data which confirm a given perceptual hypothesis are typically internally represented in a vocabulary that is impoverished compared to the vocabulary in which the hypotheses themselves are couched.

...what happens when a person understands a sentence must be a translation process basically analogous to what happens when a machine ‘understands’ (viz., compiles) a sentence in a programming language. I shall try to show ... that there are broadly empirical grounds for taking this sort of model seriously.
It may be that complex concepts (like, say, ‘airplane’) decompose into simpler concepts (like ‘flying machine’). We shall see ... that this sort of view is quite fashionable in current semantic theories; indeed, some or other version of it has been around at least since Locke. But it may be true for all that, and if it is true it may help.

...a compiler which associates each formula in the input language with some formula in the computing language can usefully be thought of as providing a semantic theory for the input language... On the present account then, it would be plausible to think of a theory of meaning for a natural language (like English) as a function which carries English sentences into their representations in the putative internal code.
There they all are - the computational metaphor, the compiler as semantic theory, the sensation language of empiricism - all the notions that Fodor castigates unsparingly. And who is this benighted author? Why, none other than the Procedural Grandmother of them all, Jerry Fodor (1976). Fodor refutes himself, not procedural semantics.
References

Davidson, D. (1967) Truth and meaning. Synthese, 17, 304-323.
Davies, D. J. M. and Isard, S. D. (1972) Utterances as programs. In D. Michie (ed.), Machine Intelligence, 7, Edinburgh University Press, Edinburgh.
Fodor, J. A. (1968) Psychological Explanation: An Introduction to the Philosophy of Psychology, Random House, New York.
Fodor, J. A. (1976) The Language of Thought, Harvester, Sussex.
Fodor, J. A. (1978) Tom Swift and his Procedural Grandmother. Cog., 6, 229-247.
Johnson-Laird, P. N. (1977a) Procedural semantics. Cog., 5, 189-214.
Johnson-Laird, P. N. (1977b) Psycholinguistics without linguistics. In N. S. Sutherland (ed.), Tutorial Essays in Psychology, Vol. 1, Erlbaum, Hillsdale, N.J.
Johnson-Laird, P. N. and Quinn, J. G. (1976) To define true meaning. Nature, 264, 635-636.
Longuet-Higgins, H. C. (1972) The algorithmic description of natural language. Proc. Roy. Soc. Lond. B, 182, 255-276.
Marr, D. and Nishihara, H. K. (1976) Representation and recognition of the spatial organization of three-dimensional shapes. MIT AI Laboratory Memorandum 377, Cambridge, Mass.
Miller, G. A. and Johnson-Laird, P. N. (1976) Language and Perception, Harvard University Press, Cambridge, Mass.
Ogden, C. K. (1930) Basic English. Kegan Paul, Trench, Trubner, London.
Putnam, H. (1975) Philosophy and our mental life. Philosophical Papers, Vol. 2: Mind, Language and Reality, Cambridge University Press, Cambridge.
Robbin, J. W. (1969) Mathematical Logic: A First Course, Benjamin, New York.
Schank, R. C. (1972) Conceptual dependency: a theory of natural language understanding. Cog. Psychol., 3, 552-631.
Scott, D. and Strachey, C. (1971) Toward a mathematical semantics for computer languages. Proceedings of the Symposium on Computers and Automata, Polytechnic Institute of Brooklyn, April, 1971.
Waltz, D. L. (1975) Understanding line drawings of scenes with shadows. In P. H. Winston (ed.), The Psychology of Computer Vision, McGraw-Hill, New York.
Winograd, T. (1971) Procedures as a Representation for Data in a Computer Program for Understanding Natural Language. MIT AI Laboratory, Report AI-TR-17, Cambridge, Mass.
Winograd, T. (1974) Five lectures on artificial intelligence. Artificial Intelligence Laboratory, Memo AIM No. 240, Stanford University, California.
Woods, W. A. (1973) Meaning and machines. Proceedings of the Int. Conf. on Computational Linguistics, Pisa, Italy.
Woods, W. A. (1975) What’s in a link: foundations for semantic networks. In D. G. Bobrow and A. Collins (eds.), Representation and Understanding: Studies in Cognitive Science, Academic Press, New York.
Woods, W. A. (1978) Procedural semantics as a theory of meaning. Paper delivered at the Sloan Workshop on Computational Aspects of Linguistic Structure and Discourse Setting, May, 1978, University of Pennsylvania.