Discourse, Vision, and Cognition
Human Cognitive Processing (HCP)
Human Cognitive Processing is a book series presenting interdisciplinary research on the cognitive structure and processing of language and its anchoring in the human cognitive or mental systems.
Editors
Marcelo Dascal, Tel Aviv University
Raymond W. Gibbs, Jr., University of California at Santa Cruz
Jan Nuyts, University of Antwerp

Editorial Advisory Board
Melissa F. Bowerman, Nijmegen
Wallace Chafe, Santa Barbara, CA
Philip R. Cohen, Portland, OR
Antonio Damasio, Iowa City, IA
Morton Ann Gernsbacher, Madison, WI
David McNeill, Chicago, IL
Eric Pederson, Eugene, OR
François Recanati, Paris
Sally Rice, Edmonton, Alberta
Benny Shanon, Jerusalem
Lokendra Shastri, Berkeley, CA
Paul Thagard, Waterloo, Ontario

Volume 23
Discourse, Vision, and Cognition
by Jana Holšánová

Discourse, Vision, and Cognition
Jana Holšánová
Lund University

John Benjamins Publishing Company
Amsterdam / Philadelphia
The paper used in this publication meets the minimum requirements of American National Standard for Information Sciences – Permanence of Paper for Printed Library Materials, ANSI Z39.48-1984.
Library of Congress Cataloging-in-Publication Data

Holšánová, Jana.
  Discourse, vision, and cognition / Jana Holšánová.
  p. cm. (Human Cognitive Processing, ISSN 1387-6724; v. 23)
  Includes bibliographical references and index.
  1. Discourse analysis. 2. Oral communication. 3. Psycholinguistics. 4. Visual communication. I. Title.
  P302.H635 2008
  401'.41--dc22
  2007049098
  ISBN 978 90 272 2377 7 (Hb; alk. paper)
© 2008 – John Benjamins B.V.
No part of this book may be reproduced in any form, by print, photoprint, microfilm, or any other means, without written permission from the publisher.
John Benjamins Publishing Co. · P.O. Box 36224 · 1020 ME Amsterdam · The Netherlands
John Benjamins North America · P.O. Box 27519 · Philadelphia PA 19118-0519 · USA
Table of contents

Preface

Chapter 1. Segmentation of spoken discourse
  1. Characteristics of spoken discourse
    1.1 Transcribing spoken discourse
  2. Segmentation of spoken discourse and ‘cognitive rhythm’
    2.1 Verbal focus and verbal superfocus
  3. Segmentation rules
  4. Perception of discourse boundaries
    4.1 Focus and superfocus
    4.2 Other discourse units
  5. Conclusion

Chapter 2. Structure and content of spoken picture descriptions
  1. Taxonomy of foci
    1.1 Presentational function
      1.1.1 substantive foci
      1.1.2 substantive foci with categorisation difficulties
      1.1.3 substantive list of items
      1.1.4 summarising foci
      1.1.5 localising foci
    1.2 Orientational function
      1.2.1 evaluative foci
      1.2.2 expert foci
    1.3 Organisational function
      1.3.1 interactive foci
      1.3.2 introspective foci & metacomments
  2. How general are these foci in other settings?
    2.1 Presentational function
      2.1.1 substantive foci (subst)
      2.1.2 substantive foci with categorisation difficulties
      2.1.3 subst list of items
      2.1.4 summarising foci (sum)
      2.1.5 localising foci (loc)
    2.2 Orientational function
      2.2.1 evaluative foci (eval)
      2.2.2 expert foci (expert)
    2.3 Organisational function
      2.3.1 interactive foci (interact)
      2.3.2 introspective foci & metacomments (introspect/meta)
    2.4 Length of description
  3. Conclusion

Chapter 3. Description coherence & connection between foci
  1. Picture description: Transitions between foci
    1.1 Means of bridging the foci
      1.1.1 Discourse markers
      1.1.2 Loudness and voice quality
      1.1.3 Localising expressions
  2. Illustrations of description coherence: Spontaneous description & drawing
    2.1 Transitions between foci
    2.2 Semantic, rhetorical and sequential aspects of discourse
    2.3 Means of bridging the foci
  3. Conclusion

Chapter 4. Variations in picture description
  1. Two different styles
    1.1 The static description style
    1.2 The dynamic description style
    1.3 Cognitive, experiential and contextual factors
  2. Verbal or visual thinkers?
  3. Effects of narrative and spatial priming on the two styles
    3.1 The effect of (indirect) spatial priming
    3.2 The effect of narrative priming
  4. Conclusion

Chapter 5. Multimodal sequential method and analytic tool
  1. Picture viewing and picture description: Two windows to the mind
    1.1 Synchronising verbal and visual data
    1.2 Focusing attention: The spotlight metaphor
    1.3 How it all began …
  2. Simultaneous description with eye tracking
    2.1 Characteristics of the picture: Complex depicted scene
    2.2 Characteristics of the spoken picture description: Discourse level
    2.3 Multimodal score sheets
      2.3.1 Picture viewing
      2.3.2 Picture description
  3. Multimodal sequential method
  4. Conclusion

Chapter 6. Temporal correspondence between verbal and visual data
  1. Multimodal configurations and units of visual and verbal data
    1.1 Configurations within a focus
      1.1.1 Perfect temporal and semantic match
      1.1.2 Delay between the visual and the verbal part
      1.1.3 Triangle configuration
      1.1.4 N-to-1 mappings
      1.1.5 N-to-1 mappings, during pauses
      1.1.6 N-to-1 mappings, rhythmic re-examination pattern
    1.2 Configurations within a superfocus
      1.2.1 Series of perfect matches
      1.2.2 Series of delays
      1.2.3 Series of triangles
      1.2.4 Series of N-to-1 mappings
      1.2.5 Series of N-to-N mappings
    1.3 Frequency distribution of configurations
  2. Functional distribution of multimodal patterns
  3. Discussion: Comparison with psycholinguistic studies
  4. Conclusion

Chapter 7. Semantic correspondence between verbal and visual data
  1. Semantic correspondence
    1.1 Object–location relation
    1.2 Object–path relation
    1.3 Object–attribute relation
    1.4 Object–activity relation
  2. Levels of specificity and categorisation
  3. Spatial, semantic and mental groupings
    3.1 Grouping concrete objects on the basis of spatial proximity
    3.2 Grouping multiple concrete objects on the basis of categorical proximity
    3.3 Grouping multiple concrete objects on the basis of the composition
    3.4 Mental zooming out, recategorising the scene
    3.5 Mental grouping of concrete objects on the basis of similar traits and activities
    3.6 Mental grouping of concrete objects on the basis of an abstract scenario
  4. Discussion
    4.1 The priming study
    4.2 Dimensions of picture viewing
    4.3 Functions of eye movements
    4.4 The role of eye fixation patterns
    4.5 Language production and language planning
  5. Conclusion

Chapter 8. Picture viewing, picture description and mental imagery
  1. Visualisation in discourse production and discourse comprehension
  2. Mental imagery and descriptive discourse
    2.1 Listening and retelling
      2.1.1 Measuring spatial and temporal correspondence
      2.1.2 Eye voice latencies
    2.2 Picture viewing and picture description
  3. Discussion: Users’ interaction with multiple external representations
  4. Conclusion

Chapter 9. Concluding chapter

References
Author index
Subject index
Preface
While there is a growing body of psycholinguistic experimental research on mappings between language and vision on a word and sentence level, there are almost no studies on how speakers perceive, conceptualise and spontaneously describe a complex visual scene on higher levels of discourse. This book aims to fill this gap by exploring the relationship between language, vision and cognition in spoken discourse. In particular, I investigate the dynamic process of picture discovery, the process of picture description and the cognitive processes underlying both. What do we attend to visually when we describe something verbally? How do we construct meaningful units of a scene during scene discovery? How do we describe a scene on different occasions for different purposes? Are there individual differences in the way we describe and visualise a scene? The points of departure in this book are the complex units in spoken language description and the visual units in the process of picture viewing. Observers perceive the picture on different levels of detail, mentally group the elements in a particular way and interpret both WHAT they see and HOW the picture appears to them. All this is reflected in the stepwise process of picture viewing and picture description. The transcripts of spoken discourse – including prosody, pausing, interactional and emotional aspects – contain not only ideas about concrete objects, their shapes, qualities and spatial relations but also the describers’ impressions, associations and attitudes towards them. This enables us to study the complex dynamics of thought processes going on during the observation and description of a scene. In this book, I combine discourse analysis with cognitively oriented research and eye movement protocols in order to gain insights into the dynamics of the underlying cognitive processes. On the one hand, eye movements reflect human thought processes. It is easy to determine which elements attract the observer’s gaze, in what order and how often. On the other hand, verbal foci formulated during picture description are the linguistic expressions of a conscious focus of attention. With the help of a segmented transcript of the discourse, we can learn what is in the verbal focus at a given time. Thus, verbal
and visual data (the contents of the attentional spotlight) are used as two windows to the mind. Both kinds of data are indirect sources that shed light on the underlying cognitive processes. It is, of course, impossible to directly uncover our cognitive processes. If we want to learn about how the mind works, we have to do it indirectly, via overt manifestations. The central question is: Can spoken language descriptions and eye movement protocols, in concert, elucidate covert mental processes? To answer this question, we will proceed in the following steps: Chapter 1 presents a segmentation of spoken discourse and defines units of speech – verbal focus and verbal superfocus – expressing the contents of active consciousness and providing a complex and subtle window on the mind. Chapter 2 focuses on the structure and content of spoken picture descriptions. It describes and illustrates various types of foci and superfoci extracted from picture descriptions in various settings. Chapter 3 takes a closer look at how speakers create coherence when connecting the subsequent steps in their picture descriptions. Chapter 4 discusses individual description styles. While the first four chapters of this book deal with characteristics of picture descriptions in different settings, in the remaining four chapters the perspective is broadened to that of picture viewing. These chapters explore the connection between spoken descriptive discourse, picture viewing and mental imagery. The discourse segmentation methodology is therefore extended into a multimodal scoring technique for picture description and picture viewing, leading up to an analysis of correspondence between verbal and visual data. Chapter 5 deals with methodological questions, and the focus is on sequential and processual aspects of picture viewing and picture description. The reader gets acquainted with the multimodal method and the analytical tools that are used when studying the correspondence between verbal and visual data. In Chapters 6 and 7, the multimodal method is used to compare the content of the visual focus of attention (specifically clusters of visual fixations) and the content of the verbal focus of attention (specifically verbal foci and superfoci) in order to find out whether units of picture viewing and simultaneous picture description correspond. Both temporal and semantic relations between the verbal and visual data are investigated. Finally, clusters on different levels of the discourse hierarchy are connected to certain functional sequences in the visual data. Chapter 8 focuses on the issue of visualisations in discourse production and discourse comprehension and presents studies on mental imagery associated with picture viewing and picture description.
The concluding Chapter 9 looks back on the most important issues and findings in the book and mentions implications of the multimodal approach for other fields of research, including evaluation of design, users’ interaction with multiple representations, multimodal systems, etc. This book addresses researchers with a background in linguistics, psycholinguistics, psychology, cognitive science and computer science. The book is also of interest to scholars working in the applied area of usability and in the interdisciplinary field concerned with cognitive systems involved in language use and vision.
Acknowledgements

I wish to thank Professor Wallace Chafe, University of California, Santa Barbara, for inspiration, encouragement and useful suggestions during my work. I also benefited from discussions and criticism raised by doctoral students, researchers and guest researchers during the eye tracking seminars at the Humanist Laboratory and Cognitive Science Department at Lund University: Thanks to Richard Andersson, Lenisa Brandão, Philip Diderichsen, Marcus Nyström and Jaana Simola. Several colleagues have contributed by reading and commenting on the manuscript. In particular, I want to thank Roger Johansson and Kenneth Holmqvist for their careful reading and criticism. I also wish to thank the anonymous reviewer for her/his constructive suggestions. Finally, thanks are due to my family: to our parents, my husband and my children Fredrik and Annika. The work was supported by the post-doctoral fellowship grant VR 2002–6308 from the Swedish Research Council.
chapter 1
Segmentation of spoken discourse
When we listen to spontaneous speech, it is easy to think that the utterances form a continuous stream of coherent thought. Not until we write down and analyse the speech do we realise that it consists of a series of small units. The speech flow is segmented, containing many repetitions, stops, hesitations and pauses. Metaphorically, one could say that speech progresses in small discontinuous steps. In fact, it is the listeners who creatively ‘fill in’ what is missing or ‘filter out’ what is superfluous and in this way construe ‘continuous’ speech. The stepwise structure of speech can give us many clues about cognitive processes. It suggests units for planning, production, perception and comprehension. In other words, the flow of speech reflects the flow of thoughts.
The aim of this chapter is to present a segmentation of spoken discourse and to define units of speech – verbal focus and verbal superfocus. This will be used as a starting point for further analysis of spoken descriptive discourse (Chapters 2–4) and for the comparison of spoken and visual data (Chapters 5–8). First, we will get acquainted with the content and form of transcriptions and important conventions for easier reading of the examples. Second, we will address the issue of discourse segmentation and deal with verbal focus and verbal superfocus as the most important units in spoken descriptive discourse. Third, we will formulate segmentation rules and operationalise the definitions for segmentation of spoken data. Fourth, we will consider perception of discourse boundaries on different levels of discourse.
1. Characteristics of spoken discourse
Spontaneous speech tends to be produced in spurt-like portions, not in well-formulated sentences. Therefore, transcription has to be performed in spoken
language terms (Linell 1994: 2) and include such features as pauses, hesitations, unintelligible speech, interruptions, restarts, repeats, corrections and listener feedback. All these features are typical of speech and give us additional information about the speaker and the context. The first example stems from a data set on picture descriptions in Swedish and illustrates some of the typical features of spoken discourse.

Example 1
0851    ehmm’ till höger på bilden’
        ehmm’ to the right in the picture’
→ 0852  så finns de/ ja juste,
        there are/ oh yes,
→ 0853  framför framför trädet’
        in front in front of the tree’
→ 0854  så e det . gräs,
        so there is . grass,
→ 0855  eh och till höger om det här trädet’
        eh and to the right of this tree’
0856    så har vi återigen en liten åker-täppa eller jord,
        so again we have a little field or piece of soil,
The example above illustrates the main characteristics of spontaneous speech, i.e. how we proceed in brief spurts and formulate our message in portions. After some hesitation the informant introduces the right-hand side of the picture (0851). He then interrupts the description – seemingly reminding himself of something else – and turns to another part of the picture. Only after he has described the region in front of the tree (0853–0854) does he return to the right-hand side of the picture again (0855). Such interruptions, jumps and digressions are very common in spoken discourse (see Chapter 3 for details). Speakers are not always consistent and disciplined enough to exhaust one idea or aspect completely before they turn to the next one. Besides formulating ideas on the main idea track, there are often steps aside. It is possible to abandon a certain aspect for a while although it is not completed in the description, turn to other aspects, and then again return to what was not fully expressed. Often, this is done explicitly, by marking the steps back to the main track. It could also be that something has been temporarily forgotten but gets retrieved (I saw … three eh four persons in the garden). The relation between discourse production and the visual behaviour during pauses and hesitations will be considered in later chapters (4 and 7).
As we turn our ideas into speech, the match between what we want to say and what we actually do say is rarely a perfect one. The stepwise production of real-time spoken discourse is associated with pauses, stutterings, hesitations, contaminations, slips of the tongue, speech errors, false starts, and verbatim repetitions. Within psycholinguistic research, such dysfluencies in speech have been used as a primary source of data since they allow us insights into the actual process of language production. These kinds of dysfluencies have been explored in order to reveal planning, execution and monitoring activities on different levels of discourse (Garrett 1980; Linell 1982; Levelt 1983; Strömqvist 1996). Goldman-Eisler (1968) has shown that pauses and hesitation reflect planning and execution in tasks of various cognitive complexity. Pausing and hesitation phenomena are more frequent in demanding tasks like evaluating (as opposed to simply describing), as well as at important cognitive points of transition when new or vital pieces of information appear. Also, the choice points of ideational boundaries are associated with a decrease in speech fluency. Errors in speech production give us some idea of the units we use. Slips of the tongue can appear either on a low level of planning and production, as an anticipation of a following sound (he dropped his cuff of coffee), or on higher levels of planning and production (Thin this slicely instead of Slice this thinly; Kess 1992). Garrett (1975, 1980) suggests that speech errors provide evidence for two levels of planning: semantic planning across clause boundaries (on a ‘functional level’) and grammatical planning within the clause range (on a ‘positional level’). Along similar lines, Levelt (1989) assumes macroplanning (i.e. elaboration of a communicative goal) and microplanning (decisions about the topic or focus of the utterance etc.). Linell (1982) distinguishes two phases of utterance production: the construction of an utterance plan (a decision about the semantic and formal properties of the utterance) and the execution of an utterance plan (the pronunciation of the words) (see also Clark & Clark 1977).
1.1 Transcribing spoken discourse
In order to analyse spoken discourse, it needs to be transcribed using a notation system. Let me now mention some practical matters concerning the transcripts that the reader will find in this book. Each numbered line represents a new idea or ‘focus of thought’. The transcription includes (a) verbal features, (b) prosodic/acoustic features (intonation, rhythm, tempo, pauses, stress, voice quality, loudness), and (c) non-verbal features (gestures, laughter etc.). The arrow in front of a unit means that this unit is discussed and explained in the presentation. The first two numbers identify the informant (04); the following two or three numbers identify the speech units, in order. The transcribed speech is not adapted to the syntax and punctuation of written language. It also includes forms that do not exist in written language (uhu, mhm, ehh) – apart, perhaps, from instant messaging. The data have been translated into spoken English. Compared to the orthography of written language, some spoken English language forms are used in the transcript (unless the written pronunciation is used): he’s, there’s, don’t. For easier reading, Table 1 summarises the symbols that are used to represent speech and events in the transcription.

Table 1. Transcription symbols
.            short pause
…            longer pause
(3s)         measured pause longer than 2 seconds
eh           hesitation
yes,         falling intonation
yes’         rising intonation
tree         stress
(faster)     paralinguistic feature: tempo, voice quality, loudness
LAUGHS       non-verbal features: laughter, gestures
one of/      interrupted construction
xxx          unintelligible speech
2. Segmentation of spoken discourse and ‘cognitive rhythm’
Information flow in discourse is associated with dynamic changes in language and thought: an upcoming utterance expresses a change in the speaker’s and listener’s information states. Chafe (1994) presents a consciousness-based approach to information flow and compares it with other approaches that do not account for consciousness, such as the Czech notion of functional sentence perspective (Mathesius 1939), communicative dynamism (Firbas 1979), Halliday’s functional grammar (Halliday 1985), Clark & Haviland’s given-new contract (Haviland & Clark 1974), Prince’s taxonomy of given-new information (Prince 1981) and Givón’s view of grammar as mental processing instructions (Givón 1990). For quite a long time, linguists, psycholinguists and psychologists have been discussing the basic structuring elements in speech and spoken discourse. There are different traditions regarding what the basic discourse structuring
elements are called: idea units, intonation units, information units, information packages, phrasing units, clauses, sentences, utterances etc. Many theories within the research on discourse structure agree that we can focus on only small pieces of information at a time, but there are different opinions of what size these information units should have. In the following, I will define two units of spoken discourse and formulate segmentation rules. But first, let us look at the next example. Example 2 illustrates the segmentation of spoken discourse. In a spoken description of a picture (cf. Figure 1), one aspect is selected and focused on at a time and the description proceeds in brief spurts. These brief units that speakers concentrate on one at a time express the focus of active consciousness. They contain one primary accent, a coherent intonation pattern, and they are often preceded by a hesitation, a pause or a discourse marker signalling a change of focus. Note that the successive ideas expressed in units 0404–0410 are in six cases marked and connected by the discourse markers och då, sen så, å sen, å så, å då (and then, then, and then, and so, and then).

Figure 1. The motif is from Nordqvist (1990), Kackel i grönsakslandet, Opal. (Translated as ‘Festus and Mercury: Ruckus in the garden.’)

Example 2
0402    de e Pettson och katten Findus i olika skepnader då, ja’
        well it’s Pettson and Findus the cat in different versions,
0403    och Pettson gräver’
        and Pettson is digging’
→ 0404  . och då har han hittat nånting som han håller i handen,
        . and he’s found something that he’s holding in his hand,
→ 0405  sen så . fortsätter han å gräva’
        and then . he continues digging’
→ 0406  å sen så räfsar han’
        and then he rakes’
→ 0407  med hjälp av katten Findus’
        with the help of Findus the cat’
→ 0408  å . så ligger han på knä å . sår frön’
        and . then he’s lying on his knees and . sowing seeds’
0409    (snabbare) å då hjälper honom Findus med nån snillrik vattenanordning, SKRATTAR
        (faster) and then Findus the cat is helping him with a clever watering system, LAUGHS
→ 0410  å sen … ligger katten Findus och vilar,
        and then … Findus the cat is lying down having a rest,
2.1 Verbal focus and verbal superfocus

In the later chapters on picture viewing and picture description (Chapters 5–9), I will use verbal focus and visual focus as ‘two windows to the mind’. These activated units in verbal and visual data will be used as a protocol to throw light on the underlying cognitive processes (Holsanova 1996). They will be analysed by using a multimodal sequential method (see Chapter 5, Sections 2.3 and 3). Thus, for the purpose of my studies, I am interested in units that reflect human attention, or, more precisely, the stage of attentional activation and active consciousness. Concerning the flow of information and spoken language units, I will address the form, content and limitation criteria of the units that I call ‘verbal focus’ and ‘verbal superfocus’. Let us start with Example 3, which illustrates a verbal focus. Each numbered line in the transcript represents a new verbal focus expressing the content of active consciousness.

Example 3
→ 0203  det finns ett träd i mitten’
        there is a tree in the middle’

A verbal focus denotes the linguistic expression of a conscious focus of attention. Verbal focus is the basic discourse-structuring element that contains activated information. It contains the information that is currently conceived of as central in a speech unit. A verbal focus is usually a phrase or a short clause,
delimited by prosodic, acoustic features and lexical/semantic features. It has one primary accent, a coherent intonation contour and is usually preceded by a pause, hesitation or a discourse marker (Holsanova 2001: 15f.). How does verbal focus relate to other units suggested in the literature? Similar to Chafe’s delimitation of an intonation unit, it implies that one new idea is formulated at a time (Chafe 1994) and that focused information is replaced by another idea at short intervals of approximately two seconds (Chafe 1987). Units similar to the intonation unit have been discussed in Anglo-Saxon spoken language research (compare ‘tone unit’, Crystal 1975, ‘intonation phrase’, Halliday 1970, ‘prosodic phrase’, Bruce 1982; Levelt 1989: 307f.). Selting (1998: 3) and her research colleagues introduced the concept ‘phrasing unit’ to account for the spoken language units produced in conversation. Van Donzel (1997, 1999) analyses the semantic and acoustic/prosodic clues for recognising spoken language units on different levels of discourse hierarchy. Other researchers concluded on the basis of their empirical studies that the clause is the basic structuring unit in spoken language (Bock et al. 2004; Griffin 2004; Kita & Ozyürek 2003). Already in 1980, Chafe suggested that we express one idea at a time. The reason for discontinuous jumping in small idea units lies in the limitations of human cognitive capacities. We receive a large amount of information from our perception, our emotions and our memory, but we can fully activate only a part of the accessible information. According to our present needs, interests and goals, we concentrate our attention on small parts, one at a time. The small unit of discourse (called idea unit or intonation unit) expresses the conscious focus of attention. Such a unit consists of that amount of information to which persons can devote their central attention at a time. The expression of one new idea at a time reflects a fundamental temporal constraint on the mind’s processing of information. I will also use the term idea unit but in another sense than Chafe and others. Instead of using it as a linguistic manifestation, ‘idea unit’ in my study will be the mental unit that is the object of investigation. In a schematic way, Figure 2 shows the relation between verbal focus, visual focus and ‘idea unit’ (see Chapter 5, Section 1.2). At the same time, attention is conceived of as a continuum of active, semiactive and inactive foci (Chafe 1994). This means that the smallest units of discourse that are at the focus of attention are embedded in larger, thematic units of discourse. These larger units, serving as a background, are only semiactive.
Figure 2. Verbal and visual focus.
We are coming closer to the question of macroplanning in discourse. Example 4, lines 0313–0316, illustrates the verbal superfocus.

Example 4
→ 0313  (1s) en fågel sitter på sina ägg’ i ett rede’
        (1s) one bird is sitting on its eggs’ in a nest’
→ 0314  (1s) och den andra fågeln (SKRATTAR) verkar sjunga’
        (1s) and the other bird (LAUGHING) is singing’
→ 0315  samtidigt som en . tredje . hon-fågeln liksom
        at the same time as the . third . like female bird’
→ 0316  (1s) piskar matta eller nånting sånt,
        (1s) is beating a rug or something,
Several verbal foci are clustered into a more complex unit called verbal superfocus. Verbal superfocus is a coherent chunk of speech that surpasses the verbal focus and is prosodically finished. This larger discourse segment, typically a longer utterance, consists of several foci connected by the same thematic aspect and has a sentence-final prosodic pattern (often a falling intonation). A new superfocus is typically preceded by a long pause and a hesitation, which reflects the process of refocusing attention from one picture area to another. Alternatively, transitions between superfoci are made with the aid of discourse markers, acceleration, voice quality, tempo and loudness. An inserted description or comment uttered in another voice quality (creaky voice or dialect-imitating voice), deviating from its surroundings can stretch over several verbal foci and form a superfocus. The acoustic features simplify the perception of superfoci borders. The referents often remain the same throughout the superfocus, only
some properties change. (Chafe (1979, 1980) found significantly longer pauses and more hesitation signals at the borders between superfoci than at the borders between verbal foci.) When it comes to the size of superfoci, they often correspond to long utterances. Superfoci can be conceived of as new complex units of thought. According to Chafe (1994), these larger units of discourse (called centres of interest) represent thematic units of speech that are guided by our experience, intellect and judgements, and thus correspond roughly to scripts or schemata (Schank & Abelson 1977). Superfoci are in turn parts of even larger units of speech, called discourse topics. In our descriptive discourse, examples of such larger units would be units guided by the picture composition (in the middle …, on the left …, on the right …, in the background …) or units guided by the conventions in descriptive discourse (impression, general overview-detailed description, description-evaluation). In Chafe’s terminology, these bigger units are called basic level topics or discourse topics, verbalising the content of the semiactive consciousness. We usually keep track of the main ideas expressed in these larger units. Table 2 summarises all the building blocks in the hierarchy of discourse production: foci, superfoci and discourse topics. Let me briefly mention other suggested units of speech and thought. Butterworth (1975) and Beattie (1980) speak about hesitant and fluent phases in speech production. Butterworth asked informants to segment monologic discourse into ‘idea units’ that were assumed to represent informal intuitions about the semantic structuring of the monologues. The main result of his studies was that borders between idea units coincided with the hesitations in the speech production, suggesting that the successive ideas are planned during these periods and formulated during the subsequent fluent periods (Butterworth 1975: 81, 83). Beattie (1980) found a cyclic arrangement of hesitant and fluent phases in the flow of language. Schilperoord & Sanders (1997) analyse cyclic pausing patterns in discourse production and suggest that there is a ‘cognitive rhythm’ in our discourse segmentation that reflects the gross hierarchical distinction between ‘global’ and ‘local’ transitions. Yet another tradition in psycholinguistics is to speak about information units or information packages and look at how information is packaged and received in small portions that are ‘digestible’, in a certain rhythm that gives the recipient regular opportunities for feedback (Strömqvist 1996, 1998; Strömqvist et al. 2004; Berman & Slobin 1994). Next, I will specify the set of rules that I used for segmentation of spoken descriptive discourse.
Table 2. The discourse hierarchy and the building blocks in free picture description

Discourse topics (comparable to paragraphs)
Examples: Composition; Evaluations; Associations; Impressions

Superfoci: long utterances, clauses dealing with the same topic
Example: Pettson is first digging in the field’ and then he’s sowing quite big white seeds’ and then the cat starts watering these particular seeds’ and it ends up with the fellow eh raking this field,

Foci: phrases, clauses (noun phrases grouped on semantic or functional grounds), short utterances
Examples: in the background; on the left; Pettson is digging; in the middle is a tree

In the sample description, the discourse topics – Impression of the whole picture, Global overview-detailed description, Description-evaluation, In the middle, To the left, To the right, In the background, etc. – are each realised by one or more superfoci (Superfocus 1 … Superfocus 11, etc.), and each superfocus in turn consists of one or more foci (Focus 1 … Focus 14, etc.).
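The hierarchy in Table 2 is, in effect, a nested data structure: foci nest within superfoci, which nest within discourse topics. As a minimal illustration (not part of the original book – the class names and the example instance are my own), the nesting could be written as:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Focus:
    """A phrase or short clause expressing one idea at a time."""
    text: str

@dataclass
class Superfocus:
    """A long utterance: several foci connected by the same thematic aspect."""
    foci: List[Focus] = field(default_factory=list)

@dataclass
class DiscourseTopic:
    """A larger thematic unit, e.g. 'In the middle' or 'Description-evaluation'."""
    label: str
    superfoci: List[Superfocus] = field(default_factory=list)

# A free picture description is then a sequence of discourse topics:
description: List[DiscourseTopic] = [
    DiscourseTopic("Impression of the whole picture",
                   [Superfocus([Focus("it looked like a happy picture,")])]),
    DiscourseTopic("In the middle",
                   [Superfocus([Focus("in the middle there was a tree’"),
                                Focus("the lower . part of the tree was crooked’")])]),
]
```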
3. Segmentation rules
The discussions on the segmentation of speech have often dealt with the difficulty of clarifying and exactly delimiting such units. Chafe (1987) mentions the functional properties of the smallest unit in discourse: “An intonation unit is a
sequence of words combined under a single, coherent contour, usually preceded by a pause. (…) Evidently, active information is replaced by other, partially different information at approximately two seconds intervals” (Chafe 1987: 22). Later on, Chafe (1994) states: “The features that characterize intonation units may involve any or all of the following: changes in fundamental frequency (perceived as pitch), changes in duration (perceived as the shortening or lengthening of syllables or words), changes in intensity (perceived as loudness), changes in voice quality of various kinds, and sometimes changes of turn” (Chafe 1994: 58). The following questions arose when I was trying to operationalise these segmentation criteria on my data: Are all the features of equal value or is there a hierarchy? Can a feature govern others, or do the features have to cooperate so that the demands for a unit border are fulfilled? Is there a minimal set of criteria that are always necessary? In order to operationalise the steps in segmentation, I modified Chafe’s delimitation of units, suggested a hierarchy of different criteria and formulated segmentation rules (Holsanova 2001). Apart from prosodic criteria being the overriding ones, I also introduced semantic criteria and discourse markers as supportive cues into the discourse segmentation. The interplay between the prosodic and acoustic features was decisive for the segmentation (rules 1–3). Rules 4–6 can be conceived of as complements to the prosodic rules. They are connected to semantic criteria or to lexical markers as cues to the natural segmentation of speech.

1. When dividing the transcript into units, the primary (focal) accent together with a coherent intonation pattern – falling or rising pitch contour – are the dominant and most decisive features for segment borders.

2. Apart from the two above-mentioned features, segment borders can be recognised by pauses and hesitations that appear on the boundary between two verbal foci.

3. Additional strength is added if these features are supported by changes in loudness, voice quality, tempo and acceleration (especially for the segmentation of larger chunks of speech).

4. Verbal focus is a unit that is seldom broken up internally by pauses. But when rules 1–3 are fulfilled and there is a short pause in the middle of a unit where the speaker looks for the correct word or has trouble with pronunciation or wording, it is still considered to be one unit.

   around in the air there are . a number of insects flying’
   but it’s getting close to the genre of com/ comic strips,
5. When there are epistemic expressions or modifying additions that clearly semantically belong to the statement expressed in the unit but are tied to it without a pause, they are considered one unit.
and a third one that . is standing and singing I would say’
6. Additional strength to criteria 1–3 is added by lexical or discourse markers signalling a change of focus in spoken discourse and demarcating a new unit (and then, anyway, but, so then). Such markers can be used as additional clues to the segmentation of discourse (see Example 2 and Chapter 3, Section 1.1.1 for details).
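To show how rules 1–6 could be operationalised, here is a minimal sketch of a rule-based segmenter. It is an illustration only: the book’s segmentation was done by hand on prosodically annotated transcripts, and the Token representation and marker list below are my own simplifying assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Token:
    text: str
    focal_accent: bool = False      # rule 1: carries the primary (focal) accent
    contour_ends: bool = False      # rule 1: falling/rising contour completed here
    pause_after: float = 0.0        # rule 2: length of following pause in seconds
    hesitation_after: bool = False  # rule 2: eh/ehm after the token

# Rule 6: discourse markers signalling a change of focus (hypothetical list)
DISCOURSE_MARKERS = ("and then", "anyway", "but", "so then")

def segment_into_foci(tokens: List[Token]) -> List[List[str]]:
    """Group tokens into verbal foci, applying rules 1-3 and 6.

    Rule 1 (accent plus completed contour) is decisive; pauses, hesitations
    (rule 2) and discourse markers (rule 6) only add supportive evidence, so
    a mid-unit word-search pause (rule 4) does not split a focus.
    """
    foci: List[List[str]] = []
    current: List[str] = []
    for i, tok in enumerate(tokens):
        current.append(tok.text)
        nxt = tokens[i + 1].text if i + 1 < len(tokens) else ""
        prosodic = tok.focal_accent and tok.contour_ends
        supportive = tok.pause_after > 0 or tok.hesitation_after
        marker = nxt.startswith(DISCOURSE_MARKERS)
        if prosodic and (supportive or marker):
            foci.append(current)
            current = []
    if current:
        foci.append(current)
    return foci
```

A superfocus border could be detected analogously, additionally requiring a sentence-final (falling) contour together with a long pause (longer than 2 seconds, cf. Table 1) or a marked change in voice quality, tempo or loudness (rule 3).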
4. Perception of discourse boundaries
Although language production and language comprehension are traditionally treated as separate research areas, there are similarities between the process of language production and that of comprehension (Linell 1982). One idea is that units in language production and comprehension coincide and that we therefore can verify the segmentation criteria using the listeners’ intuition about how spoken discourse is structured. In my book on picture viewing and picture description (Holsanova 2001: 16f.), I presented the results of a perception study where I explored the extent to which listeners can perceive and identify discourse boundaries in spoken language descriptions. The aim of the study was to see whether listeners agree on the perceived boundaries at various levels of discourse (focus and superfocus) and what cues might contribute to the detection of those boundaries (prosodic, semantic, discourse markers, etc.). Ten Swedish and ten non-Swedish informants of different professional and linguistic backgrounds and with different mother tongues were listening to an authentic, spontaneous picture description in Swedish. They were asked to mark perceived discourse boundaries in the verbatim transcription of this spoken discourse by means of a slash. Each space in the transcribed text represented a possible border in the segmentation of discourse. For each of these 346 spaces, I noted whether the 20 informants drew a slash or not. The same transcript had been analysed by two discourse analysts and segmented into verbal foci and verbal superfoci (agreement 93%). According to the experts’ coding, the extract contained 33 focus boundaries, 18 superfocus boundaries and 295 non-boundaries. (The data were collected at Lund University and at UCSB. I would like to thank John W. Du Bois and Wallace Chafe for their help with this study.)
All authentic border markings of the Swedish and non-Swedish group have been compared and related to the different types of borders coded by the experts.
4.1 Focus and superfocus

First, the aim of the study was to find out whether discourse borders are more easily recognised at the higher discourse level of verbal superfoci than at the lower discourse level of verbal foci. The hypothesis was that the boundaries of a superfocus – a unit comparable to long utterances, higher in the discourse hierarchy – would be perceived as heavier and more final and would thus be more easily identified than those of a focus. The agreement on focus boundaries, superfocus boundaries and non-boundaries by Swedish and non-Swedish informants is illustrated in Figure 3. As we can see, the hypothesis was confirmed: Swedish and non-Swedish informants agree more on superfocus boundaries than on focus boundaries. The agreement on focus boundaries reached a level of 45 percent in the Swedish and 59 percent in the non-Swedish group, whereas the agreement on superfocus boundaries reached 78 percent in the Swedish and 74 percent in the non-Swedish group. Apart from agreement on different kinds of boundaries, we can also consider listeners’ agreement on what was not perceived as a boundary. Both the Swedish and the non-Swedish group perceived the majority of spaces as non-boundaries, with a high level of agreement – almost 92 percent in the Swedish and 90 percent in the non-Swedish group.

Figure 3. Agreement on focus boundaries, superfocus boundaries and non-boundaries by Swedish (N=10) and non-Swedish (N=10) informants.
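For concreteness, these agreement figures can be read as the proportion of informant judgements that match the expert coding at each type of space. The sketch below is my reconstruction, not the original analysis (which was done by hand), and the exact measure is an assumption:

```python
from typing import Dict, List

def agreement_by_type(expert: List[str],
                      markings: List[List[bool]]) -> Dict[str, float]:
    """expert: one label per inter-word space
       ('focus', 'superfocus' or 'non-boundary'); 346 spaces in the study.
       markings: one list per informant; True = a slash drawn at that space."""
    totals: Dict[str, int] = {}
    hits: Dict[str, int] = {}
    for space, label in enumerate(expert):
        for informant in markings:
            marked = informant[space]
            # A judgement agrees if a boundary space is slashed,
            # or a non-boundary space is left unslashed.
            agrees = marked if label != "non-boundary" else not marked
            totals[label] = totals.get(label, 0) + 1
            hits[label] = hits.get(label, 0) + int(agrees)
    return {label: 100.0 * hits[label] / totals[label] for label in totals}
```

Run over the 20 informants and the 346 expert-coded spaces (33 focus, 18 superfocus, 295 non-boundary), a measure of this kind would yield percentages of the magnitude reported above (45–78 percent for boundaries, around 90 percent for non-boundaries).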
The second aim of the study was to find out how the discourse structure is perceived by Swedish and non-Swedish listeners and what criteria the boundary markings rely on (prosodic, semantic, syntactic, hesitational, discourse markers etc.). The hypothesis was that the non-native speakers would be governed by the prosodic and acoustic criteria only and that the comparison between native and non-native speakers would reveal what criteria are crucial for the perception of discourse segments. This part of the study required a qualitative analysis. Many of the marked boundaries in the two groups coincided and the agreement on the placement of boundaries between informants was good, whether they understood the language or not. Roughly the same spaces in the text were chosen as boundaries in the Swedish and in the non-Swedish groups. Still, when investigating a selection of boundaries on which there was disagreement between both groups, and in which the non-Swedish group was more willing to place a boundary, it could be confirmed that the segmentation of the non-Swedish group relied more heavily on prosody. At certain places, however, prosody alone was misleading. Instead, semantic criteria and discourse markers played an important role for the recognition of discourse borders. For instance, this happened in cases of an embedded re-categorisation that was not prosodically signalled, in cases of topic changes without a pause or change in voice quality and in cases of an attached commentary that was not separated by prosodic means. It turned out that at exactly those places, there was a high agreement on border segmentation within the Swedish group. When prosodic and semantic boundaries coincided, there was agreement on segment boundaries in both the Swedish and the non-Swedish group. When prosodic and semantic boundaries did not coincide, the Swedish group had the advantage of being guided by both prosodic and semantic criteria, whereas the non-Swedish group had to rely on prosody alone. Concerning segmentation criteria, my conclusions are as follows: A superfocus – a unit comparable to long utterances – is easier to identify and is both cognitively and communicatively relevant. This result is also in line with Herb Clark’s tenet of language use: “In language use, utterances are more basic than sentences.” (Clark 1992: xiii). Prosody plays a very important role for the recognition of discourse boundaries. This is also in accordance with our everyday experience. When listening to a foreign language, we can perceive the spurts
of ideas although we do not understand the content. However, the recognition of discourse boundaries can be facilitated if the prosodic and acoustic criteria are further confirmed by semantic criteria and lexical markers. Horne et al. (1999) draw a similar conclusion. They suggest that a constellation of both prosodic and lexical means gives us an important hint about the way segmentation should be produced and understood in spontaneous speech. Second, it has been shown that both speakers and listeners keep track of more global structure in discourse and that listeners can tell when a new discourse unit begins and ends (van Donzel 1997, 1999). According to our investigations, discourse borders are more easily recognised at the higher discourse level of verbal superfoci (comparable to long utterances) than at the lower discourse level of verbal foci (comparable to phrases or clauses). The hierarchical structure of discourse and its rhythmic form is also the focus of Schilperoord & Sanders’ investigation (1997). They conclude that the position of long pauses coincides with the semantic structure of discourse. Finally, Strömqvist (2000: 6) comments on the issue of production and recognition of units:

Both speakers and writers aim at information packages such as clauses, sentences, and LUs [larger units of discourse, my explanation]. Syntactic and semantic properties help language users identify such packages, both in speech and writing. In speech, intonation plays a crucial role for signalling when such a package is completed – in our Swedish example, notably, the phrase final rise. In the spoken narration, longer pauses participated in marking the boundary between larger units of discourse and no silent pauses were found clause-internally. (Strömqvist 2000: 6)
4.2 Other discourse units

In the previous part, the reader got to know that the interplay of prosodic, acoustic and semantic criteria facilitates the recognition of boundaries in descriptive discourse. It should be noted, however, that there are argumentative parts of descriptive discourse where the rhetorical build-up plays a role for the discourse segmentation. These units are often based on the relation event–cause or state–cause. Another type of unit is the question–answer pair (and other types of ‘adjacency pairs’). In dialogic discourse, listeners’ feedback and laughter play an important role in confirming segmentation borders. Also, non-verbal actions, such as body language, gestures etc., can be important for recognising the start/end of a unit or action (cf. Holsanova 2001, Chapter 5).
Korolija (1998) has investigated global thematic units in spontaneous conversation that she calls topical episodes and that bear similarity to basic level topics or superfoci. She characterises topical episodes as follows: An episode is the manifest product of how several people manage to focus attention as well as sense-making efforts on a few aspects of the world during limited sequences in time. Within one single episode, some features are selected and focused upon, while others are disregarded; an episode is a segment of the flow of impressions, events, and experiences […]. Episodes in jointly attended conversations constitute actors’ attempts at achieving intersubjectivity in understanding some particular things they are talking about. (Korolija 1998: 119)
In a study on radio debates, Korolija (1998) reached the conclusion that listeners most easily recognise leitmotifs or macro topics.
5. Conclusion
This chapter has been devoted to the segmentation of spoken descriptive discourse. I started out by explaining that speech is produced in spurt-like portions and contains many dysfluencies, such as pauses, hesitations, interruptions, restarts etc. These phenomena provide clues about the cognitive processes and suggest units for planning, production, perception and comprehension. In order to analyse spoken language production, we first have to transcribe it. The transcription in our data includes verbal features, prosodic/acoustic features, as well as non-verbal features. As a next step, we had a closer look at the segmentation of spoken discourse. The way speakers segment discourse and create global and local transitions seems to reflect a certain ‘cognitive rhythm’ in discourse production. Many theories agree that we can focus only on small pieces of information at a time, but there are different opinions concerning what size and form these information units should have. For the purpose of my further correlation studies (Chapters 5–8), I delimited the most important units of spoken descriptive discourse that reflect human attention: verbal focus and verbal superfocus. Further, rules for the segmentation of spoken data were formulated. Apart from prosodic/acoustic criteria being the overriding ones, I also introduced semantic criteria and discourse markers as supportive cues into the discourse segmentation. Finally, the segmentation criteria were tested against listeners’ intuition about discourse segmentation in a perception study. There were two main conclusions: The recognition
of discourse boundaries is facilitated if the interplay of prosodic and acoustic criteria is further confirmed by semantic criteria and lexical markers. It is easier for listeners to identify boundaries on the higher levels of discourse (superfoci). In the following chapter, we will investigate the structure and content of spoken picture descriptions in further detail.
chapter 2
Structure and content of spoken picture descriptions
When we describe something, a scene or an event, to a person who did not witness it, we often experience it anew. The scene is once again staged before our eyes and our attention moves from one aspect to another as our description progresses. We retell, step-by-step, what we see and focus on the different aspects of the scene, and we use language to describe our impressions. We could say that our consciousness about this scene is moving through our thoughts. We select some elements that we have experienced, arrange them in a certain order, maybe abandoning some elements while emphasising others, and cluster and summarise a series of experiences into more complex categories. While doing so, we give a form to our thoughts and offer the listener a certain presentational format.
This chapter will focus on the structure and content of spoken picture descriptions. First, I will describe and illustrate various types of units (foci and superfoci) extracted from an off-line picture description (i.e. descriptions from memory) in an interactive setting. Second, I will compare the distribution of the different foci in this setting to data from other studies.
1. Taxonomy of foci
Let me first introduce a transcript extract from a study where informants described the picture off-line in an interactive setting (Holsanova 2001: 7ff.). Twelve informants had each examined a complex picture for a period of time (1 minute). Then the picture was removed and they described the contents of the picture to a listener. Let us look at an extract from this study (Example 1). One informant in the second half of his description mentions the tree with birds in it.
Example 1
0637  in the middle there was a tree’
0638  the lower . part of the tree was crooked’
0639  there were . birds’
0640  it looked like a happy picture,
0641  everybody was pleased and happy,
0642  … eh … and (2 s) the fields were brown,
0643  eh it was this kind of topsoil,
0644  it wasn’t sand,
0645  and then we had . it was in this tree,
0646  there were . it was like a . metaphor of . the average Swede there,
0647  That you have a happy home’
0648  the birds are flying’
0649  the birds were by the way characteristic,
0650  They were stereotyped pictures of birds,
0651  big bills’
0652  Xxx human traits’
As we can see in this example, the informant describes the birds and at the same time evaluates the atmosphere of the picture (lines 0640–0641), describes the quality of the soil (lines 0642–0644), formulates a similarity by using a metaphor (lines 0646–0648) and finally characterises some of the elements depicted (lines 0649–0652). He is not only expressing ideas about states, events and referents in the picture but also evaluating these in a number of ways. This can be generalised for all picture descriptions in this study: The informants do not only talk about WHAT they saw in the picture – the referents, states and events – but also HOW they appeared to them. In other words, informants also express their attitudes towards the content; they evaluate and comment on different aspects of the scene. (These elements of spoken descriptions reflect the fact that our consciousness is to a large extent made up of experiences of perceptions and actions, accompanied by emotions, opinions and attitudes etc. In addition to perceptions, actions and evaluations, there are sometimes even introspections or meta-awareness; Chafe 1994: 31.) The way we create meaning from our experience and describe it to others can be understood in connection to our general communicative ability. According to Lemke (1998: 6), “when we make meaning we always simultaneously construct a ‘presentation’ of some state of affairs, orient to this presentation and orient to others, and in doing so create an organised structure of related elements.” The presentational, orientational and organisational functions correspond to the ideational, interpersonal and textual linguistic metafunctions described in Halliday’s functional grammar (Halliday 1978). In the following, we will be looking at how the speakers structured their spoken presentation: what kind of contents they focus on and in what order. The off-line picture descriptions contained different types of foci. The classification of foci was inspired by Chafe’s classification (Chafe 1980) and modified in Holsanova (2001). In the grouping of the different types of foci, I adopt the three functions mentioned above: the presentational, the orientational and the organisational function (Lemke 1998). Each type can be used either as a single verbal focus or in a more complex unit of thought, as a verbal superfocus, a thematically and prosodically coherent chunk of discourse that consists of several verbal foci (cf. Chapter 1, Section 2.1). In the following, each type of focus will be described and illustrated.
1.1 Presentational function
Let me start with the presentational function, which includes substantive, summarising and localising foci. In these units of speech, speakers introduce us to a scene, to scene elements (animate and inanimate, concrete and abstract) and their relations, to processes and to circumstances.
1.1.1 substantive foci

In a substantive unit, speakers describe referents, states and events in the picture (there is a bird; Pettson is digging; there is a stone in the foreground). Occasionally, speakers make validity judgements from their observations and use epistemic expressions in substantive units. We find a dominance of so-called mental state predicates (I think), usually appearing in a parenthetic position, as well as a number of modal adverbs (such as maybe) and modal auxiliary verbs (such as it could have been).

1.1.2 substantive foci with categorisation difficulties

Substantive units reflect the ongoing categorisation process and, sometimes, categorisation difficulties. When determining a certain kind of object, activity or relation, the effort to name and categorise is associated with cognitive load. This effort is manifested by dysfluencies such as hesitation and pauses, with multiple verbal descriptions of the same thing (lexical or syntactic alternation, repetitive sequences), sometimes even with hyperarticulation (this tree seems
21
22
Discourse, Vision and Cognition
to be a sort of/I don’t remember what is it called, there are those things hanging there, sort of seed/ seed things in the tree). Often, this focus type constitutes a whole superfocus.
1.1.3 substantive list of items
Another type of focus, which typically clusters into larger units, consists of a series of substantive foci and appears as a superfocus: the substantive list of items. This type of superfocus is very frequent in picture descriptions. Different items that belong to the cluster are enumerated and described in detail. An illustration of a list can be found in Example 2.

Example 2
    0415  and then in the tree there are bird activities'   sum
    0416  one is tidying up the nesting box'                subst list
    0417  one is standing around singing'                   subst list
    0418  and one is brooding on the eggs,                  subst list
A substantive list of items is often explicitly announced in a summarising focus either by a general introduction (in the tree there are bird activities) or by what I have called (Holsanova 1999a, 2001) a signalised quantitative enumeration (I see three birds). After that, the speaker goes on with the elaboration of the particular items (one is tidying up the nesting box, one is standing around singing’ and one is brooding on the eggs).
1.1.4 summarising foci
In a summarising unit (sum), speakers connect picture elements with similar characteristics at a higher level of abstraction and introduce them as a global gestalt (in the air there are a number of insects flying around). Summarising foci appear as single verbal foci or superfoci. They have a structuring function. To the listener, they signal that a more detailed description may follow after the introduction, often in the form of a list. The semantic relations between sum and list can vary. In the example below, the informant starts with a superordinate concept or a hyperonym (clothes), which is followed by a detailed account of subordinate concepts or hyponyms (hat, vest).

Example 3
--> 0945  one can say something about his clothes'   sum
    0946  he has a funny hat'                         subst list
    0947  he has … if I am not wrong a vest'          subst list
Another common pattern is that the speaker uses an abstract name for activities (cultivating process) that s/he then elaborates within the following cluster of verbal foci (list), as in Example 4.

Example 4
    0508  and then one can see . different parts /eh well, steps in the . cultivating process   sum
    0509  he's digging,        subst list
    0510  he's raking'         subst list
    0511  then he's sowing',   subst list
As we can see from Example 5, some speakers introduce a semantically rich concept (sowing) that in turn makes the listener associate to its parts (a sowing person, dig in the soil, sow seeds, water, rake).

Example 5
    1102  it's a . picture about . sowing'                             sum
    1103  and . it's a fellow'                                         subst
    1104  I don't remember his name'                                   introspect
    1105  Pettson' I think,                                            subst
    1106  who is first digging . in the field'                         subst list
    1107  and then he's sowing'                                        subst list
    1108  quite big white seeds'                                       subst
    1109  and then the cat starts watering'                            subst list
    1110  these particular seeds'                                      subst
    1111  and it ends up with the fellow . eh . raking, . this field,  subst list
The relation between the summarising focus (1102) and the subsequent substantive foci is characterised as a part-whole relation or as conceptual dependence (Langacker 1987: 306). Sowing here subsumes a schema for an action: a person (1103–1105) and a cat (1109) are sowing and performing several activities (1106–1111) in a specific temporal order (first, and then, it ends).
1.1.5 localising foci
The last type of focus serving the presentational function is the localising focus (loc). In localising units, speakers focus on the spatial relations in the picture (in front of the tree, at a distance, to the right). Sometimes, spatial expressions do not constitute a whole independent unit of talk but are embedded in substantive units (there are cows in the background).
1.2 Orientational function

The next types of units, the evaluative and expert foci, belong to the orientational function. Here, speakers evaluate the scene by expressing their attitude to the content, to the listener or even to the listener's attitude. The emphasis on different elements and the way they are evaluated contributes indirectly to the speaker's positioning. For example, the speaker can position herself/himself as an expert.
1.2.1 evaluative foci
It is natural that we interpret everything we see in the light of our previous experiences, knowledge and associations. We simply cannot avoid interpreting. Therefore, in evaluative units, speakers include attitudinal meaning. They make judgements about the picture as a whole, about properties of the picture elements and about relations between picture elements. For this reason, evaluative foci are often connected to substantive and summarising foci. The evaluative foci in the study concerned:

Table 1. Evaluative foci (with transcription examples)

i. Speakers' reaction to the picture: on the whole there are very many things in the picture
ii. Speakers' categorisation of the different picture elements: eh . the birds are . of indefinite sort, they are fantasy birds, I would think
iii. Speakers' attitude to the presentation of the different picture elements: Pettson's facial expression is fascinating, although he seems to be quite anonymous' he reflects a sort of emotional expression I think
iv. Speakers' comments on the visual properties of a picture element: the daffodils' are enlarged, almost tree-like, they are the wrong size in relation to a real daffodil
v. Speakers' comments on the interaction of the characters in the picture: Findus the cat helps Pettson as much as he can, they sow the seeds very carefully' place them one after the other it appears'
1.2.2 expert foci
In expert foci, speakers express judgements that are based on their experience and knowledge. The informants mention not only concrete picture elements, but also, at a higher level, the genre and painting technique of the picture. By doing this, they show their expertise. In interpersonal or social terms, the speakers indirectly position themselves towards the listener.

Example 6
--> 1015  … the actual picture is painted in water colour,
--> 1016  so it has/ eh . the picture is not/
--> 1017  it is naturalistic in the motif (theme)'
--> 1018  but it's getting close to the genre of com/ comic strips,
Expert foci include explanations and motivations where the informants show their expertise in order to rationalise and motivate their statements or interpretations. Such explanatory units are often delivered as embedded side sequences (superfoci) and marked by a change in tempo or acceleration. The comment is started at a faster speed, and the tempo then decreases with the transition back to the exposition. The explicative comment can also be marked by a change in voice quality, as in the following example, where it is formulated with a quiet voice (cf. Chapter 3, Section 1.1.2). In Example 7, the expert knowledge of the participant is based on a story from a children's book and the experience of reading aloud to children.

Example 7
    0112  And Findus,
    0113  he has sown his meatball,
--> 0114  (lower voice) you can see it on a little picture,
--> 0115  this kind of stick with a label'
--> 0116  he has sown his meatball under it'
    0117  (higher voice) and then there is Findus there all the time,

1.3 Organisational function
The last group of units, including the interactive, introspective and metatextual units, constitutes the third, organisational function.
1.3.1 interactive foci
Informants use signals to show that they are taking the floor and starting to describe the scene or that they are about to finish their description (ok; well, these were the things I noticed; this was the whole thing). These units fulfil a mainly regulatory function.
1.3.2 introspective foci & metacomments
Describers do reflect explicitly on speech planning, progression and recapitulation. They seem to be monitoring their description process, step by step. Their comments build a broad and rather heterogeneous group of introspective and metatextual comments.

Table 2. Introspective foci & metacomments (with transcription examples)

i. Informants think aloud: let's see, have I missed something?; I'll think for a moment: Pettson is the old guy and Findus is the cat
ii. comment on the memory process: ehm, it's starting to ebb away now; I don't remember his name'; eh actually, I don't know what the fellow in the middle was doing
iii. make procedural comments: let's see if I continue to look at the background; I have described the left hand side and the middle
iv. express the picture content on a textual metalevel, refer to what they had already said or not said about the picture: I don't know if I said how it was composed
v. ask rhetorical questions and reveal steps of planning in their spoken presentation through an inner monologue: what more is there to say about it?; what more happened?
vi. think aloud in a dialogic form (one person systematically posed questions to himself, and then immediately answered them by repeating the question as a full sentence; the speaker's metatextual foci pointed either forward, toward his next focus, or backward): eh what more can I say, I can say something about his clothes'; eh … well, what more can I say, one can possibly say something about the number of such Pettson characters
vii. recapitulate what they have said or summarise the presented portions of the picture or the picture as a whole: eh I mentioned the insects'/ I mentioned small animals'; so that's what's happening here; that's what I can say; that was my reflection
To sum up, the off-line descriptions from memory in an interactive setting contained different types of foci (see Table 3). Speakers used substantive, summarising and localising foci, in a presentational function. These types of foci contributed to information about the picture content. Speakers also included evaluative and expert foci, representing the orientational (or attitudinal) function. In these types of foci informants express their opinions about various aspects of the picture contents and comment on the picture genre from the viewpoint of their own special knowledge. Finally, speakers used interactive, introspective and metatextual foci, serving the organisational function. They were thinking aloud and referring to their own spoken presentation on a textual metalevel. The informants used each focus type either as a single verbal focus or as a superfocus consisting either of multiple foci of the same type (subst list) or of a combination of different focus types (e.g. sum and list, subst and loc, subst and eval etc.). Apart from reporting WHAT they saw, the informants also focused on HOW the picture appeared to them. In other words, the informants were involved in both categorising and interpreting activities.

Table 3. Various types of foci represented in off-line picture description from memory, with a listener present.

Presentational function
- substantive foci (subst; subst cat. diff.; subst list): speakers describe the referents, states and events in the picture
- summarising foci (sum): speakers connect picture elements with similar characteristics at a higher level of abstraction and introduce them as a global gestalt
- localising foci (loc): speakers state the spatial relations in the picture

Orientational function
- evaluative foci (eval): speaker judgements of the picture as a whole, of properties of the picture elements, and of relations between picture elements
- expert foci (expert): speaker judgements of picture genre and painting technique

Organisational function
- interactive foci (interact): speakers signal the start and the end of the description
- introspective foci and metatextual comments (introspect & meta): thinking aloud; memory processes; procedural comments, monitoring; expressing the picture content on a textual metalevel; speech planning; speech recapitulation
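Since the taxonomy is essentially a coding scheme, it lends itself to a simple computational representation. The following sketch is my own illustration, not part of the original study: it encodes the focus-type tags of Example 5 above (foci 1102–1111) and computes focus-type proportions of the kind that will be compared across settings in the next section.

from collections import Counter

# The three discourse functions and their focus types (cf. Table 3).
FUNCTION_OF = {
    "subst": "presentational", "subst cat.diff.": "presentational",
    "subst list": "presentational", "sum": "presentational",
    "loc": "presentational",
    "eval": "orientational", "expert": "orientational",
    "interact": "organisational", "introspect": "organisational",
}

# Focus-type tags of Example 5 (foci 1102-1111), as coded in the transcript.
coded_description = [
    "sum", "subst", "introspect", "subst", "subst list",
    "subst list", "subst", "subst list", "subst", "subst list",
]

counts = Counter(coded_description)
total = len(coded_description)
for focus_type, n in counts.most_common():
    print(f"{focus_type:12s} {n:2d} foci  {n / total:5.0%}  ({FUNCTION_OF[focus_type]})")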
2. How general are these foci in other settings?
In the above section, a thorough content analysis of the picture descriptions was presented. The question is, however, whether the developed taxonomy would hold up in other settings. For instance, would the repertoire of focus types vary depending on whether the informants described the picture in the presence or in the absence of listeners? Would there be any influence on the taxonomy and distribution of foci in a narrative task, where the informant has to tell a story about what happens in the picture? Would other types of foci constitute the discourse when informants describe the picture either off-line (from memory) or on-line (simultaneously)? Last but not least, what does the structure and content of descriptions look like in a spontaneous conversation? Do the same types of foci occur, and is their distribution the same?

In the following section, we will discuss the above-mentioned types of foci and their distribution in different settings by comparing the presented taxonomy with data from different studies on picture description. My starting point for this discussion will be the data from the off-line descriptions in an interactive setting reported above and from three additional studies that I conducted later on. One of the studies concerns simultaneous verbal description: the informants describe the complex picture while, at the same time, their visual behaviour is registered by an eye tracker (cf. Chapter 5, Section 2). The next study concerns off-line description after a spatial task: the informants describe the same picture from memory while looking at a white board in front of them; previously, they conducted a spatially oriented task on mental imagery (cf. Chapter 4, Section 3.1). Yet another study is concerned with simultaneous (on-line) description with a narrative priming: the informants describe the same picture while looking at it, and the task is to tell a story about what is happening in the picture (cf. Chapter 4, Section 3.2). Apart from these four data sets, I will also briefly comment on the structure and content of spontaneous descriptions in conversation (Holsanova 2001: 148ff.).

Before comparing the structure and content of picture descriptions from different settings on the basis of focus types, let me start with some assumptions.

i. The repertoire of focus types is basically the same and can thus be generalised across different settings.
ii. However, the proportion of focus types in descriptions from different settings will vary.
iii. In the off-line description, informants have inspected the picture beforehand, have gained a certain distance from its contents and describe it from memory. The off-line setting will thus promote a summarising and interpreting kind of description and contain a large proportion of sum, introspect and meta. The metacomments will mainly concern recall. Finally, the informants will find opportunity to judge and evaluate the picture, and the proportion of eval will be high.
iv. In the on-line description, informants have the original picture in front of them and describe it simultaneously. Particularities and details will become important, which in turn will be reflected in a high proportion of subst, list foci and superfoci. The real-time constraint on spoken production is often associated with cognitive effort. This will lead to a rather high proportion of subst foci with categorisation difficulties (subst cat.diff.). The picture serves as an aid for memory and the describers will not need to express uncertainty about the appearance, position or activity of the referents. Thus, the description formulated in this setting will not contain many modifications. If there are going to be any metacomments (meta), they will not concern recall but rather introspection and impressions. The fact that the informants have the picture in front of them will promote a more spatially oriented description with a high proportion of loc.
v. In the interactive setting, informants will position themselves as experts (expert) and use more interactive foci (interact). The presence of a listener will stimulate a thorough description of the picture with many subst foci and bring up more ideas and aspects, which will lead to a longer description (length). The informants will probably express their uncertainty in front of their audience, which can affect the number of epistemic and modifying expressions. Finally, the presence of a listener will contribute to a higher proportion of foci with orientational and organisational functions.
vi. In the eye tracking condition, informants will probably feel more observed and might interpret the task as a memory test. In order to show that they remember a lot, they will group several picture elements together and use a high proportion of sum. The fact that they are looking at a white board in front of them will promote a more static, spatially focused description with many local relations (loc). The description will be shorter than in the on-line condition, where the picture is available, and than in the interaction condition, where the listener is present (length).
vii. In a narrative setting, the informants will be guided by the instruction to tell a story about what is happening in the picture. They will focus on referents and events, which in turn will give rise to a high proportion of subst, and they will evaluate the described picture elements and relations in a number of eval. The narrative priming will promote a dynamic description with a low priority for spatial relations or compositional layout (loc).

It is now time to return to the question of whether the developed taxonomy of foci holds up for picture descriptions in other settings. We will systematically compare the data per focus type in order to find out how the different situations have influenced the structure and content of picture descriptions.
2.1 Presentational function

2.1.1 substantive foci (subst)
As we can see from Figures 1 and 2, substantive units containing descriptions of referents, states and events are an important part of the picture descriptions. Substantive foci (first diagram in Figure 1) are distributed fairly evenly across the four different settings with one exception: substantive foci are dominant in simultaneous (on-line) descriptions where the informants tell a story about what is happening in the picture (57 percent). The proportion is significantly higher than in the on-line eye tracking setting (Anova, one-way, F = 3,913; p = 0,015, Tukey HSD test). In accordance with our assumptions, particularities and details become important, the focus is on referents and events, and both factors give rise to a high proportion of subst foci.
Figure 1. The proportion of substantive foci (subst) and substantive foci with categorisation difficulties (subst cat.diff.) in four studies on picture description (two bar charts; one bar per setting: off-line ET, on-line narrative, off-line interact, on-line ET).
2.1.2 substantive foci with categorisation difficulties
As can be seen from the second diagram in Figure 1, we find significant differences between the four settings concerning the substantive foci with categorisation difficulties (Anova, one-way, F = 9,88; p < 0,001). As assumed, categorisation difficulties appear in the simultaneous on-line description, the explanation being the real-time constraint on spoken production that is often associated with cognitive effort. A quite large proportion of categorisation difficulties occurs, however, even in the off-line setting with eye tracking. The explanation one would first think of is that the informants were trying to remember picture elements that are no longer present, using the empty white board as a support for memory, and – while looking at the respective localisations of the objects – made efforts to categorise the objects. On the linguistic surface, we can find traces of obvious categorisation difficulties: some of the substantive foci contain reformulations and sequences with lexical or syntactic alternation.

2.1.3 subst list of items
The highest proportion of list foci (18 percent) is found in the on-line description with eye tracking (cf. first diagram in Figure 2). This was expected, since in this simultaneous description particularities and details become important, which in turn is reflected in a high proportion of subst list foci and superfoci. The difference is, however, not significant.

2.1.4 summarising foci (sum)
As we can see from the second diagram in Figure 2, summarising foci are an integral part of picture descriptions in all settings. However, as assumed, the summarising foci dominate in the eye tracking setting where informants describe the picture off-line (from memory). This result is significant (Anova, one-way, F = 6,428; p = 0,012, Tukey HSD test).
Figure 2. The proportion of subst list foci and summarising foci (sum) in four studies on picture description (two bar charts; one bar per setting: off-line ET, on-line narrative, off-line interact, on-line ET).
It can partly be explained by the temporal delay between picture viewing and picture description: informants who inspected the picture beforehand gained distance from the particularities in the picture contents and were involved in summarising and interpreting activities. The other explanation is the fact that the grouping of several picture elements can be an effective way of covering all the important contents of the picture.
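For readers unfamiliar with the statistics used throughout this section, the following sketch shows how a one-way Anova with a Tukey HSD post-hoc test can be computed. It is purely illustrative: the per-informant proportions below are hypothetical placeholders, since the study's raw data are not reproduced here.

from scipy import stats

# Hypothetical proportions of summarising foci per informant and setting.
offline_et = [0.31, 0.27, 0.30, 0.28]
online_narrative = [0.10, 0.13, 0.12, 0.14]
offline_interact = [0.11, 0.12, 0.13, 0.12]
online_et = [0.12, 0.14, 0.13, 0.13]

# One-way Anova across the four settings.
f, p = stats.f_oneway(offline_et, online_narrative, offline_interact, online_et)
print(f"F = {f:.2f}, p = {p:.4f}")

# Tukey HSD post-hoc test: which pairs of settings differ significantly?
print(stats.tukey_hsd(offline_et, online_narrative, offline_interact, online_et))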
2.1.5 localising foci (loc)
The proportion of localising foci (Figure 3) dominated significantly both in the off-line and the on-line eye tracking setting (11 percent each). This is, however, not very surprising. First, the fact that the informants were looking at a white board in front of them while they described what they had seen in the picture might have promoted a more spatially focused description. Also, they were indirectly spatially primed, since in a previous task they were listening to a spatial description of a scene. Second, when the picture was displayed in front of them, the informants delivered a more spatially oriented kind of description. In accordance with our expectations, the narrative instruction lowered the proportion of localising foci (2 percent), which is significantly lower than in the off-line eye tracking setting (Anova, one-way, F = 5,186; p = 0,0041, Tukey HSD test). When concentrating on an event or a story, the informants probably did not focus on the spatial or compositional layout of the picture elements, since it was not relevant for the task. A description rich in localisations has a tendency to be more static, whereas a description rich in narrative aspects tends to be more dynamic, as will be shown in Chapter 4 in connection with different description styles.
Figure 3. The proportion of localising foci (loc) in four studies on picture description (bar chart; one bar per setting: off-line ET, on-line narrative, off-line interact, on-line ET).
Table 4. Distribution of foci serving the presentational function in four studies on picture description.

Types of foci       off-line + ET   on-line + narrative   off-line + interact   on-line + ET   Sign.*
subst                   39%             57%                   46%                   31%          *
subst cat. diff.         5%              0%                    0%                    6%          *
subst list              10%              9%                   12%                   18%          NS
sum                     29%             12%                   12%                   13%          *
loc                     11%              2%                    6%                   11%          *
Average # foci          26              20                    51                    57           *
So far, we have systematically compared the distribution of foci serving the presentational function. Table 4 summarises the results. With the help of this table, we can follow the characteristics of the data in four different settings (columns) and discuss the cases where there were significant differences between the settings concerning a specific type of focus (rows). In all four settings, all types of foci are represented – with the exception of the substantive foci with categorisation difficulties, which are missing in two of the settings. We will now continue with foci serving the orientational and organisational functions.
2.2 Orientational function

2.2.1 evaluative foci (eval)
The assumption that the narrative priming in the on-line description and the presence of a listener in the off-line description would positively influence the proportion of evaluative foci has been confirmed. The proportion of evaluative foci in these two conditions reached 10 percent (cf. first diagram in Figure 4) and is much higher than in the other two conditions, the difference being close to significant (p = 0,05).

2.2.2 expert foci (expert)
Expert foci were found only in the off-line conditions: in the off-line description with eye tracking (1 percent) and in the interactive setting (3 percent). The explanations at hand are the following: the off-line condition promotes an interpreting kind of description, and the presence of a listener encourages positioning as an expert.
Figure 4. The proportion of evaluative foci (eval) and expert foci (expert) in four studies on picture description (two bar charts; one bar per setting: off-line ET, on-line narrative, off-line interact, on-line ET).
2.3 Organisational function

2.3.1 interactive foci (interact)
Contrary to our expectations of finding the interactive foci mainly in the interactive setting, this type of focus is represented in all settings (cf. second diagram in Figure 5). A relatively large proportion of interactive foci is found in the narrative setting and in the simultaneous description with eye tracking (4 percent each).

2.3.2 introspective foci & metacomments (introspect/meta)
The highest proportion of introspect/meta foci (8,5 percent) can be found in the interactive setting, when the informants were describing the picture from memory (cf. first diagram in Figure 5). As we assumed, the presence of a listener contributed to a higher proportion of foci with orientational and organisational functions. Also, the character of the introspective verbal foci had changed slightly: the metacomments were no longer concerned with recall. Instead, the speakers' introspection and impressions dominated.
Figure 5. The proportion of introspective foci (introspect/meta) and interactive foci (interact) in four studies on picture description (two bar charts; one bar per setting: off-line ET, on-line narrative, off-line interact, on-line ET).
Table 5. Distribution of foci serving orientational and organisational functions in four studies on picture description.

Comm. function    Types of foci      off-line + ET   on-line + narrative   off-line + interact   on-line + ET   Sign.*
orientational     eval                   4%              10%                   10%                   4%           NS
orientational     expert                 1%               0%                    3%                    0%           NS
organisational    interact               3%               4%                    2%                    4%           NS
organisational    introspect/meta        5%               7%                    8,5%                  3%           NS
                  Average # foci         26              20                    51                    57            *
A rather high proportion of introspective and metatextual foci can also be found in the narrative setting (7 percent). Table 5 summarises the distribution of foci serving orientational and organisational functions in four studies on picture description. Apart from expert foci, all other types of foci have been represented in the data.
2.4 Length of description

Finally, a comment on the length of the descriptions (measured by the average number of foci). Although all of the picture descriptions were self-paced, we can see differences in the duration of the descriptions (cf. Figure 6). Informants in the off-line interactive setting and in the on-line eye tracking setting formulated rather long descriptions. In the off-line interactive setting, the average number of foci was 51, which is significantly higher than in the off-line eye tracking setting and the on-line narrative setting (Anova, one-way, F = 11,38; p < 0,001, Tukey HSD test). One possible explanation is that the presence of a listener stimulates the informant to describe the picture more thoroughly and to bring up more ideas and aspects. To our surprise, the average number of foci in the on-line eye tracking setting was 57, which is even higher than in the interactive off-line description and significantly higher than in the off-line eye tracking setting and the on-line narrative setting (p < 0,01). One possible explanation is that informants who describe a picture that is in front of them simultaneously explore and interpret the picture contents and get new associations 'along the way' in the course of the description process.
Figure 6. The average length of picture description in four studies (bar chart; one bar per setting: off-line ET, on-line narrative, off-line interact, on-line ET).
We have now systematically compared all types of foci, serving presentational, orientational and organisational functions, and their distribution in four different studies on picture description. Taken together, the substantive, summarising and localising foci, serving the presentational function, dominated. Even so, from a communicative perspective we can state that the informants devoted quite a large proportion of their descriptions to orientational and organisational aspects, in particular to evaluative and metatextual comments. On the other hand, we always find an interplay between these three functions (Lemke 1998). One possible interpretation could be that the presentational meaning – in the form of sum-list combinations – structures the descriptions and makes them more coherent, which in turn contributes to the organisational function. Apart from categorisation difficulties and expert foci, all other types of foci appeared in the four different settings. However, the proportion of focus types in descriptions from the different settings varied.

Another piece of evidence supporting this result comes from a spontaneous description of an environment in a conversation (Holsanova 2001: 148–184, see also Chapter 3). This description was embedded in a casual and amusing type of conversation among friends talking about houses and fulfilled multiple purposes. It included a detailed list of referents and was aimed at amusing and convincing the listeners, as well as at demonstrating the speaker's talent. Even this spontaneous descriptive discourse contained the basic repertoire of focus types – substantive, summarising, evaluative and interactive foci etc. – but the description gained a more argumentative character: the evaluative foci contained a motivation, the substantive foci were systematically structured in an argumentative manner, and so on. Similarly, the same repertoire of focus types could even be found in a description where one person described a scene to another person who was to redraw it (Johansson et al. 2005). In this two-way communication, however, the foci were formulated in a cooperative manner. The result of the communication
depended on the explanations of the instructor, on the questions and answers of the drawer, and on their agreement about the size of the picture elements, their proportions, spatial relations and so on.
3. Conclusion
The aim of this chapter has been twofold: (a) to introduce a taxonomy of foci that constituted the structure and content of the off-line picture descriptions and (b) to compare types of foci and their distribution with picture descriptions produced in a number of different settings.

Concerning the developed taxonomy, seven different types of foci have been identified, serving three main discourse functions. The informants were involved both in categorising and in interpreting activities. Apart from describing the picture content in terms of states, events and referents in an ideational or presentational function, informants naturally integrated their attitudes, feelings and evaluations, serving the interpersonal function. In addition, they also related the described elements and used various resources to create coherence. This can be associated with the organisational function of discourse. Substantive, summarising and localising foci were typically used for presentation of picture contents. Attitudinal meaning was expressed in evaluative and expert foci. A group of interpersonal, introspective and metatextual foci served the regulatory or organising function. Informants thought aloud, making comments on memory processes, on steps of planning, on procedural aspects of their spoken presentation etc. In sum, besides reporting about WHAT they saw on the picture, the informants also focused on HOW the picture appeared to them and why.

Three additional sets of data were collected in order to find out how general these focus types are: whether a narrative instruction or a spatial priming in the off-line and on-line conditions (with or without eye tracking) would influence the structure and the content of picture description and the distribution of various types of foci. Concerning types of foci and their distribution in four different settings, we can conclude that the repertoire of foci was basically the same across the different settings, with some modifications. The expert foci, where speakers express their judgements on the basis of their experience and knowledge, were only found in the off-line condition. This suggests that the off-line condition gives more freedom and opportunities for the speakers to show their expertise and, indirectly, to position themselves. Further, the subgroup of substantive foci,
substantive foci with categorisation difficulties, could be identified only in two of the four conditions: in the off-line and on-line description with eye tracking. The reason why they are part of the descriptive discourse is explained by the memory task and the real-time constraint respectively, both associated with cognitive effort. Apart from the four studies analysed, the same inventory of foci was found in descriptions of an environment in a cooperative drawing task and in a spontaneous conversation. We can thus conclude that the typology of foci can be generalised to include different situations.

The distribution of foci and their proportion, however, varied across the different settings. For instance, the summarising foci dominated in a setting where the picture was described from memory, whereas substantive foci dominated in simultaneous descriptions with the narrative priming. In the settings with the spatial priming, the proportion of localising foci was significantly higher. The interactive setting promoted a high proportion of evaluative and expert foci – informants were expressing their attitudes to the picture content, making judgements about the picture as a whole, about properties of the picture elements and about relations between picture elements. Introspective and metatextual foci were also more frequent in a situation where the listener was present.

This chapter focused on the structure and content of spoken picture descriptions. The next chapter will demonstrate how describers connect units of speech when they build their description step by step.
chapter 3
Description coherence & connection between foci
When we describe something verbally, then depending on our goals, we organise the information so that our partner can understand it. We verbally express and 'package' our thoughts in small 'digestible' portions and present them, step by step, in a certain rhythm that gives the partner regular opportunities for feedback, questions and clarifications. The information flow is thereby divided into units of speech that focus on one certain aspect at a time and are connected, in a coherent way, into a whole. It happens that we make a digression – a step aside from the main track of our thoughts – and spend some time on comments. But we usually succeed in coming back to the main track (or our partners will remind us). We can also use various means to signal – to ourselves and to our partners – where we are at the moment.
The previous chapter dealt with the structure and content of picture descriptions, in particular with the taxonomy of foci. In this chapter, we will take a closer look at how speakers create coherence when connecting the subsequent steps in their descriptions. First, we will discuss transitions between foci and the verbal and non-verbal methods that informants use when creating coherent descriptions in an interactive setting. Second, description coherence and connections between foci will be illustrated by a spontaneous description of a visual environment accompanied by drawing.
1. Picture description: Transitions between foci
The results in the first part of this chapter are based on the analysis of the off-line picture descriptions with a listener (cf. Chapter 2, Section 1 and Holsanova 2001: 38ff.). As was mentioned in connection with discourse segmentation in Chapter 1, hesitation and short pauses often appear at the beginning of verbal foci (eh . in this picture there are also a number of . minor . figures'/ eh fantasy figures'/ eh plus a recurrent cat').
figures’/ eh plus a recurrent cat’). Verbal superfoci are usually introduced by longer pauses combined with hesitation and lexical markers (eh . and then … eh . we have some little character in the left corner). Both the speakers and the listeners have to reorient at the borders of foci. Discontinuities such as hesitation reflect the fact that speakers make smaller or larger mental leaps when relating the foci (which also the listeners must do in their process of meaning-making). The following tendencies could be extracted from the on-line picture description with eye tracking and picture descriptions in an interactive setting: •
•
•
•
Pauses and hesitations are significantly longer when speakers move to another superfocus compared to when they move internally between foci within the same superfocus. Pauses between superfoci are on average 2,53 seconds long, whereas internal pauses between foci measure only 0,53 seconds on average (p = 0009, one sided, Ttest). This is consistent with the results of psycholinguistic research on pauses and hesitations in speech (Goldman-Eisler 1968) showing that cognitive complexity increases at the choice points of ideational boundaries and is associated with a decrease in speech fluency. This phenomenon also applies to the process of writing (cf. Strömqvist 1996; Strömqvist et al. 2004) where the longest pauses appear at the borders of large units of discourse. Pauses and hesitations get longer when speakers change position in the picture (i.e. from the left to the middle) compared to when they describe one and the same picture area. This result is consistent with the studies of subjects’ use of mental imagery demonstrating that the time to scan a visual image increases linearly with the length of the scanned path (Kosslyn 1978; Finke 1989). Kosslyn looked at links between the distance in the pictorial representation and the distance reflected in mental scanning. Mental imagery and visualisations associated with picture descriptions will be discussed further in Chapter 8, Section 2. Pauses get longer when the speaker moves from a presentation of picture elements on a concrete level (three birds) to a digression at a higher level of abstraction (let me mention the composition) and then back again. The longest pauses in my data appear towards the end of the description, at the transition to personal interaction, when the description is about to be concluded.
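The sketch announced in the first point above might look as follows. It is my illustration, not the author's analysis script: only the reported means (2,53 s vs. 0,53 s) come from the study, while the individual pause values are hypothetical placeholders.

from scipy import stats

# Hypothetical pause durations in seconds (illustrative values only).
between_superfoci = [2.1, 3.0, 2.6, 2.4, 2.8]  # pauses at superfocus borders
within_superfoci = [0.4, 0.6, 0.5, 0.7, 0.5]   # pauses between foci inside one superfocus

# One-sided two-sample t-test: are pauses at superfocus borders longer
# than pauses within a superfocus?
result = stats.ttest_ind(between_superfoci, within_superfoci, alternative="greater")
print(f"t = {result.statistic:.2f}, one-sided p = {result.pvalue:.4f}")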
Let us have a closer look at some explanations. I propose that there are three aspects contributing to the mental distance between foci. The first one is the thematic distance from the surrounding linguistic context in which the movement
takes place. As mentioned earlier, the picture contents (the referents, states and events) are mainly described in substantive and summarising foci and superfoci. Thus, this aspect is connected to the presentational function of discourse. There are two possibilities: the transition between foci can either take place within one superfocus or between two superfoci. For example, speakers can describe within one and the same cluster of verbal foci all the articles of clothing worn by Pettson or all the activities of the birds in the tree. Alternatively, while introducing new referents, the speakers can jump from one area to another in their description (from the tree in the middle to the cows in the background). Since the superfoci are characterised by a common thematic aspect (cf. Chapter 1, Section 2.1), it is easier to track referential and semantic relations within a superfocus than to jump to another topic. Thus, when the thematic aspect changes and the speaker starts on a new superfocus, a larger mental distance must be bridged (cf. Kosslyn 1978).

The second aspect is the thematic distance from the surrounding pictorial context. As was concluded earlier, describers do not only focus on a pure description of states, events and referents but also add evaluative comments on the basis of their associations, knowledge and expertise. Because of that, we have to reckon with a varying degree of proximity to the picture elements or, in other words, with a varying extent of freedom of interpretation. This aspect is associated with the change from a presentational to an orientational (or attitudinal) function. A change of focus from a concrete picture element to the general characteristics of the painting technique or to an expertise on children's book illustrations means not only a change of topic, but also a shift between the concrete picture 'world' and the world outside of the picture.

Example 1
    0737  and you can also see on a new twig small animals'
    0738  that then run off probably with a daffodil bulb'
    0739  on a . wheelbarrow'
    0740  in the bottom left corner'
--> 0741  … eh I don't know if I said how it was composed,
    0742  that is it had a background which consisted of two halves in the foreground,
    0743  (quickly) one on the left and one on the right,
--> 0744  (slowly) ehh . and . eh oh yes, Findus . he's watering'

In Example 1, we can follow the informant moving from a description at a concrete level, when speaking about small animals (0737–0740), towards a
digression at a higher level of abstraction, when speaking about the composition (0741–0743), and back to the description of a concrete picture element (0744). At such transitions, typically long pauses with multiple hesitations appear.

The third aspect is related to the performative distance from the descriptive discourse. This aspect is connected to changes between the presentational and orientational function on the one hand and the organisational function on the other. The largest discontinuities and hesitations are to be found at transitions where the describer steps out from the content of the picture description and focuses on the immediate interaction.

The above-mentioned findings about transitions in picture descriptions can be confirmed by the results of film retellings (Chafe 1980). Chafe (1980: 43f.) claims that the more aspects of the story content are changed in the course of the description, the larger reorientations are required; for instance, when the scene is changed, when a new time sequence is added, when new people appear in the story or when the activities of the people change. The largest hesitations are found in connection to the most radical change – when the speaker jumps between the story content and the interaction.

To summarise, in spite of the different types of transitions, we can state that transitions between larger units of speech, such as superfoci, are associated with a larger amount of effort and reorientation. In addition, the 'mental distance' between the superfoci can increase due to the content and functions of foci. I suggest that there are three aspects that increase the mental distance between foci and, consequently, the effort and reorientation at the borders of foci: the thematic distance from the linguistic context, the thematic distance from the pictorial context and the performative distance from the descriptive discourse.

The next question is: In what way do speakers create and signal the transitions between different foci? How do speakers make the connection between foci clear to the listener? What lexical and other means do speakers use?
1.1 Means of bridging the foci
Apart from pauses and hesitations at the borders of speech units, we can find various other bridging cues, such as discourse markers, changes of loudness and voice quality, as well as stressed localisations.
1.1.1 Discourse markers
Discourse markers (or discourse particles, cf. Aijmer 1988, 2002; Gülich 1970; Quasthoff 1979; Stenström 1989) include conjunctives, adverbs, interjections, particles, final tags and lexicalised phrases (in English: okay, so, but, now, well; in Swedish: å så, å då, men, sen så, ok). From a linguistic perspective, discourse markers can be characterised as "sequentially dependent elements that delimitate spoken language units" (Schiffrin 1987: 31). "Discourse markers are used to signal the relation between the utterance and the immediate context" (Redeker 1990: 372). From a cognitive perspective, discourse markers can be conceived of as more or less conscious attentional signals that mark the structuring of the speech and open smaller or larger steps in its development.

	Discourse operators are conjunctions, adverbials, comment clauses, or interjections used with the primary function of bringing to the listeners' attention a particular kind of linkage of the upcoming discourse unit with the immediate discourse context. (Redeker 2000: 16)
In the data from the off-line picture descriptions with a listener, we can find many discourse markers that fulfil various functions: they reflect the planning process of the speaker, help the speaker guide the listener's attention and signal relations between the different portions of discourse. Grosz & Sidner (1986) use the summary term 'cue phrases', i.e. explicit markers and lexical phrases that together with intonation give a hint to the listener that the discourse structure has changed.

Example 2
    0513  it's bright green colours . and bright sky'
--> 0514  then we have the cat'
    0515  that helps with the watering'
    0516  and sits waiting for the . seeds to sprout'
    0517  and he also chases two spiders'
--> 0518  and then we have some little character in/down in the ... left corner,
    0519  that take away an onion,
    0521  I don't know what (hh) sort of character'
--> 0522  then we have some funny birds . in the tree'
In Example 2, the speaker successively introduces three new characters – the cat, the spiders and the birds – and creates the transitions between the superfoci with the help of the markers then (0514), and then (0518), then (0522). The transitions can be schematised as follows (Figure 1):
Figure 1. A paratactic transition closes the referential frame of the earlier segment and opens a new segment.
A transition that closes the current referential frame and opens a new segment is called a paratactic transition. According to Redeker (2006), a paratactic sequential relation is a transition between segments that follow each other on the same level, i.e. a preplanned list of topics or actions. The attentional markers introducing such a transition have consequences for the linear segments with respect to their referential availability. The semantic function of the paratactic transitions is to close the current segment and its discourse referents and thereby activate a new focus space.

Example 3
    0112  and Findus,
    0113  he has sown his meatball,
--> 0114  (lower voice) you can see it on a little picture,
--> 0115  this kind of peg/stick with a label'
--> 0116  he has sown his meatball under it'
    0117  (higher voice) and then there is Findus there all the time,
In Example 3, the speaker is making a digression from the main track of description (substantive foci 0112–0113) to a comment, pronounced in a lower voice (0114–0116). She then returns to the main track by changing the volume of her voice and by using the marker and then (0117). This kind of transition can be schematised as follows (Figure 2).

Figure 2. A hypotactic transition hides the referential frame from the previous segment, while embedding another segment ('push'), and later returning to the previous segment ('pop').

A transition that hides the referential frame from the previous segment in order to embed another segment, and later returns to the previous segment, is called a hypotactic transition. According to Redeker (2006), hypotactic sequential relations are those leading into or out of a commentary, correction, paraphrase, digression, or interruption segment. Again, the attentional markers introducing such a transition have consequences for the embedded segments with respect to their referential availability. The hypotactic transitions signal an embedded segment, which keeps the earlier referents available at an earlier level. Such an embedded segment usually starts with a so-called push marker or next-segment marker (Redeker 2006) (such as that is, I mean, I guess, by the way) and finishes with a so-called pop marker or end-of-segment marker (but anyway, so). The reference in the parenthetic segment is thereby deactivated, and the referent associated with the interrupted segment is reactivated.

As we have seen, speakers more or less consciously choose linguistic markers to refocus on something they have already said, or to point ahead or backward in the discourse.
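The stack-like behaviour of these transitions can be made explicit in a small model. The sketch below is my own illustration, inspired by the focus-space stacks of Grosz & Sidner (1986), rather than an implementation from the literature: a paratactic marker replaces the top focus space, while push and pop markers embed and close a digression without discarding the interrupted segment.

# Focus spaces modelled as a stack: the topmost space is active,
# the spaces below are merely semiactive (cf. Chafe 1994).
focus_stack = [{"segment": "Findus has sown his meatball",
                "referents": {"Findus", "meatball"}}]

def paratactic(segment, referents):
    # 'then', 'and then': close the current focus space and open a new one;
    # the old discourse referents become unavailable.
    focus_stack.pop()
    focus_stack.append({"segment": segment, "referents": set(referents)})

def push(segment, referents):
    # Push marker ('that is', 'by the way'): embed a digression while the
    # interrupted segment stays on the stack.
    focus_stack.append({"segment": segment, "referents": set(referents)})

def pop():
    # Pop marker ('anyway', 'so'): close the digression and reactivate the
    # referents of the interrupted segment.
    focus_stack.pop()

push("comment on the little picture", {"picture", "stick", "label"})
pop()                                      # back to the main track (cf. 0117)
paratactic("funny birds in the tree", {"birds", "tree"})
print(focus_stack[-1]["referents"])        # -> {'birds', 'tree'} now active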
Let us now continue with other means that are used to bridge the gaps between foci.

1.1.2 Loudness and voice quality
In Example 3, we could see an example of marking a transition also by the use of loudness. Prosodic means, such as loudness, voice quality and acceleration (cf. Chapter 1), are used at the borders between units of speech in various functions. Suddenly talking with a louder voice can indicate a new focus. In contrast, talking with a lower voice can mean that a digression is being inserted. Changing voice quality can signal quotation of another person (cf. Holsanova 1998b). A change in speed also makes us interpret the utterance differently. Speaking faster often indicates a side sequence, while speaking more slowly and with emphasis signals that this is important information and that we are on the main line. Example 4 shows an inserted side comment where the speed changes (lines 0743–0745):

Example 4
    0741  … eh I don't know if I said how it was composed,
    0742  that is it had a background which consisted of two halves in the foreground,
--> 0743  (quickly) one on the left and one on the right,
--> 0744  (slowly) ehh . and . eh oh yes, Findus . he's watering'
(In a recent study, Bangerter & Clark (2003) study the coordination of joint activities in dialogue and distinguish between vertical and horizontal transitions. Participants signal these transitions with the help of various project markers, such as uh-huh, m-hm, yeah, okay, or alright.)
1.1.3 Localising expressions
Apart from discourse markers and prosodic means, there is another way of introducing a new focus in the picture description data: verbal attention can be refocused by using stressed localising expressions. In this way, the speaker can structure the presentation of the picture and build up contrasts (in the foreground, in the background, in the middle, on the left).

The picture descriptions in the analysis above were elicited. In the second section of this chapter, we will look closer at what happens 'outside the laboratory'. People talk about their experiences all the time; they describe visual scenes and environments on different occasions. Is there a different way of creating description coherence in spontaneous discourse? How do speakers focus and refocus in the hierarchical structure of the description? How do they tie together the sequential steps in the description process and mark the relations between them to make the comprehension process easier for the listeners?

2. Illustrations of description coherence: Spontaneous description & drawing
To answer these questions we will now look at an extract from a spontaneous discourse where the speaker uses both language, gestures and drawing to describe a visual environment (a bathroom) to his friends (Holsanova 2001: 148ff.). Example 5 contains a longer transcribed extract from the conversation. The transcript contains the number of the unit and the speaker (leftmost column), verbal and prosodic/acoustic features (second column), non-verbal action (third column) and different types of foci (fourth column).

Example 5
624(A)  … a standard bathroom in Canada'                                   sum
625(A)  .. can look like this,
626(A)  …2.08 we have a door'                                              subst list 1a
627(A)  … without a threshold,
628(A)  they have no thresholds there,                                     expert com.
629(A)  they don't exist,
630(A)  …1.14 then there is the bathtub.. here,                            subst list 1b
631(A)  … the bathtub is built-in,                                         subst detail
632(A)  …1.10 so it's … in the wall'                          <SHOWS>
633(A)  .. and down to the floor,
634(B)  .. mhm'
635(A)  …1.22 then we have .. eh …1.06 where the faucet comes out'         subst list 1c
636(A)  …1.97 and then we have .. all the other .. furnishings here,       subst list 1d
637(A)  …1.53 we have= usually … a=eh                                      subst list 2
638(A)  …4.00 vanity they call it there,                                   naming
639(A)  …1.04 where . the washbasin is built in'                           design
640(B)  …mhm'
641(A)  but it's a …1.09 a .. piece,…a part of the washbasin,  <SHOWS>     subst detail, design
642(B)  …0.62 mhm'
643(A)  … and it sits .. on the counter itself,                            loc
644(A)  … which means that if the water overflows'                         arg cause (if-then)
645(A)  then you have to try to .. force it over the rim, back again,
646(B)  …1.00 which is very natural' .. for us'                            eval (attitude)
647(A)  …1.09 so … the washbasin is actually resting upon it,              loc
648(A)  … so if you have a … cross-section here,                           sum
649(B)  … m[hm]
650(A)  [here we] have … the very counter'                                 subst list
651(C)  mhm'
652(A)  …then we have the washbasin,                                       subst
653(A)  it goes up here like this'
654(A)  and down,
655(A)  …1.67 and up,
656(A)  …1.24 of course when the water already has come over here'  <SHOWS>  arg 1 conseq. (if-then)
657(A)  then it won't go back again,
658(B)  … m[hm]
659(A)  [y]ou have to force it over,
660(A)  .. and these here'                                     <SHOWS>     arg 2
661(A)  …1.04 eh=if the caulking under here is not entirely new and perfect'  conseq. (if-then)
662(A)  … then of course the water leaks in under .. under here,  <SHOWS>
663(A)  and… when … the water leaks in un/ .. under'                       arg 3
664(A)  … and there is wood under here'                        <SHOWS>     conseq. (if-then)
665(A)  after a while then certain small funny things are formed'
666(B)  [mhm']
667(A)  ... which are not/
668(B)  hn … ha ha [ha]
669(A)  … which means that you have to .. replace the whole … piece of furniture,
670(B)  [ha ha ha]
671(A)  [anyway we have the batht]ub here,                     <SHOWS>     subst
672(B)  …mhm
During this description, speaker A illustrates what he is saying by spontaneous drawing. Figures 3 and 4 are two of the pictures that were produced by the speaker during this conversation. Drawings reveal a lot about thinking. According to Tversky (1999: 2), "drawings (…) are a kind of external representation, a cognitive tool developed to facilitate information processing. Drawings differ from images in that they reflect conceptualisations, not perceptions, of reality." Further, she proposes that "the choice of and representation of elements and the order in which they are drawn reflect the way that domain is schematized and conceptualized".

Figure 3.

Figure 4.

At the beginning of the extract, the speaker introduces the bathroom (focus 624). The speaker does not get back to it until focus 675 (!), and this time he calls it the room. The bathtub is introduced in 630 and refocused in 671. How do the interlocutors handle referential availability? How do they know 'where they are' and what the speaker is referring to? How can speakers and listeners retain nominal and pronominal reference for such a long time? This is a continuation of my general question posed in Chapter 1: whether our attentional resources enable us to focus only on one activated idea at a time, or whether we can, at the same time, keep track of the larger units of discourse. Linde (1979) and Grosz & Sidner (1986) suggest that the interlocutors can do so by simultaneously focusing on a higher and a lower level of abstraction.

The use of the same item for accomplishing these two types of reference suggests that, in discourse, attention is actually focused on at least two levels simultaneously – the particular node of the discourse under construction and, also, the discourse as a whole. Thus, if the focus of attention indicates where we are, we are actually at two places at once. In fact, it is likely that the number is considerably greater than two, particularly in more complicated discourse types. (Linde 1979: 351)
In this entertaining conversation, the speaker is trying to achieve a certain visualisation effect for the listeners – both with the help of his drawing and his spoken description. Therefore, the partners have to handle multiple representations: the speaker is talking about an object visually present in his drawing here-and-now (the vanity), later on describing the same object under a virtual flooding scenario, leaving it to the listeners' imagination what it will look like some time after the flooding, when certain parts of it are attacked by germs. He is referring to the same concrete object on a higher level of abstraction as a piece of furniture, as an instance of poor construction, or even focusing on the discourse as a whole. In this discourse, we find a dissociation between external visual representations and the discourse-mediated mental representations that are built up in the course of the conversation (cf. also Chapter 7, Sections 1 and 3). The ability to simultaneously focus on a higher and a lower level of abstraction and to handle multiple representations can be explained within Chafe's theoretical account (1994), where intonation units are embedded in a periphery consisting of semiactive information that forms a context for each separate focus. The simultaneity is thus possible because the present, focused segment is active, while the previously mentioned, interrupted segments are merely semiactive. Apart from that, an additional explanation can be found in the partners' situation awareness and visual access. Since the interlocutors are present in the situation, they share the same visual environment. They can observe each
other's gaze behaviour, facial expressions, pointing gestures etc. We must not forget that the speaker's gaze (at the drawing, at his own gestures, at the artefacts in the environment) can affect the attention of the listeners. Furthermore, the drawing that is created step by step in the course of the verbal description is visible to all interlocutors. The speaker, as well as the others in the conversation, can interact with the drawing; they can point to it and refer to things and relations non-verbally. Thus, the drawing represents a useful tool for answering 'where are we now?' and functions as a 'storage of referents', an external memory aid for the interlocutors. This also means that the interlocutors do not have to keep all the referents in their minds, nor always mention them explicitly. Apart from that, the drawing is used as a support for visualisation and as an expressive way of underlining what is being said. Finally, it serves as a representation of the whole construction problem discussed in the conversation. I suggest that the common/joint focus of attention is created partly via language, partly by the non-verbal actions in the visually shared environment. For these reasons, discourse coherence becomes a situated and distributed activity (cf. Gernsbacher & Givón 1995; Gedenryd 1998: 201f.). This may be the reason why we use drawings to help our listeners understand complex ideas.
2.1 Transitions between foci

There is considerable complexity in this spoken description. Figure 5 visualises the complexity of the different levels of focusing suggested by Linde (1979) and Grosz & Sidner (1986). This graphic representation illustrates the hierarchical step-by-step construction of the first sequence in the analysed spoken description (624–672). It also indicates the speaker's lexical means of transition (upper part of the figure) and the listener feedback, mainly found at segment borders (bottom part of the figure).

Figure 5. The hierarchy of the bathroom description (624–672) including the discourse markers of the speaker and the feedback of the listener.

As we can see, the description includes digressions, insertions, comments and evaluations on several levels, as well as returns to the main line. For example, the topic 'bathtub', which was introduced in focus 630, is resumed much later in the conversation, in focus 671. We can also see how thoroughly some bathroom objects are described: the vanity, an example of furniture in the bathroom, is mentioned in foci 637–640, and more detailed information about its design and place/localisation is added in foci 641–642 and 643. Later on, an argumentative comment with a logical consequence is inserted in foci 644–645 (if the water overflows, you have to force it over the rim back again). The comment is eventually followed by an evaluation (which is very natural for us) before the speaker refocuses on the localisation of the washbasin (647). In other words, what we follow are the hypotactic (embedded) and the paratactic (linear) relations between the verbal foci.
2.2 Semantic, rhetorical and sequential aspects of discourse

This spontaneous description fulfils multiple goals in the conversation: to describe an environment, to convince the partners, to amuse them, to demonstrate the speaker's talent etc. Consequently, the description contains many evaluative and interactive foci, and its character shifts towards the rhetorical and argumentative aspects. It is therefore suitable to apply Redeker's Parallel Component Model (Redeker 2000, 2006), which states that every unit of speech is evaluated according to one of three aspects representing three parallel structures in discourse:

i. semantic structure (i.e. how its contents contribute to the discourse)
ii. rhetorical structure (i.e. how it contributes to the purpose of the discourse segment)
iii. sequential structure (i.e. which sequential position it has in the discourse).

One of these structures is usually salient. Redeker (1996: 16) writes:

In descriptive or expository discourse, rhetorical and sequential relations will often go unnoticed, because semantic relations are a priori more directly relevant to the purposes of these kinds of discourse. Still, there remains some sense in which, for instance, the explication of a state of affairs is evidence for the writer's claim to authority, and the elaboration of some descriptive detail can support or justify the writer's more global characterization.
This can be applied to our spontaneous description. Although the semantic aspect seems to be salient, not all verbal foci are part of the semantic hierarchy on the content level. Apart from descriptions of bathroom objects (door, bathtub, faucet, other furnishings), their properties and their placement, argumentative, interactive and evaluative foci are also woven into the description. The schematic figure illustrates this complexity in the thicket of spoken descriptions. Despite this brushwood structure and the many small jumps between different levels, the speaker manages to guide the listeners' attention, lead them through the presentation and signal focus changes so that the description appears fluent and coherent.
2.3 Means of bridging the foci

Which means are used to signal focus changes and to create coherence in the data from spontaneous descriptions? The answer is that verbal, prosodic/acoustic and non-verbal means are all used. The most common verbal means are discourse markers (anyway, but anyway, so, and so, to start with), which are used for reconnecting between superfoci (for an overview, see Holsanova 1997a: 24f.). A lack of explicit lexical markers can be compensated for by a clear (contrastive) intonation. Talking in a louder voice usually signals a new focus, while talking in a lower voice indicates an embedded side comment. A stressed rhythmic focus combined with synchronous drawing (up–down–up) makes the listeners attend to the drawn objects. Prosody and acoustic quality thus give us important clues about the interpretation of the embedding and coherence of an utterance. Last but not least, interlocutors often use deictic expressions (like this, and this here) in combination with non-verbal actions (drawing, pointing, gesturing). Deictic means help move the listeners' attention to new objects in the listeners' immediate perceptual space (cf. 'attention movers' in Holmqvist & Holsanova 1997). Demonstrative pronouns are sometimes used to draw attention to the corresponding (subsequent) gesture. Attention is thus directed in two ways: the speakers are structuring their speech and, at the same time, guiding the listeners' attention.

Note: Deictic gestures in spatial descriptions can co-occur with referential expressions and anchor the referent in space on an abstract level. Alternatively, they work on a concrete level and make a clear reference to the actual object by mapping out the spatial relationship explicitly (cf. Gullberg 1999).
3. Conclusion
In this chapter, we have focused on how speakers connect the subsequent steps in their description and thereby create discourse coherence. We discussed different degrees of mental distance between the steps of description and the various means that informants use to connect these descriptive units of talk. To summarise: discontinuities, such as pauses and hesitations, appear within and between foci, but are largest at the transitions between foci and superfoci. Both speakers and listeners must reorient themselves at transitions. I proposed that the mental leaps between foci are influenced by three factors: (a) the thematic distance to the surrounding linguistic context, (b) the thematic distance to the surrounding pictorial context, and (c) the performative distance to the descriptive discourse. In other words, the discontinuity depends on which (internal or external) 'worlds' the speaker moves between. If the speaker stays within one thematic superfocus, the effort at transitions will be small. Conversely, the lower the degree of thematic closeness, the larger the mental leaps. If the speaker moves between a description of a concrete picture element and a comment on the painting technique, i.e. between a presentational and an orientational function, larger reorientations will be required. The biggest hesitations and the longest pauses are found at the transitions where the speaker steps out of the description and turns to the metatextual and interactional aspects of the communicative situation (fulfilling the organisational discourse function).

I also concluded that transitions between the sequential steps in the picture descriptions are marked by lexical and prosodic means. Focus transitions are often initiated by pauses, hesitations and discourse markers. Discourse markers are more or less conscious signals that reveal the structuring of the speech and introduce smaller and larger steps in the description. Speakers may refocus on something they have already said, or point forward or backward in the discourse. Such markers have consequences for the linear (paratactic) and the embedded (hypotactic) segments with respect to referential availability. Moreover, speakers use prosodic means such as changes in loudness or voice quality to create transitions. Finally, in the process of focusing and refocusing, speakers use stressed localising expressions to bridge the foci.

In a spontaneous description formulated 'outside the laboratory', the hierarchical structure contains many small jumps between different levels. Verbal, prosodic/acoustic and non-verbal means are all used to signal focus changes and to create coherence. Although the semantic aspect seems to be salient, apart from descriptions of objects, their properties and their placement, many
argumentative and evaluative foci are woven into the description. Despite the complexity, the speaker manages to guide the listeners' attention and to lead them through the presentation. The interlocutors seem to retain nominal and pronominal references for quite a long time. There are several explanations for this phenomenon: (a) simultaneously focusing on both a higher and a lower level of abstraction, (b) switching between active and semiactive information and (c) using situation awareness and mutual visual access (e.g. by observing each other's pointing, gazing and drawing). In the spontaneous description, the linguistic and cognitive structuring of the description is situationally anchored. The interlocutors make use of both verbal and non-verbal means of focusing and refocusing. Thus, a joint focus of attention is created partly through language, partly through non-verbal actions in the visually shared environment. The drawing then takes on the function of a referent storage, an external memory aid for the interlocutors. The interlocutors look for and provide feedback, and a continuous mutual adaptation takes place throughout the description (Strömqvist 1998).

In the current chapter, we took a closer look at how speakers create coherence when connecting the subsequent steps in their descriptions. The next chapter will be devoted to different description styles.
chapter 4
Variations in picture description
Usually, when we describe something, the choice of form and content in our presentation mainly depends on the purpose of the description. If you describe your dream apartment to someone, you can imagine the description as a guided tour, an imaginary visit, with information about the decoration, the colours and the atmosphere. If you instead describe your flat in order for someone to draw a sketch of it, then information about room size, form and spatial arrangement will probably be in focus. If you want to rent the flat, you are also interested in the rent, how it is furnished etc. The presentation is also affected by individual choices: different individuals can focus on different aspects and omit others in their descriptions.
In the previous two chapters we focused on structure and content of descriptive discourse and the coherence that speakers create between different parts of the description. The aim of this chapter is to discuss various styles in picture descriptions. First, we will characterise and exemplify the dynamic and the static description styles. Second, we will discuss the two identified styles in the framework of the literature on verbal versus visual thinkers. Third, the results will be compared with data from two other studies on picture descriptions in order to find out whether the style of description can be affected by spatial and narrative priming.
1. Two different styles
At this point, I would like to remind the reader of our discussion regarding the distribution of different types of foci in Chapter 2. Already there, it was assumed that a description rich in localisations would tend to be more static, whereas a description rich in narrative aspects would tend to be more dynamic. In the results of the analysis of the picture descriptions from
memory in an interactive setting, I found considerable differences between the picture descriptions in this respect. Twelve informants (six women and six men) with different backgrounds studied the picture for a limited time. The picture was then removed and they described it to a listener from memory. The instruction was as follows: 'I'll show you a picture. You can look at it for a while and afterwards, you will be asked to describe it.' The spoken descriptions were recorded and transcribed. The average length of a picture description was 2 minutes and 50 seconds. After a qualitative analysis of these picture descriptions, I could extract two different styles of description, deploying different perspectives. Attending to spatial relations was dominant in the static description style, while attending to the flow of time was the dominant pattern in the dynamic description style. Consider the following two examples:

Example 1
0601 well it's a picture'
0602 rectangular'
0603 and … it was mainly green'
0604 with a blue sky,
0605 divided into foreground and background,
0606 in the middle there's a tree'
0607 uh and in it three birds are sitting,
0608 the lowest bird on the left sits on her eggs'
0609 and above her'
0610 there is a bigger bird standing up,

Example 2
0702 it's quite early in the spring'
0703 and Pettson and his cat Findus are going to sow,
0704 Pettson starts with digging up his little garden'
0705 then he rakes'
0706 and . then . he sows plants'
0707 uh . puts potatoes out later on'
0708 and when he's ready'
0709 he starts sowing lettuce,
Example 1 is a prototypical example of the static description style. In the static style, the picture is typically decomposed into fields that are then described systematically, using a variety of terms for spatial relations. In the dynamic style (as in Example 2), by contrast, informants primarily focus on temporal
relations and dynamic events in the picture. Although there is no temporal or causal order inherent in the picture, viewers infer it (why they do it will be discussed later). They explicitly mark that they are talking about steps in a process, about successive phases, about a certain order. Let me describe and exemplify the two styles in detail.
1.1 The static description style
The first group of picture descriptions is conditioned by a preference for focusing on spatial relations between picture elements. The informants enumerate the objects in all their detail and divide the picture into different fields or squares, which they then describe systematically one after the other. The picture elements inside a field are also systematically enumerated and ticked off:

Example 3
0859 and then you can see the same man as on the left of the picture'
0860 in three different . positions'
0861 on the very right' . ehh,
0862 there he stands and turns his face to the left,
0863 and digs with the same spade,
0864 and I think that he has his left arm down on the spade,
0865 his right hand further down on the spade,
0866 eh and then he has his left foot on the spade,
0867 to the right of this figure'
0868 the same man is standing'
0869 turning his face… to the right –
The informants using the static style describe the objects in great detail. They give the precise number of picture elements, state their colour, geometric form and position. They deliver a detailed specification of the objects, and when enumerating the picture elements, they mainly use nouns:

Example 4
0616 eh the farmers were dressed in the same way'
0617 they had . dark boots' . pants'
0618 light shirt'
0619 such a armless' . armless' . jacket,
0620 or what should I say'
0621 hat' light hat'
0622 beard'
0623 glasses'
0624 eh a pronounced nose,
Moreover, presentational and existential constructions such as 'there is', 'there are', 'it is' and 'it was' are frequently used. Many auxiliary verbs (we have) also appear in static descriptions. The impersonal form man ser/man kan se (you see/you can see) seems to be characteristic of this style. When the informants use other verbs, these are position verbs (stand, sit, lie) but hardly ever dynamic verbs. Finally, the passive verb form is preferred.

The next group of typical features concerns localisations. The informants use many expressions for spatial relations. These spatial expressions are used according to different orientation systems: (a) left–right orientation (to the left, in the middle, to the right, the left bottom corner of the picture, furthest to the left, to the right of the tree, turns his face to the left), (b) up–down orientation (at the bottom left corner, the upper left in the picture, a little further down), (c) background–foreground orientation (in the foreground, divides the picture into a closer and a more distant part, up front, towards the front, further back in the picture, in the middle of the horizontal line). The orientations correspond to the three dimensions or axes that a viewer sitting upright has (Herskovits 1986; Tversky 1992), namely the vertical, the viewer and the horizontal axes (see also Lang, Carstensen & Simmons 1991: 28–53). Localisations are not only frequent but also have an elaborate form. Example 5 shows a multiply reformulated, precise localisation, spread over several units:

Example 5
--> 0834 in the middle of the picture there's a tree'
    0835 uh . and in it three birds are sitting
--> 0836 in … to the left in the tree,
--> 0837 at the very bottom'
--> 0838 or or the lowest bird on the left in the tree'
    0839 she sits on her eggs'
--> 0840 uuh and above her'
--> 0841 still on the left I think'
    0842 there is a a a bigger bird'
    0843 standing up'
In the course of the description, the informants establish an elaborate set of referential frames and landmarks. From then on, they can refer to these places without having to localise the objects in the picture anew. Finally, the localising
expressions also indirectly mediate the speaker’s way of moving about in the discourse. In the static description style, focusing and refocusing on picture elements is done by localising expressions in combination with stress, loudness and voice quality (cf. Chapter 3, Section 1.1.2). Few or no discourse markers were used in this function.
1.2 The dynamic description style

The other group of picture descriptions shows a primary perceptual guidance by dynamic events in the picture. Although there is no explicit temporal or causal order in the picture, the informants have identified a course of events. Their descriptions follow a schema. They start with an introduction of the main characters (the picture represents Pettson and Findus the cat), their involvement in the various activities (they seem to be involved in the spring sowing; what they are doing is … farming in different stages) and a description of the scene (and then there is a house further away so it's in the countryside). In some cases, the informants add a specification of the season and a description of the overall mood (it's from the spring because there are such giant lilies in one corner). This introductory part often resembles the first phase of a narrative (Holsanova 1986, 1989; Holsanova & Korycanska 1987; Labov & Waletzky 1973; Sacks 1968/1992).

One of the typical features of this dynamic description style is the sequential description of events. Speakers explicitly mark that they are talking about steps in a process, about successive phases, about a certain order:

Example 6
--> 0704 Pettson starts with digging up his little garden'
--> 0705 then he rakes'
--> 0706 and . then . he sows plants'
--> 0707 uh yeah puts potatoes out later on,
--> 0708 and when he's ready'
--> 0709 he starts sowing lettuce'
The different phases are introduced by using temporal verbs (starts), temporal adverbs (then, and then, later on), and temporal subordinate clauses (when he’s ready). The successive phases can also be introduced using temporal prepositions (from the moment when he’s digging … to the moment when he’s raking and sowing).
Example 7 shows a careful description of the various steps in the process: the entire activity is named in an introduction (1102), the different steps are enumerated (1106–1110) and this part of the description is concluded by a summarising line (1111).

Example 7
--> 1102 it's a . picture . of . a sowing'
    1103 and . there is a . fellow'
    1104 I don't remember his name,
    1105 Pettson' I think,
--> 1106 who is digging . in the field first'
--> 1107 and then . he's sowing'
    1108 rather big white seeds'
--> 1109 and then the cat is watering' these seeds'
--> 1110 and it ends up with the fellow . raking . this field,
--> 1111 . so that's what's happening here,
Some speakers are particularly attentive to the temporal order and correct themselves when they describe an already concluded phase:

Example 8
    0711 eh on the one half of it'
    0712 one can see . eh Pettson'
--> 0713 when he's . digging in the field'
--> 0714 and/ or he's done the digging of the field,
    0715 when he's sitting and looking at the soil,
Informants have also noticed differences in time between various parts of the picture. The informant in the next example has analysed and compared both sides of the picture and presented evidence for the difference in time:

Example 9
--> 1151 . in a way it feels that the left hand side is . the spring side'
--> 1152 because . on the right hand side' there is some . raspberry thicket . or something,
--> 1153 that seems to have come much longer'
--> 1154 than the daffodils on the left hand side,
The dynamic quality is achieved not only by the use of temporal verbs (starts with, ends with) and temporal adverbs (first, then, later on), but also by a frequent use of motion verbs in the active voice (digs, sows, waters, rakes, sings, flies, whips, runs away, hunts). The frequent use of so-called pseudo-coordinations (constructions like fågeln sitter å ruvar på ägg; the bird is sitting and brooding on the eggs) also contributes to the rhythm and dynamic character of the description (for details, cf. Holsanova 1999a: 56, 2001: 56f.).

Example 10
--> 0404 then . he continues to dig'
    0405 and then he rakes'
    0406 with some help from Findus the cat'
--> 0407 and . then he lies on his knees and . sows' . seeds'
    0408 and then Findus the cat helps him with some sort of ingenious watering device' LAUGHS
--> 0409 and then … Findus the cat is lying and resting,
    0410 and he jumps around in the grass'
    0411 xxx among ants,
    0412 and then a little character in the lower left corner'
    0413 with an onion and a wheelbarrow'
    0414 a small character'
    0415 and then in the tree there are bird activities'
    0416 one is tidying up the nesting box'
--> 0417 one is standing and singing'
--> 0418 and one is lying and brooding on the eggs,
    0419 then you can see some flowers'
    0420 whether it's . yellow anemone or yellow star-of-Bethlehem or something'
--> 0421 and . cows go grazing in the pasture,
In the dynamic description style, speakers do not give spatial perception the same weight as temporal perception. Thus, we do not find precise localisations, and spatial expressions are rare. The few localising expressions that the informants use are rather vague (in the air, around in the picture, at strategically favourable places, in the corners, in a distance). Another distinguishing feature is that discourse markers are used to focus and refocus the picture elements, and to bridge and connect them (cf. Chapter 3, Section 1.1). Last but not least, the difference in description style was also reflected on the content level, in the number of perceived (and reported) characters. The informants with the prototypical dynamic style perceived one Pettson (and one cat) figure at several moments in time, whereas the static describers often perceived multiple figures in different positions.

Table 1. The most important features of the two description styles.

Global characteristics
  Static: dominance of spatial perception; numerous and precise localisations; a detailed description; the picture is subdivided into squares and described systematically; static description
  Dynamic: dominance of temporal perception; many temporal expressions; sequential description of events according to a schema; focusing on temporal differences in the picture; dynamic description

Temporal expressions
  Static: no temporal expressions
  Dynamic: temporal verbs, temporal adverbs, temporal subordinate clauses, prepositions

Spatial expressions
  Static: many and exact spatial expressions
  Dynamic: few spatial expressions, less precise

Means of refocusing
  Static: focusing and refocusing on picture elements by localising expressions combined with stress, loudness and voice quality
  Dynamic: focusing and refocusing on picture elements mainly by using discourse markers

Pseudo-coordinations
  Static: few pseudo-coordinations
  Dynamic: frequent usage of pseudo-coordinations

Presentational expressions ('there is', 'there are', 'you can see', 'to have')
  Static: a high frequency of presentational expressions: 'there is', 'there are', 'it is', 'it was' etc.
  Dynamic: few 'there is', 'there are'

Dynamic motion verbs
  Static: mainly nouns, few dynamic verbs, mostly auxiliary and position verbs
  Dynamic: many dynamic motion verbs

How are these results related to the focus types presented in Chapter 2? One could assume that localising foci and substantive foci would be characteristic of the static style, whereas evaluative foci, introspective foci and a substantive listing of items (mentioning different activities) would be typical of a dynamic style. A t-test showed, however, that localising foci (loc) were not the most important predictors of the static style. They were frequently present, but instead expert foci and meta foci reached significant results as being typical of the static style (p = 0,03). Concerning the dynamic style, evaluative foci were quite dominant but not significant; list of items was typical and close to significant (p = 0,06). Both the temporal aspect and the dynamic verbs, which turned out to be typical of a dynamic description style, were part of the substantive and summarising foci.
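The kind of t-test just mentioned can be illustrated with a minimal sketch. The per-informant proportions below are hypothetical placeholders (the per-focus-type counts per informant are not reported here), and SciPy is assumed; the point is only the shape of the comparison between the two style groups.

```python
# Hedged sketch of the reported comparison: a two-sample t-test on the
# per-informant share of one focus type (e.g. expert/meta foci) in the
# static vs. the dynamic group. The proportions are hypothetical.
from scipy import stats

static_group = [0.12, 0.09, 0.11, 0.10]         # four static describers (illustrative)
dynamic_group = [0.02, 0.04, 0.00, 0.03, 0.01]  # five dynamic describers (illustrative)

t_stat, p_value = stats.ttest_ind(static_group, dynamic_group)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```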
Table 2. The distribution of the most important linguistic variables in the twelve descriptions (off-line interact).

Subject No.  # foci  Style    # there is  % there is  # spatial  % spatial  # temporal  % temporal  # dynamic verbs  % dynamic verbs
1            28      Dynamic   3          0,11         1         0,04       11          0,39        16               0,57
4            25      Dynamic   5          0,20         3         0,12       15          0,60        16               0,64
5            41      Dynamic  10          0,24         4         0,10        6          0,15        20               0,49
7            69      Dynamic  12          0,17        12         0,17       10          0,14        35               0,51
11           66      Dynamic  15          0,23        14         0,21       11          0,17        23               0,35
2            24      MIX       6          0,25         5         0,21        6          0,25         0               0,00
10           64      MIX      19          0,30         8         0,13        4          0,06        21               0,33
12           29      MIX       8          0,28         4         0,14        2          0,07         3               0,10
3            29      Static   18          0,62         5         0,17        0          0,00         4               0,14
6            65      Static   28          0,43        20         0,31        4          0,06         9               0,14
8            98      Static   23          0,23        46         0,47        0          0,00        25               0,26
9            72      Static   33          0,46         9         0,13        0          0,00        13               0,18
Mean         50,83
Sum          610
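As a side note, the percentage columns in Table 2 are simply the raw counts divided by each informant's number of foci. A small sketch (assuming pandas; counts as printed in the table) reproduces two of the derived columns and the group-level tendencies:

```python
# Recomputing two of the percentage columns in Table 2 and summarising
# them per style group. Values are the counts printed in the table.
import pandas as pd

df = pd.DataFrame({
    "subject":  [1, 4, 5, 7, 11, 2, 10, 12, 3, 6, 8, 9],
    "style":    ["Dynamic"] * 5 + ["MIX"] * 3 + ["Static"] * 4,
    "foci":     [28, 25, 41, 69, 66, 24, 64, 29, 29, 65, 98, 72],
    "there_is": [3, 5, 10, 12, 15, 6, 19, 8, 18, 28, 23, 33],
    "temporal": [11, 15, 6, 10, 11, 6, 4, 2, 0, 4, 0, 0],
})

df["pct_there_is"] = (df["there_is"] / df["foci"]).round(2)  # e.g. 3/28 -> 0.11
df["pct_temporal"] = (df["temporal"] / df["foci"]).round(2)  # e.g. 11/28 -> 0.39

print(df.groupby("style")[["pct_there_is", "pct_temporal"]].mean().round(2))
```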
Table 1 summarises the most important features of the two description styles. When it comes to the frequency and distribution of these two styles, the dynamic description style was mostly evident with five informants (P1, P4, P5, P7 and P11), whereas the static description style was dominant with four informants (P3, P6, P8 and P9). The descriptions of the three remaining informants (P2, P10, P12) showed a mixture of a dynamic and a static style. They usually began in the dynamic style but, after a while, they started with lists and enumerations, focusing on spatial aspects, interpretative comments and comparisons. Thus, the style at the end of their descriptions was closer to the static description style.

In order to substantiate the characteristics of the two description styles, I have quantified some of the most important linguistic aspects mentioned above. Table 2 shows the distribution of the linguistic variables in the twelve descriptions from memory. The informants are grouped according to the predominating description style.

The overview in Table 2 encourages a factor analysis – a way to summarise data and explain variation in the data (see the two screen dumps in Figure 1). It might be interesting to find out which variables are grouped together. Since my data were not so extensive, the factor analysis has only an explorative character. Let me only briefly mention the results. Data from eleven informants were used as a basis for the factor analysis; one informant was excluded from the analysis because of her age. Ten variables were analysed: the length of the whole description, the length of the free description, the number of foci, the number of temporal expressions, the number of spatial expressions, refocusing with discourse markers (DM), the number of pseudo-coordinations, the number of there is/there are etc., and the number of dynamic motion verbs. An extra variable was added to those mentioned previously: whether the informants call the characters by name.

Figure 1. Summary of the factor analysis

As a result, three factors were suggested by the system. Factors 1 and 2 together explained 70 percent of the variation in the data. They were also easy to interpret: Factor 1 summarised the static style, while Factor 2 characterised the dynamic style. Thereby, the hypotheses about the two description styles were confirmed. Factor 3 was more difficult to name. On the basis of the variables grouped by this factor, including explicit connectors and coherence markers, I interpreted it as expressing the dynamics of the description coherence.

To sum up, in the twelve descriptions from an interactive setting, two dominant styles were identified: the static and the dynamic description style. Attending to spatial relations was dominant in the static description style, while attending to the flow of time was the dominant pattern in the dynamic description style.
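For readers who wish to retrace this kind of exploratory analysis, the following sketch shows how the variable groupings could be explored with standard Python tools. It is an illustrative reconstruction, not the original run: it uses only the subset of variables printed in Table 2, includes all twelve informants, and its loadings will therefore not reproduce the reported factors exactly.

```python
# Exploratory factor analysis over a subset of the description variables
# (counts from Table 2). Illustrative sketch only.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

data = pd.DataFrame({
    "foci":     [28, 25, 41, 69, 66, 24, 64, 29, 29, 65, 98, 72],
    "there_is": [3, 5, 10, 12, 15, 6, 19, 8, 18, 28, 23, 33],
    "spatial":  [1, 3, 4, 12, 14, 5, 8, 4, 5, 20, 46, 9],
    "temporal": [11, 15, 6, 10, 11, 6, 4, 2, 0, 4, 0, 0],
    "dynamic":  [16, 16, 20, 35, 23, 0, 21, 3, 4, 9, 25, 13],
})

X = StandardScaler().fit_transform(data)      # z-score each variable first
fa = FactorAnalysis(n_components=3, random_state=0).fit(X)

# High loadings show which variables group together on each factor
loadings = pd.DataFrame(fa.components_.T, index=data.columns,
                        columns=["factor1", "factor2", "factor3"])
print(loadings.round(2))
```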
1.3 Cognitive, experiential and contextual factors
What are the possible explanations for these two styles found in the picture descriptions? One observation concerns gender differences. If we compare the results in Table 2 with the informants’ characteristics, we can see a preliminary tendency for women to be more dynamic and for men to have a more static description style. However, more studies are needed to confirm this pattern. Another possibility would be the difference between ‘visual thinkers’ and ‘verbal thinkers’ (Holsanova 1997b), which will be discussed in the next section.
Yet another possibility is that these description styles are picture-specific – in contrast to other 'sources of perception'. It could be the case that some of the results are due to the fact that the informants either verbalise the picture as a representation or focus on the content of the represented scene. The question whether these results are also valid for scene description and event description in general has to be tested empirically. Another possibility is that this particular picture, with its repetitive figures, may have affected the way it was described.

Furthermore, an additional source of explanation lies in the non-linguistic and contextual aspects that might to some extent have influenced what people focus on during picture viewing, what they remember afterwards and what (and how) they describe verbally. For instance, previous knowledge of the picture, of the genre, of the book, of the characters or of the story may be the most critical factors for the distinction of the two description styles, since not knowing the genre and/or characters may lead to a less dynamic description. One could expect that if the informants have read the book (to themselves or to their children), the picture will remind them of the story and the style will become dynamic and narrative. If the informants know the Pettson and Findus characters from Sven Nordqvist's books, films, TV programmes, computer games, calendars etc., they can easily identify the activities that these characters are usually involved in, and the description will become dynamic. Finally, if the informants know the particular story from the children's book, they can go over to a 'story-telling mode' and deliver a dynamic description with narrative elements. On the other hand, some informants may follow the instruction more strictly and, on the basis of their discourse genre knowledge, formulate a rather static picture description.

Here is some evidence from the data. In fact, ten informants mentioned the characters Pettson and Findus by name (P1, P2, P4, P5, P7, P8, P9, P10, P11, P12). Two of the informants even seem to know the story, since they included details about the meatballs that Findus has planted. Three informants recognised and explicitly mentioned the source where the picture comes from or the illustrated children's book genre (P3, P9, P10). The only informant who mentions neither the genre nor the characters is P6. If this is because he does not know them, this could have influenced his way of describing the scene. Table 3 summarises various cognitive, experiential and contextual factors that might have played a role.

Table 3. Overview of cognitive, experiential and contextual factors that might have influenced the descriptions

Cognitive, experiential and contextual factors: Example
- Scene schema knowledge: semantic characteristics of the scene: rural landscape, gardening in the spring
- Pictorial genre knowledge: children's book illustration
- Knowledge of the characters: old guy Pettson and his cat Findus
- Particular knowledge of the book
- Particular knowledge of the story
- Particular knowledge of the picture
- Discourse genre knowledge: spoken picture description
- Informant's way of remembering things
- Informant's background and interests: fauna and flora
- Informant's expertise: on painting techniques, farming, gardening etc.
- Informant's associations: activities that Pettson and Findus usually are involved in, spring, harmony
- Informant's linguistic and cultural background: language-specific ways of classifying and structuring scenes and events
- The interactional setting: description to a specific listener

So if the complex picture primed the knowledge of the story book, and if this knowledge of the story book was the critical factor that influenced the dynamic (narrative) style of the picture description, then we can assume the following:

• a narrative priming will increase the dynamic elements in the picture descriptions
• a spatial priming will, on the contrary, increase the static elements in the picture description.
We will test these hypotheses and describe the effects of spatial and narrative priming later on in this chapter. Before that, let us turn to the second section, which will be devoted to the discussion of individual differences in general and verbal and visual thinkers in particular.
2. Verbal or visual thinkers?
Quite often, the distinction between verbal and visual thinkers is made in psychology, pedagogy, linguistics and cognitive sciences. The question is whether we can draw parallels between the dynamic and the static picture description style on the one hand and the verbal and visual thinkers on the other. In the
following, I will discuss the two extracted styles from different theoretical and empirical perspectives: from studies on individual differences, experiments on remembering and theories about information retrieval and storage.

The results concerning the two description styles are supported by Grow's (1996) study on text-writing problems. Grow has analysed the written essays of students and divided them into verbal and visual thinkers on the basis of how they express themselves in written language. He points out some general problems that visual thinkers have when expressing themselves in written language. According to Grow, visual thinkers have trouble organising expository prose because their preferred way of thinking is fundamentally different from that of verbal thinkers. Visual thinkers do not focus on the words but rather think in pictures and in non-verbal dimensions such as lines, colours, texture, balance and proportion. They therefore have trouble expressing themselves in writing, i.e. breaking down ideas that turn up simultaneously into a linear order of smaller units, as required by language. They also have trouble presenting clear connections between these units. The fact that visual thinkers let several elements pop up at the same time, without marking the relation between them, means that it is up to the listener to draw conclusions, interpret and connect these elements. In contrast, verbal thinkers analyse, compare, relate and evaluate things all the time. Visual thinkers often list things without taking a position on the issues, do not order them or present them as events. The description becomes a static one. Furthermore, the ability of visual thinkers to dramatise and build up a climax is weak. They do not build the dynamics and do not frame the description in a context. Verbal thinkers more easily linearise and dramatise.

The features in the static picture description style noted in my study greatly resemble the general features of the ability to produce written texts that visual thinkers exhibit according to Grow. On the one hand, we have a dynamic, rhythmic and therefore very lively style, where relations between ideas are explicitly signalled using discourse markers, close to Grow's verbal thinkers. On the other hand, the static character of the picture description style, with its perceptual dominance of spatial relations, where the picture is divided into fields and many details are mentioned but no explicit connection is established between them, resembles the visual thinker. Despite the difference in medium (written vs. spoken language), it is therefore easy to draw parallels between the dynamic and the static picture description style on the one hand and the verbal and visual thinkers on the other.

Concerning information retrieval, Paivio (1971a, b, 1986) suggests that humans use two distinct codes in order to store and retrieve information. In his
dual code theory, Paivio (1986: 53f.) assumes that cognition is served by two modality-specific symbolic systems that are structurally and functionally distinct: the imagery system (specialised for representation and processing of nonverbal information) and the verbal system (specialised for language). Currently, the cognitive styles of the visualisers and the verbalisers have been characterised as "individual preferences for attending to and processing visual versus verbal information" (Jonassen & Grabowski 1993: 191; cited in Kozhevnikov et al. 2002). While visualisers rely primarily on imagery processes, verbalisers prefer to process information by verbal-logical means. According to current research (Baddeley 1992; Baddeley & Lieberman 1980), working memory consists of a central executive (controlling attention) and two specialised subsystems: a phonological loop (responsible for processing verbal information) and a visuospatial sketchpad (responsible for processing visuospatial information).

Since the stimulus picture in the study has been described off-line, let me also mention classic works on memory. Bartlett (1932: 110f.) distinguishes between the visualiser and the vocaliser when reporting the results of his experiments on remembering. According to his observations, visualisers primarily memorise individual objects, group objects based on likeness of form and, sometimes, even use secondary associations to describe the remembered objects. Vocalisers, on the other hand, prefer verbal-analytic strategies: their descriptions are influenced by naming, they use economic classifications for groups of objects and rely much more on analogies and secondary associations (it reminds me of so and so). They also frequently describe relations between objects. When speaking about verbal-analytic strategies, Bartlett mentions the possibility of distinguishing between verbalisers and vocalisers, but does not give any examples from his data. Nevertheless, there is still the possibility of a mixed type of description, using both the verbal and the visuospatial type of code. Krutetskii (1976; cited in Kozhevnikov et al. 2002: 50) studied strategies in mathematical problem solving and distinguished between three types of individual strategies on the basis of performance: the analytic type (using the verbal-logical modes), the geometric type (using imagery) and the harmonic type (using both codes).

Individual differences are in focus also in Kozhevnikov et al. (2002), who revise the visualiser-verbaliser dimension and suggest a more fine-grained distinction between spatial and iconic visualisers. In a problem-solving task, they collected evidence for these two types of visualisers. While the spatial visualisers in a schematic interpretation focus on the location of objects and on spatial relations between objects, the iconic visualisers in a pictorial
interpretation focus on high vividness and visual details like shape, size, colour and brightness. This finding is consistent with neurophysiological evidence for two functionally and anatomically independent pathways: one concerned with object vision and the other with spatial vision (Ungerleider & Mishkin 1982; Mishkin et al. 1983). Moreover, recent research on mental imagery suggests that imagery ability consists of distinct visual and spatial components (Kosslyn 1995).

If we apply the above-mentioned distinction between spatial and iconic visualisers to my data, we could conclude the following: apart from associating the static description style with the visualisers and the dynamic description style with the verbalisers, one could possibly find evidence even for these more fine-grained preferences: verbal, spatial and iconic description styles. For instance, when we look at the distribution of different types of foci (cf. Chapter 2, Section 1) and the categorising devices on the linguistic surface, the dominance of localising foci (loc) could be taken as one characteristic of the spatial visual style, whereas the dominance of substantive (subst) and evaluative foci (eval) describing the colour, size and shape of individual objects could serve as an indicator of the iconic visual style. Finally, the dominance of associations (the dragonfly reminds me of an aeroplane, the three birds in the tree are like an average Swedish Svensson-family), mental groupings and interpretations (it starts with the spring on the left) and economic summarising classifications (this is a typical Swedish landscape with people sowing) would be the typical features of a verbal description style.
3. Effects of narrative and spatial priming on the two styles
As we saw in Section 1, the data from a picture description in an interactive setting showed a clear tendency towards two styles deploying different perspectives: the static style and the dynamic description style. Attending to spatial relations in the picture was dominant in the static style, while attending to the flow of time was the dominant pattern in the dynamic style. In the previous section, these results were supported by various studies on individual differences, experiments on remembering and theories about information retrieval and storage. Since one of the possible explanations for a dynamic style was that the knowledge of the book or story would add narrative character to the description, two more studies were conducted, one with narrative and one with spatial priming.
In this last section, we will compare the results from the interactive setting with these two sets of data on picture descriptions in order to find out whether the style of picture descriptions can be affected by spatial and narrative priming. The first set of data consists of 12 off-line picture descriptions with spatial priming. The second data set consists of 15 on-line picture descriptions with narrative priming.

• Hypothesis 1 is that spatial priming will lead to a more static style of picture description. Spatial priming will be reflected in a large proportion of existential constructions, such as 'there is', a high number of spatial expressions and other characteristics of the static description style. Spatial priming will also lower the number of dynamic verbs and temporal expressions in the descriptions.

• Hypothesis 2 is that narrative priming will lead to a more dynamic style of picture description. Narrative priming will result in a large proportion of temporal expressions, a high number of dynamic verbs and other characteristics of the dynamic description style. Narrative priming will also lower the number of spatial expressions and existential constructions.

3.1 The effect of (indirect) spatial priming
The first set of data consists of picture descriptions with (indirect) spatial priming. Twelve informants (six men and six women), matched by age, viewed the same picture for 30 seconds. Afterwards, the picture was covered by a white board and the informants were asked to describe it off-line, from memory. Their eye movements were measured both during picture viewing and during picture description. All picture descriptions were self-paced, lasting on average 1 minute and 55 seconds. The instruction was as follows: 'I will show you a picture, you can look at it for a while and then I will ask you to describe it verbally. You are free to describe it as you like and you can go on describing until you feel done.' The spatial priming consisted of the following step: before the description task, all informants listened to a pre-recorded static scene description that was systematically structured according to the scene composition and contained a large proportion of 'there is'-constructions and numerous spatial descriptions ('There is a large green spruce at the centre of the picture. There is a bird sitting at the top of the spruce. To the left of the spruce, and at the far left in the picture, there is a yellow house with a black tin roof and white corners. …').
This description lasted for about 2 minutes and created an indirect priming for the current task. Would such a 'suggested' way of describing a picture have effects on the way the informants subsequently described the stimulus picture? In particular, would such an indirect spatial priming influence the occurrence of the static description style? If yes, were all informants primed to the same extent? When looking closely at the data, we can see that most descriptions were affected by the spatial priming, but that some descriptions still contained dynamic parts. Example 11 illustrates a dynamic part of a picture description, and Examples 12 and 13 demonstrate static picture descriptions in this setting.

Example 11
0401 Ok,
0402 I see the good old guy Pettson and Findus'
     där ser jag den gode gamle Pettson och Findus'
0403 eh how they are digging and planting in a garden.
     Eh hur de gräver och planterar i en trädgård,
0404 it seems to be a spring day'
     det är en vårdag tydligen'
0405 the leaves and the flowers have already come out'
     löven å blommorna har kommit ut'

Example 12
0601 Ehm in the middle to the left … Pettson is standing'
     Hm i mitten till vänster står … Pettson'
0602 And looking at something he has in his hand'
     och tittar på nånting han har i sin hand'
0603 in the middle of the garden'
     mitt i trädgårdslandet'
0604 And it is … green all around'
     å så är det… grönt runtomkring'
0605 Some birds on the left in the corner'
     några fåglar till vänster i hörnet'
0606 And then in the middle'
     sen så i mitten'
0607 Stands a tree'
     står ett träd'
0608 with … three birds and a flower'
     med … tre fåglar och en blomma'
Table 4. The overall number and proportion of linguistic parameters in off-line interact compared to off-line + ET (spatial priming).

Linguistic indicators    off-line, interact    off-line + spatial priming    Sign.
'there is'               180 (30 %)            110 (35 %)                    NS
spatial expressions      131 (21 %)            104 (33 %)                    NS
temporal expressions      69 (11 %)             15 (5 %)                     *
dynamic verbs            185 (30 %)            104 (33 %)                    NS
Number of foci           610                   311                           *
Average duration         2 min 50 sec          1 min 55 sec                  *
Mean # foci              50,83                 25,92                         *
Example 13
1201 ehhaa … on the picture you can see a man'
     ehhaa … på bilden ser man en man'
1202 with a vest and hat and beard and glasses,
     med en väst å hatt å skägg å glasögon,
1203 Eh the same man in four different eh … what should I call it … situations,
     Eh samma man i fyra olika eh . vad ska man säga … situationer,

In order to substantiate the analysis, a number of important linguistic variables and indicators have been quantified. Table 4 shows the proportion of linguistic parameters (i.e. 'there is'/'there are', spatial expressions, temporal expressions and dynamic verbs) typical of the static and dynamic styles in two settings: the off-line interactive setting and the off-line setting with spatial priming. As assumed, the general tendency was that more localisations were used under spatial priming: we found only 21 percent spatial expressions in the interactive condition, whereas they were more frequent in the primed condition (33 percent). One major difference was that picture descriptions with spatial priming contained significantly fewer temporal verbs/adverbs compared to the off-line interactive setting (one-way ANOVA, F = 8,8952; p = 0,006). The length of the description is another parameter typical of a static description style; the number of foci was indeed significantly smaller in the spatial priming condition (p = 0,008). Contrary to our hypothesis, there were, however, no large differences between the two settings concerning the proportion of 'there is' (30 percent and 35 percent of all foci respectively) and dynamic verbs (30 percent and 33 percent of all foci respectively). Figure 2 visualises the results of the comparison.

Figure 2. Spatial priming: Proportion of linguistic parameters in the interactive setting (left) and spatial priming (right).
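The one-way ANOVA reported above can be sketched as follows. The per-informant counts for the interactive setting are taken from Table 2 (they sum to 69, as in Table 4); the counts for the primed condition are hypothetical placeholders that merely sum to the reported total of 15. SciPy is assumed.

```python
# Illustrative one-way ANOVA on the number of temporal expressions per
# description: interactive setting vs. spatially primed setting.
from scipy import stats

temporal_interact = [11, 15, 6, 10, 11, 6, 4, 2, 0, 4, 0, 0]  # from Table 2 (sum = 69)
temporal_primed = [2, 1, 0, 3, 1, 0, 2, 1, 2, 0, 1, 2]        # hypothetical (sum = 15)

f_stat, p_value = stats.f_oneway(temporal_interact, temporal_primed)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```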
The result was that due to the indirect spatial priming – the informants participated in a spatial task directly before the current task – the descriptions contained a larger number of localisations and significantly fewer temporal expressions. The descriptions were also significantly shorter. The overall character of the picture descriptions in the data from the off-line setting thus shifts towards the static dimension. An additional explanation of the static character might be that the speakers formulated their descriptions while looking at a white board in front of them instead of having an active listener present during the task. In this way, they are more bound to the actual picture elements and their positions. The fact that some parts of the descriptions were not primed to the same extent and included dynamic descriptions can be explained by a possible 'conflict' in priming: if the informants recognised the children's book or the story when they saw the picture, they were narratively primed by the book and spatially primed by the instruction. Let us now continue with narrative priming.
3.2 The effect of narrative priming

The second set of data consists of picture descriptions with narrative priming. Fifteen informants (six men and nine women), matched by age, described the same picture. This time, they were given a narrative priming and described the picture on-line. The description was self-paced, lasting on average 2 minutes and 30 seconds. The task was to tell a story about what happens in the picture. Would a narrative priming influence the occurrence of a dynamic description style in on-line descriptions? The on-line picture descriptions were analysed thoroughly. The result was that many dynamic descriptions were produced and that the dynamic aspects were strengthened (cf. Table 5).
Table 5. Overall number and proportion of linguistic indicators for two styles in: off-line interactive setting and on-line description with narrative priming.

Linguistic indicators     off-line, interact     on-line, narrative priming     Sign.
‘there is’                180 (30 %)              28 (9 %)                      *
spatial expressions       131 (21 %)              38 (12,5 %)                   *
temporal expressions       69 (11 %)             172 (57 %)                     *
dynamic verbs             185 (30 %)              52 (17 %)                     *
Number of foci            610                    302                            *
Average duration          2 min 50 sec           2 min 30 sec                   NS
Mean # foci               50,83                  20,13                          *
Temporal expressions (temporal verbs and adverbs) and dynamic verbs were two of the relevant indicators of a dynamic style. What is most striking when comparing these two data sets is the dramatic increase of temporal expressions and the decrease of existential constructions – ‘there is’ – in descriptions with narrative priming (cf. Figure 3). 57 percent of all foci contain temporal verbs or adverbs, compared with only 11 percent in the interactive setting (p = 0,0056). Only 9 percent of all foci contain a ‘there is’-construction, compared to 30 percent in the interactive setting (p = 0,0005). As assumed, the proportion of spatial expressions was significantly lower in the narrative setting than in the interactive one (p = 0,038). Contrary to our expectations, the number of dynamic verbs dropped. It can be concluded that the narrative priming did influence the way the informants described the picture.
[Two bar charts: left, “Relative proportion of linguistic parameters in 12 off-line + listener”; right, “Relative proportion of linguistic parameters in 15 on-line + narrative”; x-axis: linguistic parameters 1–4 (‘there is’, spatial expressions, temporal expressions, dynamic verbs); y-axis: relative proportion.]
Figure 3. Narrative priming: Proportion of linguistic parameters in the interactive setting (left) and narrative priming (right).
In particular, the narrative priming mostly enhanced the temporal dynamics in the description. The effect is even stronger than that of the indirect spatial priming. Also, the use of ‘there is’ and spatial expressions dropped in this setting. The reason for this might be that the informants did not feel bound to the picture. Some of the informants created a completely new story out of the picture elements, far from the actual constellation. Another possible explanation can be found in double priming: if the informants recognised the characters, the book or the story, they were narratively primed both by the book and by the instruction. These two factors might have supported each other in a tandem fashion. Furthermore, compared to the dynamic style in the off-line condition, we find more types of narrative elements here: the informants frequently use temporal specifications, they connect the foci by means of discourse markers, and they use projected speech (direct and indirect quotation of the figures). Some of the speakers even include a real story with a punch line and a typical story ending (cf. Example 14).

Example 14
0712 so. . Pettson starts to loosen up the soil’
     så Petterson börjar och luckrar upp jorden’
0713 and then he finds something weird,
     och då hittar han något konstigt,
0714 he looks at it . properly,
     han tittar . ordentligt,
0715 and .– it is a seed that has probably flown here from somewhere,
     och. det är ett frö som nog har flugit hit från någonstans,
0716 he has never seen such a seed,
     . han har aldrig sett ett sånt frö,
0717 even if he’s a good gardener,
     även om han är en kän/ en duktig trädgårdsmästare,
0718 and has seen most of it,
     och varit med om det mesta,
0719 so he becomes extremely curious,
     så han blir jättenyfiken,
0720 he . shouts to the little cat:
     han . ropar till lilla katten:
0721 come and take a waterhose here’
     ta nu en vattenslang här’
0722 take it at once/ I have to plant this seed,
     ta genast/ jag måste genast plantera den här frön,
0723 and so . he does it’
     och . så gör han det’
……
0735 and what happens with this seed’
     och vad som händer med den här frön’
0736 that you will get to know in the next episode,
     det kommer ni att höra i nästa avsnitt,

The aim of this section was to compare the results concerning static and dynamic styles with other data. Spatial and narrative priming had effects on the description style. What seems to be influenced most by the spatial priming is a larger number of localisations, significantly fewer temporal expressions and significantly shorter descriptions. The character of the descriptions shifted towards a static style. The narrative priming gave rise to a significantly larger proportion of temporal expressions and a significant drop in spatial expressions and existential constructions of the type ‘there is/there are’. In particular, narrative priming enhanced the temporal dynamics in the description. The effect was even stronger than that of the indirect spatial priming. Questions that remain to be answered are the following: Does the presence of a listener influence the occurrence of these two styles? If we did not find the two styles in other types of off-line description, there would be reason to think that these are conversationally determined styles that only occur in face-to-face interaction. To test this, we have to find out whether these styles appear in off-line monological picture descriptions.
4. Conclusion
In the twelve descriptions, two dominant styles were identified: the static and the dynamic description style. Attending to spatial relations is dominant in the static description style where the picture is decomposed into fields that are then described systematically, with a variety of terms for spatial relations. In the course of the description, informants establish an elaborate set of referential frames that are used for localisations. They give a precise number of picture elements, stating their colour, geometric form and position. Apart from spatial expressions, the typical features of the static description style are frequent use of nouns, existential constructions (‘there is’, ‘it is’, ‘it was’), auxiliary or position verbs and passive voice. Focusing and refocusing on picture elements is
done by localising expressions in combination with stress, loudness and voice quality. All the above-mentioned traits together contribute to the static character of the description. Attending to the flow of time is the dominant pattern in the dynamic description style, where the informants primarily focus on temporal relations and dynamic events in the picture. The speakers describe the picture sequentially; they explicitly make it clear that they are talking about steps in a process, about successive phases, about a certain order. The dynamic quality of this style is achieved by a frequent use of temporal verbs, temporal adverbs and motion verbs in an active voice. Discourse markers are often used to focus and refocus on the picture elements, and to interconnect them. Apart from the above-mentioned features, the informants seem to follow a (narrative) schema: the descriptions start with an introduction of the main characters, their involvement in various activities and a description of the scene. In Section 2, the two extracted styles were discussed from different theoretical and empirical perspectives: from studies on individual differences, experiments on remembering and theories about information retrieval and storage. The results were connected to studies about visual and verbal thinkers and spatial and iconic visualisers and were supported by dual coding theory and the neurophysiological literature. Since one of the possible explanations for the dynamic style was that knowledge of the book or story would add narrative character to the description, two more studies were conducted, one with narrative and one with spatial priming. Thus, Section 3 focused on the comparison between the results from the interactive setting and other data on picture descriptions, in order to find out whether the static and dynamic styles of picture descriptions can be affected by spatial and narrative priming. We can conclude that spatial and narrative priming had effects on the description style. Spatial priming leads to a larger number of localisations, significantly fewer temporal expressions and significantly shorter descriptions, whereas narrative priming mostly enhances the temporal dynamics in the description. The first four chapters of this book have dealt with various characteristics of picture descriptions in different settings. In the remaining four chapters, we will broaden the perspective and connect the process of picture viewing to that of picture description. In the next chapter, the reader will get acquainted with the multimodal method and the analytical tools that I used when studying the correspondence between verbal and visual data.
chapter 5
Multimodal sequential method and analytic tool
When we describe pictures or scenes, we usually do not express everything we have experienced visually. Instead, we pick out certain picture elements depending on our tasks, preferences and communicative goals. The verbal description and the perceptual experience of the picture can be ‘filtered’ by several principles: for example by the saliency principle (expected, preferred and important elements are described first or most), by the animacy principle (human beings and other animate elements are described first or most) (Lucy 1992) or by the relevance principle (elements that are relevant to the listener will be selected). We can imagine that the selected elements get ‘highlighted’ against the background of the available elements. What we still do not know exactly is the relation between what we see and what we express, i.e. what we attend to visually and what we describe verbally in the picture or scene. How can we investigate and measure this? Are there any suitable methods?
In Chapters 1–4, the reader has been presented with the characteristics of the spoken picture description. In the following four chapters, I will broaden the perspective and explore the connection between spoken descriptive discourse, visual discovery of the picture and mental imagery. The current chapter deals with methodological questions, and the focus is on sequential and processual aspects of picture viewing and picture description. First, spoken language and vision will be treated as two windows to the mind and the focusing of attention will be conceived of with the help of a ‘spotlight’ metaphor. Second, a new eye tracking study will be described and the multimodal score sheet will be introduced as a tool for analysis of temporal and semantic correspondence between verbal and visual data. Third, a multimodal sequential method suitable for a detailed dynamic comparison of verbal and visual data will be described.
1. Picture viewing and picture description: Two windows to the mind
It is, of course, impossible to directly uncover the content of our minds. If we want to learn about how the mind works, we have to do it indirectly, via overt manifestations, be it behavioural measures such as eye and body tracking or psychophysiological and neuroimaging data such as GSR, ERP and fMRI, just to mention a few currently popular methodologies that may complement the analysis of speech. Psychologists, linguists, psycholinguists and cognitive scientists use different approaches and make various attempts to come closer to how the mind works. Eye movements have always been of great interest to cognitive scientists, since conscious control is related to human volition, intention and attention (Solso 1994). Empirical evidence suggests that the direction of eye movements and the direction of attention are linked (Schneider & Deubel 1995; Theeuwes et al. 1998; Juola et al. 1991). Although covert attention has been discussed (Posner 1980), its role in normal visual scanning has been minimised (Findlay & Walker 1999). Eye movements have thus been used to obtain valid measurements of a person’s interests and cognitive processes, for instance when watching works of art (Buswell 1935; Yarbus 1967). Just and Carpenter (1976, 1980) conclude from their findings in written text comprehension that readers fixate each word until processing at perceptual, linguistic and conceptual levels has been completed. They propose a strong eye–mind assumption: when looking at relevant visual displays, there is no lag between what the observer is fixating and what the mind is processing (Just and Carpenter 1980: 331). In contrast, Underwood and Everatt (1992) argue that one current fixation can indicate past, present or future information acquisition, which weakens the strong eye–mind assumption (cf. also Slowiaczek 1983). Eye fixations have been considered a boundary between perception and cognition, since they overtly indicate that information was acquired. Thanks to new technology, eye movements “provide an unobtrusive, sensitive, real-time behavioural index of ongoing visual and cognitive processing” (Henderson & Ferreira 2004: 18). More information about eye movement analysis will be considered in the sections to follow. Another way of approaching the mind is via spoken language. Within psycholinguistic research, the spurt-like character of speech and dysfluencies in spoken language production have been explored in order to reveal planning and monitoring activities on different levels of discourse (Garrett 1980; Linell 1982; Strömqvist 1996). The hypothesis behind these approaches is that the dysfluencies that belong to the flow of language production reflect the flow of thought.
In psychology and cognitive science, spoken language has been used to externalise mental processes in the form of verbal protocols or think-aloud protocols (Ericsson & Simon 1980), where subjects are asked to verbally report their sequences of thought during different tasks. Verbal protocols have been extensively used to reveal steps in reasoning, decision-making and problem-solving processes and applied as a tool for design and usability testing. Linguists have also tried to access the mind through spoken descriptive discourse (Linde & Labov 1975) and through a consciousness-based analysis of narrative discourse (Chafe 1996).
1.1 Synchronising verbal and visual data
It has been argued that eye movements reflect human thought processes, since it is easy to determine which elements attract the observer’s eye (and thought), in which order and how often. But eye movements reveal these covert processes only to a certain extent. We gain only information about which area the fixation landed on, not any information about what level the viewer was focusing on (was it the contents, the format, or the colour of the inspected area?). An area may be fixated visually for different purposes: in order to identify a picture element, to compare certain traits of an object with traits of another area, to decide whether the momentary inferences about a picture element are true, or to check details on a higher level of abstraction. Verbal foci and superfoci include descriptions of objects and locations, but also attitudes, impressions, motives and interpretations. A simultaneous verbal picture description gives us further insights into the cognitive processes. There are several possible ways to use the combination of verbal and visual data. In recent studies within the so-called visual world paradigm, eye movement tracking has been used as a tool in psycholinguistic studies on object recognition and naming and on reading and language comprehension (for an overview see Meyer & Dobel 2003 and Griffin 2004). The static pictorial stimuli were drawings of one, two or three objects, and the linguistic levels concerned single words and referring expressions, nouns, pronouns, simple noun phrases (‘the cat and the chair’) and only to a very small extent utterances, for instance ‘the angel is next to the camel’, ‘the cowboy gives the hat to the clown’. The hypothesis behind this line of research is that eye movements are closely time-locked to the speech stream and that eye movements are tightly coupled with
cognitive processes, in particular in connection to speech planning (Meyer & Dobel 2003). For example, it was found that speakers usually fixate on new objects just before mentioning them, which is – measured on a millisecond scale – approximately 800 ms before naming (Griffin & Bock 2000). When listening, there is a latency of about 400–800 ms from the onset of the word to the moment when the eyes look at the picture (Cooper 1974). Dynamic scenes and spoken descriptions were studied in Tomlin’s fish-film experiment (Tomlin 1995, 1997; see also Diderichsen 2001). The question under investigation was how focal attention affects grammatical constructions. The combination of verbal and visual data can also be used to disambiguate local vagueness in one data set, as has been done in some propositional approaches. For instance, Hauland (1996), Hauland & Hallbert (1995) and Braarud et al. (1997) used language protocols as a complement to the eye movement data in the hope of revealing more exact information about the point of vision. The aim was to bridge the vagueness of the spoken language protocols by tracking the direction of the gaze. For instance, the meaning of a deictic term such as ‘that’ can be resolved by processing the x, y coordinate indicated by the gaze at an object. The complementarity of verbal and visual data was used to overcome the weaknesses of the respective modality and to guarantee a more robust and reliable interpretation of some points in the informants’ interactions (cf. also Strohner 1996). This methodology is based on the assumption that a visually focused object always has a counterpart in the spoken language description that is temporally aligned with it. However, as I will argue in my next two chapters, the relation between the objects focused on visually and the objects described verbally seems to be more complicated. In my studies, I combine the analysis of two sources, eye movement data and simultaneous spoken descriptions, with the expectation that these two kinds of data can give us distinct hints about the dynamics of the underlying cognitive processes. On the one hand, eye movements reflect human thought processes. It is easy to determine which elements attract the observer’s gaze, in what order and how often. In short, eye movements offer us a window to the mind. On the other hand, verbal foci formulated during picture description are the linguistic expressions of a conscious focus of attention. With the help of a segmented transcript, we can learn what is in the verbal focus at a given time. In short, spoken language description offers us another window to the mind. Both kinds of data are used as an indirect source to gain insights about the underlying cognitive processes and about human information processing. (The relation between language and mind is also discussed by Gärdenfors 1996.)
The way to uncover the ‘idea unit’ goes via spoken language in action and via the process of visual focusing.
1.2 Focusing attention: The spotlight metaphor

The human ability to focus attention on a smaller part of the visual field has been discussed in the literature and likened to a spotlight (Posner 1980; Theeuwes 1993; Olshausen & Koch 1995) or to a zoom-lens (Findlay & Walker 1998). “(…) the locus of directed attention in visual space is thought of as having greater illumination than areas to which attention is not directed or areas from which attention has been removed” (Theeuwes 1993: 95). “The current consensus is that the spotlight of attention turns off at one location and then on at another” (Mozer & Sitton 1998: 369). The explanations for such a spotlight differ, though. Traditionally, attention is viewed as a limited mental resource that constrains cognitive processing. In an alternative view, the concept is viewed in terms of the functional requirements of the current task (Allport 1989). What we attend to during visual perception and spoken language description can be conceived of with the help of a spotlight metaphor, which intuitively provides a notion of limitation and focus. Actually, the ‘spotlight’ comes from the inner world, from the mind. It is the human ability to visually and verbally focus attention on one part of the information flow at a time. Here, this spotlight is transformed to both a verbal and a visual beam (cf. Figure 1). The picture elements fixated are visually in the focus of a spotlight and embedded in a context. The spotlight moves to the next area that pops up from the periphery and will be in focus for a while. If we focus our concentration and eye movements on a point, we mostly also divert our attention to that point.
Figure 1. Verbal focus, visual focus and ‘idea unit’.
By using a sequential visual fixation analysis, we can follow the path of attention deployed by the observer. Concerning spoken language description, it has been shown that we focus on one idea at a time (Chafe 1980, 1994). The picture elements described are in the focus of an attentional spotlight and embedded in the discourse context. What I will be investigating in the following is the relationship between the content of the visual focus of attention (specifically clusters of visual fixations) and the content of the verbal focus of attention (specifically clusters of verbal foci). The common denominator of both modalities, vision and spoken language, is the way we focus attention when we move from one picture element to another. In a dynamic processual view, the spotlight metaphor can be applied to both the visual scanning of the picture and the spoken language description of the picture. Verbal and visual protocols can, in concert, elucidate covert processes. Using ‘two windows to the mind’ offers a better understanding of information processing and has implications for our understanding of cognition, attention and the nature of consciousness. I propose a multimodal sequential method that can be successfully used for a dynamic analysis of perception and action in general. The units extracted from the empirical data give us hints about the ways in which information is acquired and processed in the human mind. The multimodal clusters extracted from verbal and visual foci illustrate how humans connect vision and speech while engaged in certain activities.
1.3 How it all began …
Wallace Chafe, who studied the dynamic nature of language in connection with the flow of ideas through consciousness and who has demonstrated that the study of language and consciousness can provide an unexpectedly broad understanding of the way the mind works, referred to an unpublished but very interesting experiment performed by Charlotte Baker at Berkeley during the late 1970s (Chafe 1980). Baker studied the connection between eye movements when looking at a picture and the corresponding description of the picture content from memory. It turned out that the linguistic segments corresponded closely to the gaze movements. Not only were the parts of the picture to which attention was paid the same; the order in which a subject focused on these parts was also identical. Based on these results, Chafe suggested that: Similar principles are involved in the way information is acquired from the environment (for example, through eye movements), in the way it is scanned by consciousness during recall, and in the way it is verbalised. All three processes
may be guided by a single executive mechanism which determines what is focused on, for how long, and in what sequence. (Chafe 1980: 16)
Chafe has also suggested that language and vision have similar properties: both proceed in brief spurts and have a focus and a periphery (Chafe 1980, 1994, 1996). As far as I know, this is the first known eye tracking study exploring the couplings between vision and discourse by using a complex picture. (And it took almost 25 years before the topic was taken up again.) Comparing patterns during the visual scanning of the picture and during the verbal description of the picture is a very useful method that provides robust results. It helps us answer the question of whether there is a unifying system integrating vision and language, and it enriches the research about the nature of human attention, vision, discourse and consciousness.
2. Simultaneous description with eye tracking
I collected new data in order to study the semantic and temporal correspondence between units in spoken description and picture viewing. Four informants were asked to verbally describe a complex picture while they were inspecting it (on-line). The aim of the study was to observe and identify patterns and phenomena. The motif was a complex picture – a Swedish children’s book illustration (cf. Figure 2). The duration of each session was self-determined. Picture viewing and picture description lasted on average 2,25 minutes. The process of the visual discovery of the picture was registered using an eye tracker.
Figure 2. The motif is from Nordqvist (1990). Kackel i grönsakslandet. Opal. (Translated as ‘Festus and Mercury: Ruckus in the Garden.’)
We used the remote SMI iView system with an infrared camera measuring the pupil–corneal reflex at a 50 Hz scan rate. The simultaneous spoken language description was recorded, transcribed and segmented into verbal foci and superfoci (cf. Chapter 1, Section 2.1). Visual and verbal data were then synchronised in time. When comparing the data, a new method was applied, using multimodal score sheets (see the following sections).
2.1 Characteristics of the picture: Complex depicted scene

The type of scene is, of course, a very important factor influencing both the form and content of verbal picture descriptions and the eye movement patterns during picture viewing. Let me therefore dwell for a while on the chosen complex picture, characterise it in more detail and relate it to a scene classification within scene perception research. Henderson & Hollingworth (1999) define a scene as a “semantically coherent (and often nameable) human-scaled view of a real-world environment comprising background elements and multiple discrete objects arranged in a spatially licensed manner” (Henderson & Ferreira 2004: 5). The chosen illustration can be characterised by the following features:
• it is a true scene (not a randomly arranged spatial array of objects),
• it is a depiction as a stand-in for a real environment,
• it is a colour rendering,
• it is a complex scene,
• it consists of an immovable background and multiple moving or movable discrete objects,
• it contains agents (persons and animals),
• it depicts both states and events and allows different scan paths and thereby also different ‘readings’,
• it has a route to semantic interpretation,
• it is nameable,
• the gist of the scene can be rapidly apprehended.
Similarly to the scene perception researchers (Henderson & Hollingworth 1998, 1999; Henderson & Ferreira 2004), we are interested in what picture areas attract attention, in what order and for how long. Let us start with the question of where we tend to look at pictures and scenes. What will be most interesting for observers? In scene perception research, different methods have been used to determine which areas will attract maximal attention (Henderson & Hollingworth 1998). Areas of interest were determined either by the researchers themselves (Buswell 1935) or by naive viewer ratings (Mackworth & Morandi 1967),
and the influence of semantic ‘region informativeness’ was investigated (Loftus and Mackworth 1978). It was found that contextually guided attention enhances the detection of objects in a scene (de Graef 1990). From a functional perspective, different areas of the same scene will become relevant and informative depending on the communicative task or goal (preference task, memory task, picture description etc.). Yarbus (1967) already stressed the role of the task in guiding where and when to fixate, and recent research on eye movements in natural behaviour has confirmed its importance (Hayhoe 2004; Hayhoe & Ballard 2005). Apart from that, we want to know more about the interplay between language production and visual perception. A given scene can evoke different descriptions depending on the describers’ knowledge, associations, preferences and other cognitive, experiential and contextual factors.
2.2 Characteristics of the spoken picture description: Discourse level

Like psycholinguistic researchers in general, I am interested in revealing the dynamics of language production (and visual perception). However, unlike the majority of the eye tracking-based psycholinguistic studies focusing on the word or sentence level, the studies presented in this book focus on higher levels of discourse. The starting points are free descriptions containing complex ideas about the picture discovery that are formulated in large units of discourse. Compared to verbal protocols or think-aloud protocols, where subjects are explicitly asked to externalise their mental processes during an activity, the informants in our study were asked to describe a picture. Although our informants were not instructed to comment on their mental processes, we obtained a spontaneous and coherent line of discourse (and thought) that contained embedded sequences expressing associations, attitudes, and meta-cognitive comments. The reason for this is as follows: when looking at a complex picture, scene or environment, we do not only reveal our thoughts about the discrete objects that we see in the form of pure descriptive verbal foci. We also formulate our associations, comments and impressions, refocus on certain picture elements, on the situation, on what we said, and recategorise what we have seen, all of which appears as the various types of foci that we have discussed in Chapter 2. In other words, apart from reporting about WHAT we see as viewers, we also focus on HOW the picture appears to us and why. Viewers and describers are involved in categorising and interpreting activities, and the steps in their descriptions serve presentational, orientational and organisational functions.
It is important to start with complex pictures and to analyse natural descriptive discourse. After that, we can also include the naturally occurring non-presentational foci, complex ideas and abstract concepts in our analysis. To look at the discourse level is essential, since it reveals the ongoing conceptualisation process and the way viewers and describers create meaning.
2.3 Multimodal score sheets

The aim of this study has been to conduct a qualitative sequential analysis of the temporal and semantic relations between clusters of visual and verbal data (cf. Chapter 6, Section 1 and Chapter 7, Section 1). For each person, eye movement data have been transformed to a visual flow on a timeline and the objects that have been fixated in the scene have been labelled. The spoken language descriptions have been transformed from the transcript observation format to a verbal flow on a timeline and the borders of foci and superfoci have been marked. As a result of these transformations, a multimodal time-coded score sheet with different streams can be created. A multimodal time-coded score sheet is a format suitable for synchronising and analysing visual and verbal data (for details see Holsanova 2001: 99f.). In comparison with ELAN, developed at the Max Planck Institute in Nijmegen, and other transcription tools with tiers, timelines and tags, I have built in the analysis of foci and superfoci, which is unique. Figure 3 illustrates what happens visually and verbally during a description of the three pictures of the old man Pettson, who is involved in various activities on the right-hand side of the picture. As we can see in Figure 3, the score sheet contains two different streams: it shows the visual behaviour (objects fixated visually during description on line 1; thin line = short fixation duration; thick box = long fixation) and the verbal behaviour (verbal idea units on line 2), synchronised over time.
Figure 3. Multimodal time-coded score sheet.
Since we start from the descriptive discourse level, units of discourse are marked and correlated with the visual behaviour. Simple bars mark the borders of verbal foci (expressing the conscious focus of attention) and double bars mark the borders of verbal superfoci (thematic clusters of foci that form more complex units of thought). On line 3, we find the coding of superfocus types (summarising, substantive, evaluative etc.). With the help of this new analytic format, we can examine what is in the visual and verbal attentional spotlight at a particular moment: configurations of verbal and visual clusters can be extracted, and the contents of the focused verbal idea flow and the visual fixation clusters can be compared. This score sheet makes it possible to analyse what is happening during preceding, simultaneous and following fixations when a larger idea is developed and formulated. Since the score sheet also includes types of superfoci, it is possible to track the functional distribution of the extracted verbal and visual patterns. This topic will be pursued in detail in Chapter 6. The multimodal time-coded score sheets are suitable for an analysis of processual aspects of picture viewing and picture description (and perhaps even for the dynamics of perception and action in general). So, for example, instead of analysing the result of the picture discovery of ‘three versions of Pettson’ in the form of a fixation pattern (Figure 4, Section 2.3.1) and the result of the verbal picture description in the form of a transcript (Example 1, Section 2.3.2), we are able to visualise and analyse the process of picture viewing and picture description on a time-coded score sheet (as seen in Figure 3). Let me explain this in detail.
2.3.1 Picture viewing

Figure 4 illustrates the fixation plot resulting from the description ‘three versions of Pettson on the right’.
Figure 4. Fixation plot: ‘Three versions of Pettson on the right’.
It shows us the path of picture discovery, i.e. the objects and areas that have been fixated by the viewer. This is, however, a static pattern, since it does not visualise exactly when and in what order they were fixated. The circles in Figure 4 indicate the position and duration of the fixations, the diameter of each fixation being proportional to its duration. The lines connecting fixations represent saccades. The white circle in the lower right corner is a reference point: it represents the diameter of a one-second fixation. Let me at this point briefly mention some basic information about eye movements. Eye gaze fulfils many important functions in communication: apart from being an important source of information during social interaction, gaze is related to verbal and non-verbal action (Griffin 2004). The reason why people move their eyes is “to bring a particular portion of the visible field of view into high resolution so that we may see in fine detail whatever is at the central direction of gaze” (Duchowski 2003: 3). Saccadic eye movements are important during picture viewing. They consist of two temporal phases: fixations and saccades. Fixations are stops, or periods of time when the point of regard is relatively still. The average fixation duration varies according to the activity we are involved in. Table 1 (based on Rayner 1992; Solso 1994; Henderson & Hollingworth 1999 and my own data) shows some examples of average fixation duration during different viewing activities.

Table 1. Average fixation duration in different activities.

Activity                                 Average fixation duration
silent reading                           225 ms
oral reading                             275 ms
visual search                            275 ms
scene perception                         300 ms
viewing of art                           300 ms
music reading                            375 ms
typing                                   400 ms
picture viewing & picture description    367 ms

The jumps between stopping points are called saccades. During saccades, the eyes move at a relatively rapid rate to reorient the point of vision from one spatial position to another. Saccades are very short, usually lasting 20–50 ms. It is during fixations that we acquire useful information, whereas our vision is suppressed and we are essentially blind during saccades (Henderson & Hollingworth 1998, 1999; Hoffman 1998). Usually, three types of visual acuity are distinguished: foveal vision, which encompasses a visual angle of only about 1–2 degrees, parafoveal vision, which encompasses a visual angle of up to 10 degrees, and peripheral vision, which encompasses a visual angle beyond 10 degrees (Henderson & Hollingworth 1999; Solso 1994).
Figure 5. Fixations during the sessions with four different informants.
In order to read or to study picture elements in detail, we have to position our eyes so that the retinal image of a particular element in the visual field falls on the fovea (a small, disk-shaped area of the retina that gives the highest visual acuity). The location where we look in a scene is partly determined by the scene constraints and region informativeness, partly by the task, instruction or interest. Viewers can deploy different visual paths through the same picture, since they extract information from those parts of the scene that are needed for their particular description. Figure 5 shows fixations during the sessions with four different informants. Apart from a fixation plot, fixation data can typically also be retrieved in the format of fixation files containing a list of fixations in their temporal order, accompanied by their location (x, y coordinates) and duration (in ms).
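Such fixation files presuppose that the raw 50 Hz gaze samples have already been segmented into fixations and saccades. The book does not spell out which algorithm the iView software uses; one common possibility is a dispersion-based procedure (often referred to as I-DT), sketched below. The thresholds are purely illustrative and would have to be calibrated to the actual recording setup.

```python
# Dispersion-based fixation detection over 50 Hz gaze samples (sketch only).
# Thresholds are illustrative and must be calibrated to the setup.
def detect_fixations(samples, max_dispersion=30.0, min_duration_ms=80,
                     sample_rate_hz=50):
    """samples: list of (x, y) gaze positions in pixels, in temporal order.
    Returns a list of (start_index, end_index, duration_ms) per fixation."""
    def dispersion(window):
        xs, ys = zip(*window)
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    min_samples = int(min_duration_ms * sample_rate_hz / 1000)
    fixations, start = [], 0
    while start + min_samples <= len(samples):
        end = start + min_samples
        if dispersion(samples[start:end]) <= max_dispersion:
            # grow the window while the points stay within the threshold
            while end < len(samples) and dispersion(samples[start:end + 1]) <= max_dispersion:
                end += 1
            duration_ms = (end - start) * 1000 / sample_rate_hz
            fixations.append((start, end - 1, duration_ms))
            start = end            # continue after the fixation
        else:
            start += 1             # saccade sample: slide the window forward
    return fixations
```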
2.3.2 Picture description

Let us now look at a result of the verbal description, the transcript of ‘three versions of Pettson on the right’.
Example 1. Transcript sample ‘Three versions of Pettson on the right’ (uttered during picture discovery illustrated in Figure 4).
0123 (1s) eeh to the right’                                 0:50   sum + loc
0124 there are three versions of (1s) old guy Pettson’     0:54
0125 who is working the land,                              0:56
0126 he is digging in the soil’                            0:59   subst. list
0127 he is (1s) eh raking’                                 1:02
0128 and then he is sowing,                                1:06
The spoken picture description has been recorded, transcribed and translated from Swedish into English. The transcript of the spoken description is detailed (Example 1). It includes verbal features, prosodic features (such as intonation, rhythm, tempo, pauses, stress, voice quality, loudness), and non-verbal features (such as laughter). It also contains hesitations, interruptions, restarts and other features that are typical of speech and that give us additional information about the speaker and the situational context. Each numbered line represents a new verbal focus expressing the content of active consciousness. Several verbal foci are clustered into superfoci (for example the summarising superfocus 0123–0125 or the list of items 0126–0128; see Chapter 1, Section 2.1 for definitions and details). However, both the visual and the verbal outcomes above, i.e. the fixation plot in Figure 4 and the transcript for ‘three versions of Pettson’ in Example 1, are static. In order to follow the dynamics of picture viewing and picture description, we need to synchronise these two data outcomes in time. Let us have a look at the same sequence in the new sequential format (Figure 6), where we ‘zoom in’ on what is happening in the visual and verbal streams. The boxes of different shading and different length with labels on the top line represent objects that were fixated visually. On the bottom line, we find the verbal foci and superfoci, including pauses, stress etc. Both the verbal and the visual streams are correlated on a timeline. With the help of the multimodal sequential method, we are able to extract several schematic configurations or patterns – both within a focus and within a superfocus – as a result of a detailed analysis of the temporal and semantic relations (cf. Holsanova 2001). Some of them can be seen in Figure 6, for example an n-to-one configuration between the visual and the verbal part in the second focus, when the viewer is introducing the three pictures. Notice that there is a large delay (or latency) between the visual fixation of the sowing Pettson (during the second verbal focus) and the verbal mention of this particular picture (in the sixth focus).
Figure 6. Schematic configuration on the multimodal time-coded score sheet: ‘Three versions of Pettson on the right’ with connections between verbal and visual foci and with markings of foci and superfoci.
In fact, the ‘sowing Pettson’ is not locally fixated in parallel to the verbal description within the same focus. Its mention is based on a previous fixation on this object, during the summarising focus ‘there are three versions of (1s) old guy Pettson’. The ability to keep track of this referent gives us some hints about the capacity of working memory. For more details about the temporal and semantic relations, see Chapters 6 and 7.
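The interval logic behind these score sheets is simple enough to model computationally. The following sketch is my own illustration, not the analysis tool used in this book: it represents the two streams as time-stamped intervals and retrieves, for a verbal focus such as 0124, the fixations that overlap it in time – the n-to-one configuration discussed above. All names and time stamps are hypothetical.

```python
# Illustrative model of a multimodal time-coded score sheet:
# two streams of time-stamped intervals (in seconds from session start).
from dataclasses import dataclass

@dataclass
class Fixation:          # visual stream (line 1 of the score sheet)
    target: str          # label of the fixated object, e.g. "Pettson digging"
    onset: float
    offset: float

@dataclass
class VerbalFocus:       # verbal stream (line 2), with superfocus coding (line 3)
    text: str
    onset: float
    offset: float
    focus_type: str      # e.g. "sum", "subst list", "eval"

def fixations_during(focus, fixations):
    """Return the fixations whose time span overlaps the verbal focus."""
    return [f for f in fixations
            if f.onset < focus.offset and f.offset > focus.onset]

# Usage: the summarising focus 0124 with three fixations, as in Figure 3
# (time stamps are invented for the example).
focus = VerbalFocus("there are three versions of old guy Pettson", 50.0, 54.0, "sum")
visual = [Fixation("Pettson digging", 49.2, 51.0),
          Fixation("Pettson raking", 51.0, 52.4),
          Fixation("Pettson sowing", 52.4, 53.1)]
print([f.target for f in fixations_during(focus, visual)])  # an n-to-one configuration
```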
3. Multimodal sequential method
In a dynamic sequential analysis, we can answer the following questions: What does the process of picture viewing and picture description look like? Where do we start, how do we structure our thoughts about the picture, what do we focus on and in what order, how do we understand the contents of the picture and how do we verbalise it to different audiences? What does the sequential build-up of the visual examination and the verbal descriptions usually look like? Can we find any general patterns among the individual paths? Are there different (visually and cognitively based) scanning strategies in separate stages of picture viewing and picture description? A multimodal sequential method combines verbal data (successive clusters of verbal foci) and visual data (successive clusters of visual fixations) in order to explain what happens visually and verbally in an individual session at a certain point in time. It makes it possible to analyse how the attentional ‘spotlight’ has been cast on picture elements and areas in the course of the visual scanning and the simultaneous verbal description. In particular, we can look closely at the temporal and semantic relations between the visually attended and verbally focused objects and explore whether there is a correspondence between these two kinds of data. By conducting a detailed sequential analysis, it is possible to obtain insights into the dynamics of the underlying cognitive processes. In addition, this method allows us to follow how informants bridge the gap between visual information gathering and speech production. We can examine how the informants successively verbalise their thoughts about the picture or a scene. The combination of the data illuminates mental processes and attitudes and can thus be used as a sensitive evaluative tool for understanding the dynamics of the ongoing perception and categorisation process (cf. Holsanova 2006). By combining the contents of the verbal and visual attentional spotlight, we get a reinforcement effect. By using ‘two windows to the mind’, we obtain more than twice as much information about cognition, since vision and spoken language interact with each other.
Figure 7. Multimodal time-coded score sheet of keystroke data and eye tracking data.
In this way, we can gain more detailed information about the structure of thoughts and the process of generating or accessing them. The multimodal time-coded score sheet (Holsanova 2001) was implemented in a recent project on the dynamics of perception and production in on-line writing, where we studied the interaction between writing and gazing behaviour in 81 subjects, balanced for age and gender and including dyslectics and controls. Instead of describing pictures orally, the informants produced a written picture description. The tool provided an overview of how the writers’ attention was distributed between the stimulus picture, the keyboard, the computer monitor and elsewhere during the writing session (cf. Andersson et al. 2006). Let us look at an example from the written data. The informant has just started a new paragraph. The first formulation in the new paragraph is ‘Mitt på bilden finns det också <2.021> kor <3.031> <4.210> som är bruna och vita <14.351>.’ (‘In the middle of the picture, there are also <2.021> cows <3.031> <4.210> that are brown and white <14.351>.’). Figure 7 shows us how the visual attention was distributed during writing and during pauses. The first stream shows visual behaviour on the stimulus picture, the second stream on the monitor, and the third on the keyboard. The last two streams show writing activity. In the beginning, ‘Mitt på bilden finns det också [...]’ (‘In the middle of the picture there is also [...]’), the subject tends to look at the keyboard, whereas later on she mainly looks at the monitor when writing. Before naming the objects and writing the word ‘kor’ (‘cows’), this person stops and makes a short pause (2.02 sec) in order to revisit the stimulus picture. She may, for example, be checking whether the cattle in the picture are cows, bulls, calves or perhaps sheep. After the visual information gathering is accomplished, she finally writes ‘kor’ (‘cows’). After writing this, she distributes her visual attention between the screen and the keyboard, before she finishes the sentence by writing ‘som är bruna och vita’ (‘that are brown and white’). Apart from the process of writing, this method has been used for multimodal dialogue systems. Qvarfordt (2004) studied the effect of overlaying eye-gaze on spatial information (a map) in a collaborative user interface in order to find out whether users can take advantage of the conversational partner’s gaze information. As a result, she found important functions of users’ eye-gaze patterns in deictic referencing, interest detection, topic switching, ambiguity reduction, and establishing common ground in a dialogue. If we incorporate different types of foci and superfoci in the analysis, we can follow the eye gaze patterns during various mental activities (cf. Chapter 6, Section 2).
Interface design and human factors are therefore yet another area where this method could be applied – when testing how nuclear power plant operators react in a simulated scenario, measuring situation awareness of operators in airport towers, evaluating computer interfaces or assessing architectural and interface design (Lahtinen 2005). A further possible application lies within the educational context (tests of language development, text–picture integration, etc.). It would also be possible to add a layer for gestures and for drawing in order to reveal practitioners’ expert knowledge by analysing several verbal and non-verbal activities: designers sketching and verbally explaining a structure for a student, archaeologists visually scanning and verbally describing structures on a site while simultaneously pointing and gazing at them, or radiologists visually scanning images and verbally describing the anomalies in an examination report. All these multimodal actions involve several mental processes, including analytic phases, planning, survey phases, constructive phases, monitoring phases, evaluative phases, editing phases and revision phases. It would be interesting to synchronise different streams of behaviour (verbal, visual, gestural, other non-verbal) in order to investigate the natural segmentation of action into functional phases or episodes, and to learn more about individual strategies and patterns and about the distribution of the underlying mental processes. Finally, multimodal score sheets can also be applied within scene perception and reasoning. The selection of informative regions in an image or in a scene is guided both by bottom-up and top-down processes such as internal states, memory, tasks and expectations (Yarbus 1967). Recorded eye movements of human subjects with simultaneous verbal descriptions of the scene can reveal conceptualisations in the human scene analysis that, in turn, can be compared to system performance (Schill 2005; Schill et al. 2001). The advantages of using a multimodal method in applied areas are threefold: it gives more detailed answers about cognitive processes and the ongoing creation of meaningful units, it reveals the rationality behind the informants’ behaviour (how they behave and why, what expectations and associations they have) and it gives us insights about users’ attitudes towards certain layout solutions (what is good or bad, what is easy or difficult etc.). In short, the sequential multimodal method can be successfully used for a dynamic analysis of perception and action in general.
4. Conclusion
This chapter has been an introduction to a multimodal sequential method used for the comparison of visual and verbal data. This method has been used in a study of simultaneous picture description with eye tracking in order to throw light on the underlying cognitive processes. A complex picture and free descriptive discourse served as starting points. In order to investigate the dynamics of picture viewing and picture description, the spoken language description and the eye movement data were transformed and synchronised on a timeline, and the borders between different foci and superfoci were marked. The multimodal time-coded score sheet enables us to synchronise visual and verbal behaviour over time, to follow and compare the content of the attentional spotlight and to extract clusters in the visual and verbal flow. The aim of the current chapter was to explain the principles and importance of a multimodal sequential method enriched with the analysis of foci and superfoci. Verbal and visual data have been used as two windows to the mind in order to gain insights about the underlying cognitive processes. In the next two chapters, this method will be used to reveal principles involved in information processing. In particular, we will investigate the temporal and semantic correspondence between visual and verbal data.
chapter 6
Temporal correspondence between verbal and visual data
In our everyday life, we often look ahead, pre-planning our actions. People look ahead when they want to reach something, identify a label, open a bottle. When playing golf, rugby, cricket, chess or football, the players usually do not follow the track of the moving object but rather fixate the expected future position where the object should land (Tadahiko & Tomohisa 2004; Kiyoshi et al. 2004). Piano players and singers read the next passage in the score, and their eyes are ahead of their hands and voices (Berséus 2002; Goolsby 1994; Pollatsek & Rayner 1990; Sloboda 1974; Young 1971). Last but not least, copy-typists’ eyes are ahead of their fingers (Butsch 1932; Inhoff & Gordon 1997). In sum, predictions and anticipatory visual fixations are frequent in various types of activities (Kowler 1996). The question is how picture viewing and picture description are coordinated in time. Do we always look ahead at a picture element before describing it verbally, or can the verbal description of an object be simultaneous with the visual scanning? Does it even happen that eye movements follow speech?
By now, the reader has been presented with different types of foci and superfoci, with the sequential qualitative method and the analytic tool, and with the multimodal time-coded score sheet, all of which have been described in the previous chapters. In the following two chapters, I will use this method to compare the content of the visual focus of attention (specifically clusters of visual fixations) and the content of the verbal focus of attention (specifically verbal foci and superfoci). Both the temporal and the semantic correspondence between the verbal and visual data will be investigated. In this chapter, I will primarily concentrate on temporal relations. Is the visual signal always simultaneous with the verbal one? Is the order of the objects focused on visually identical with the order of objects focused on verbally? Can we find a comparable unit in visual and verbal data? It is, of course,
difficult to separate temporal and semantic aspects in the analysis. We have to consider the ‘contents’ of the verbal and visual foci on the timeline. Nevertheless, I will focus on the temporal aspects first and return to the semantic relations in more detail in the following chapter. The inspiration for this study comes from Chafe’s idea that there is a similar way for us to acquire, recall and verbalise information: “All three processes may be guided by a single executive mechanism which determines what is focused on, for how long, and in what sequence” (Chafe 1980: 16). His suggestion implies a temporal and semantic correspondence between the verbal and the visual data stream. From the examples in Chafe (1980) we can also deduce that the idea unit (verbal focus in our terminology) is the suggested unit of comparison. In order to test this, I compared patterns during the visual scanning of a picture and during the verbal description of it. The aim was to find out what is in the visual and verbal attentional spotlight at a particular moment and to answer the question about comparable units. First, I will go through an inventory of configurations extracted from the data on simultaneous picture description with eye tracking, starting with configurations at the lowest level, within a focus (Section 1.1), and then move higher up in the discourse hierarchy, towards the superfocus (Section 1.2). I will illustrate them graphically on a timeline, mention their frequency distribution in the data and look at their relation to the preceding and following unit. Second, I will analyse the functional distribution of the configuration types by connecting them to verbal activity types. The question under investigation is whether certain types of patterns reflect a certain type of mental activity. Third, I will discuss how the typology of multimodal clusters can be used to study mental activity.
1. Multimodal configurations and units of visual and verbal data
Let us start with the inventory of configurations extracted from the data. The multimodal time-coded score sheet presented in the previous chapter showed connections between verbal and visual foci in detail. Here, the various configurations will be presented in a more simplified, schematic way. In this simplified version of the score, there are only two streams: the visual and the verbal behaviour. The discrete units compared (boxes on the timeline) represent the objects that at a certain moment are in the focus of visual and verbal attention.
1.1 Configurations within a focus
Since we have deduced from Chafe (1980) that the verbal focus is a candidate for a unit of comparison between verbal and visual data, we will start on the focus level. In particular, we will compare the verbal focus (an idea about the picture element that is conceived of as central at a certain point in time and delimited by prosodic, acoustic and semantic features) with the temporally simultaneous visual focus (a cluster of visual fixations directed onto a discrete object in the scene).
1.1.1 Perfect temporal and semantic match
We are now ‘zooming in’ on the verbal and visual streams on the score sheet. The ideal case is one where there is a correspondence or a perfect match between what is seen and what is said. Such a perfect temporal and semantic match would result in the overlap configuration seen in Figure 1. However, this kind of perfect temporal and semantic overlap is very rare – in fact, it appears in only 5 percent of foci. If it occurs, it is often a part of a substantive focus (subst) or summarising focus (sum).

Figure 1. Perfect match.

1.1.2 Delay between the visual and the verbal part
In the next configuration extracted, there is a delay between the visual and the verbal stream when focusing on an identical object (Figure 2). In other words, the object (the first Pettson figure to the left) is fixated prior to its verbalisation (Pettson). The delay configuration is well represented in free picture descriptions, but there is no constant latency throughout the data. Therefore, we cannot reset or align the whole spoken data set automatically in order to compare the semantic simultaneity of the visual and verbal units. The delay configuration appears in 30 percent of foci in the on-line description and is typical of certain parts of the descriptions, especially of the list of items and substantive foci (subst). One further characteristic is that this configuration usually does not appear alone, within one focus, but rather as a series, as a part of a superfocus (see configuration 1.2.2).

Figure 2. Delay.

Eye-voice latency (or eye-voice span) has been reported in a number of psycholinguistic studies using eye tracking, but the average duration differs in different activities. It has, for instance, been observed that the eye-voice latency during reading aloud is 750 ms and during object naming, 900 ms (Griffin & Bock 2000; Griffin 2004). As we have seen in picture viewing and picture description, the eye-voice latency in list foci lasts for about 2000–3000 ms. In our mental imagery studies (Johansson et al. 2005, 2006), the eye-voice latency was on average 2100 ms during the scene description and approximately 300 ms during the retelling of it (cf. Chapter 8, Section 2.1). The maximum value across all subjects was 5000 ms in both phases.

There are interesting parallels in other areas of non-verbal behaviour, such as gesturing, signing or using a pen. The finding that the verbal account lags behind the visual fixations can be compared to Kendon’s (1980), Naughton’s (1996) and Kita’s (1990) results, which reveal that both spontaneous gesturing and signed language precede their spoken lexical analogues during communication. The eye-hand latency in copy-typing ranges between 200 and 900 ms (Inhoff & Gordon 1997). Also, when users interact with a multimodal system, pen input precedes their speech by one or two seconds (Oviatt et al. 1997).

The question that arises here is why the latency in picture viewing and picture description is so long compared to studies in reading and object naming. What does it measure? Meyer and Dobel (2003: 268) write: “When speakers produce sentences expressing relationships between entities, this formulation phase may be preceded by a conceptualization or appraisal phase during which speakers aim to understand the event, assign roles to the event participants and possibly, select a verb.” Conceptualisation and formulation activities during utterance production might be one explanation, but the length of the lag indicates that there are more complex cognitive processes going on at a discourse level. Other possible explanations for the length of the lag will be discussed in detail in section three of this chapter.
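For readers who wish to experiment with such measurements, one possible operationalisation of eye-voice latency, building on the interval representation sketched above, is the following. This is a sketch only; the pairing rule (here, the nearest preceding visual cluster on the same referent) is one of several reasonable choices and is not the procedure used in this study:

    def eye_voice_latency(visual_stream, verbal_stream):
        """For each verbal focus, compute the latency (in seconds) from the
        onset of the nearest preceding visual cluster on the same referent."""
        latencies = {}
        for vf in verbal_stream:
            candidates = [v for v in visual_stream
                          if v.referent == vf.referent and v.onset <= vf.onset]
            if candidates:
                nearest = max(candidates, key=lambda v: v.onset)
                latencies[vf.referent] = vf.onset - nearest.onset
        return latencies

    # With the sample streams above, the verbal focus on "stone" (onset 12.3 s)
    # is paired with the refixation starting at 12.1 s, giving roughly 0.2 s;
    # pairing with the first fixation (10.2 s) would instead give 2.1 s, in the
    # range reported here for list foci.

As the comment indicates, the measured latency depends heavily on whether the preparatory fixation or the refixation is taken as the visual anchor, which is one reason why latencies at the discourse level are hard to compare directly with single-word naming latencies.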
1.1.3 Triangle configuration
So far, we have been looking at a single cluster of fixations within a verbal focus. The most common case is, however, that multiple visual clusters are connected to one verbal focus and directed towards different objects. The ‘triangle’ configuration in Figure 3 encompasses two visual foci and one verbal focus. It has been formulated as a part of a superfocus consisting of two foci: ‘in front of the tree’ ‘there is a stone’. The size of the visual fixation cluster is limited by the size of the verbal focus (< 2 sec). If the cluster is longer than that, then the fixations exceeding the focus border belong to the next focus. The first visual fixation on the object seems to be a preparatory one. The refixation on the same object then occurs simultaneously with the verbal description of this object. The two white boxes in between are anticipatory fixation clusters on other objects that are going to be mentioned in the next verbal focus. The observer may at the time be looking ahead and pre-planning (cf. Velichkovsky, Pomplun & Rieser 1995).

Figure 3. Triangle configuration: In front of the tree there is a stone.

Thus, it seems like a large proportion of the categorisation and interpretation activities has already taken place during the first phase of viewing (the first visual fixation cluster), whereas the later refixation – simultaneous with the formulation of ‘stone’ – occurs in order to increase the saliency of ‘stone’ during speech production. The issue of ‘anticipation’ – fixations on a forthcoming referent – has been shown in auditory sentence comprehension (Tanenhaus et al. 2000). In the above configuration, we see a typical example of anticipation in on-line discourse production: the visual cluster on white objects ‘interferes’ temporally and semantically with the current focus (more about semantic relations will be said in the following chapter). The triangle configuration is typical of substantive verbal foci (subst) and localising foci (loc) and represents 17 percent of all presentational foci. As we will see later (in configuration 1.2.3), it usually represents only a part of a superfocus and is often intertwined with fixations from other foci. Already in this configuration we can see that a 1:1 correspondence between verbal and visual foci will not hold. The process of attentional focusing and refocusing on picture elements seems to be more complicated than that.
Figure 4. Intertwined n-to-1 mapping.
1.1.4 N-to-1 mappings
As stated above, the clusters consisting of multiple visual fixations and one verbal focus are very frequent in the data. The configuration called n-to-1 mapping (Figure 4) represents 37 percent of all foci in the on-line description. It usually appears as a part of a summarising focus (sum) or a substantive focus (subst), is well integrated into a superfocus and intertwined with other foci (see configuration 1.2.4).

1.1.5 N-to-1 mappings during pauses
The spoken picture description is sometimes interrupted by pauses and hesitations typical of language production, but the eyes do not stop moving to new fixation points. In other words, the visual examination takes place even during pauses in the verbal description. The next configuration (Figure 5) is an example of that. The visual scanning is done with multiple fixations and information is acquired both during a pause (illustrated by a broken line) and simultaneously with the verbal description. This configuration often appears in a summarising superfocus (sum), which is preceded by a long pause, or in substantive foci (subst). It is the most frequent type of configuration, appearing in 37 percent of the presentational foci. It does not occur in isolation but is intertwined with fixations on other objects or areas belonging to the same superfocus (see the white boxes Pettson 2 and Pettson 3, which are going to be in focus during the following verbal focus).
Figure 5. N-to-1 mappings during pauses.

Figure 6. Pettson is digging.
1.1.6 N-to-1 mappings, rhythmic re-examination pattern
A special case of multiple visual fixations and one verbal focus is the rhythmic repetitive pattern on large complex objects (cf. Figure 6: Pettson is digging). Parts of one and the same object are fixated in a repetitive pattern with short fixations. In the process of describing Pettson and his activity, the viewer in a rhythmical sequence fixates the hand-tool-soil, hand-tool, hand-tool-soil and by doing this, creates a schema for a digging activity. This rhythmic sequence can be easily identified when looking at the video. This brings to mind Kahneman’s (1973: 65) statement that “the rate of eye movements often corresponds to the rate of mental activity”. The viewer’s eye movements seem to mimic the functional relationships between these parts of the whole figure. This rhythmical pattern appears as a part of a substantive focus (subst).

After this first review of clusters within the focus, we can return to the hypothesis concerning comparable units. One conclusion we can draw from the analysis above is that the focus is often not an appropriate unit for comparison. There is very seldom a 1:1 relation between verbal focus and visual focus (i.e. a single cluster of visual fixations on one object in the scene). The foci are intertwined in larger wholes. Therefore, we have to search for a more complex unit.
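The within-focus configurations reviewed above lend themselves to a rough operational definition. The following sketch uses hypothetical thresholds (a 3-second delay window and a 300 ms onset tolerance, chosen only for illustration, not the criteria applied in this study) to label one verbal focus on the basis of the visual clusters temporally associated with it:

    def classify_focus(verbal_focus, visual_stream, max_delay=3.0, tolerance=0.3):
        """Label one verbal focus with a within-focus configuration, based on
        the visual clusters that overlap it or precede it within max_delay."""
        related = [v for v in visual_stream
                   if v.onset < verbal_focus.offset
                   and v.offset > verbal_focus.onset - max_delay]
        same = [v for v in related if v.referent == verbal_focus.referent]
        other = [v for v in related if v.referent != verbal_focus.referent]
        if len(same) >= 2 and other:
            # preparatory fixation and refixation on the referent, with
            # anticipatory clusters on other objects in between
            return "triangle"
        if len(related) >= 2:
            return "n-to-1"
        if len(same) == 1:
            if abs(same[0].onset - verbal_focus.onset) <= tolerance:
                return "perfect match"
            if same[0].onset < verbal_focus.onset:
                return "delay"
        return "unclassified"   # n-to-n patterns only emerge at superfocus level

Applied to the sample streams introduced earlier, the verbal focus on ‘stone’ would be labelled a triangle: two clusters on the stone with an intervening cluster on the tree.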
1.2 Configurations within a superfocus
The next higher unit in the hierarchy to consider is the superfocus (a larger coherent chunk of speech consisting of several verbal foci). Configurations 1.2.1–1.2.5 below exemplify more complex patterns extracted from the data within a superfocus, i.e. the sequences of foci delimited by double bars on the score.
1.2.1 Series of perfect matches
The configuration where visual and verbal data match temporally (Figure 7) is quite rare. Occasionally, it appears in the list superfoci, but only if the objects have been fixated in the preceding summarising superfocus.
Figure 7. Series of perfect matches.
Figure 8. Series of delays.
1.2.2 Series of delays
When speaking about plural objects, e.g. about the three birds which are situated close to each other in the central part of the tree, the describers keep their eye fixations within the central parts of the tree and make several short and long fixations and refixations on the birds. Sequentially analysed, visual and verbal foci build a series of delays (Figure 8). This complex configuration is quite frequent and is very typical of a list superfocus after a summarising focus. It reminds us of a ‘counting-like’ behaviour: each item on the list is briefly ‘checked off’ visually before it is mentioned verbally. It is not unusual that the same objects that were initially perceived and described on a general level (in a summarising focus) are then checked once more and described on a detailed level in a following superfocus with a list of items (list). These two superfoci are concerned with the same referents and the same global topic and are thus coherent. The informants dwelled with their eyes within the same area of interest for as long as those two superfoci were formulated. After a global description of the area, the observers mentally ‘zoomed in’ and described the details on a finer scale. One and the same area scanned during a summarising focus (sum) and a list of items (list) could thus be conceived of as a region and a subregion.

Also, it seems that the cluster in a list is dependent on how the picture was scanned under a summarising superfocus. It can either have the form of a typical series of delay clusters with short dwells, suggesting that the observer has acquired information from these parts previously and is now only ‘checking off’ the items (e.g. three birds). Or, it proceeds in long visual fixation clusters with a shorter latency between the visual and the verbal part. This variant suggests that the observer has not analysed these areas thoroughly the first time s/he entered this area (i.e. during the summarising overview). Instead, s/he returns to them and analyses them in detail later on.

One of the varieties of this configuration can be seen in Figure 9. The speaker is describing three birds doing different things in a summarising focus and continues with a list: one is sitting on its eggs, the other is singing and the third female bird is beating a rug or something. The visual dwelling on the third bird is very long, consisting of inspections of various details of the bird’s appearance that lead to the categorisation ‘she-bird’. The delay configurations are combined with one 1-to-n configuration.

Figure 9. Series of delays combined with 1-to-n mapping.

During the summarising (sum) and list foci in on-line descriptions, informants were inspecting one and the same area of interest twice but on two different levels of specificity. This finding about two levels of viewing is fully consistent with the initial global and a subsequent local phase hypothesised by Buswell (1935), who identified two general patterns of perception:

One of these consists of a general survey in which the eye moves with a series of relatively short pauses over the main portions of the picture. A second type of pattern was observed in which series of fixations, usually longer in duration, are concentrated over small areas of the picture, evidencing detailed examination of those sections. While many exceptions occur, it is apparent that the survey type of perception generally characterises the early part of an examination of a picture, whereas the more detailed study, when it occurs, usually appears later. (Buswell 1935: 142)
1.2.3 Series of triangles
In my data from free picture descriptions, series of triangles are quite frequent. This configuration is common in localising foci (loc), in substantive superfoci (subst) and in evaluative foci (eval). As we can see in Figure 10, the foci are intertwined. The informant is describing the lower central part of the picture in a localising, evaluative and substantive focus: ‘In front of the tree which is curved is a stone’, and simultaneously making multiple visual fixations on the tree and the stone.

1.2.4 Series of N-to-1 mappings
The configuration we can see within the double bars of a superfocus is a complex variant of a configuration in which multiple visual fixations are connected to one verbal focus. This configuration is typical of the summarising superfocus (sum). Figure 11 shows an addition of two n-to-1 configurations within a summarising superfocus and exemplifies the process of picture discovery during the verbal formulation ‘… in the middle is a tree with one … with three birds doing different things’. Neither did the describer plan the summarising description beforehand, nor did he wait until he had inspected the whole tree area and counted all of the animals depicted there. Instead, he formulated the idea about the tree, checked the tree again visually, noticed one of the birds there and directly started to formulate his new idea about the bird, which then had to be corrected on-line with respect to the number of animals (‘with one … with three birds doing different things’).

The pattern in Figure 12, ‘… I see a tree in the middle … and four men working around it’, is even more intertwined. The superfocus (between the double bars) consists of two foci. Note that some objects are partly fixated during pauses in the spoken description (tree) and some objects described in the second focus are partly fixated during the preceding focus (Pettson 1, Pettson 3), long before they are integrated in the summarising formulation. The consequence of this intertwined structure is that (a) there is no constant temporal relation between the visual and the verbal foci because of the partial lack of the verbal signal during a pause and (b) there is a preparatory visual fixation on Pettson 1 and Pettson 3 during the ‘tree’ sequence. If we, however, look at the superfocus as a whole, we find agreement.
Figure 10. Intertwined series of triangles.
Figure 11. Intertwined series of n-to-1 mappings.
Figure 12. Intertwined n-to-1 mappings.
Figure 13. N-to-n mapping.
1.2.5 Series of N-to-N mappings
The last configuration (Figure 13) consists of multiple visual and multiple verbal foci and is called n-to-n mapping. This configuration is typically found in substantive foci with categorisation difficulties (cat. diff). It represents 11 percent of all foci. The fact that the describer has spent a whole superfocus describing only one object can be a sign of categorisation problems. When categorising a special kind of a plant (tree, different flowers), an animal (cat, insect) or non-animate things in the picture (telephone line), the informants try to solve their problems on-line. The problems connected to categorising and labelling objects result in multiple and long visual fixation clusters on several parts of the object and repetitive attempts to verbally categorise the object. In the extracted verbal superfocus (1) below, the describer is trying to specify the kind of tree while looking at the foliage and the branch on the left.

(1) 0358 (4s) ehm’ this tree seems to be a sort of ’ 2:42
 0359 (2s) I don’t remember what it is called haha, 2:44
 0360 there are those things hanging here, 2:46
 0361 sort of seed/ seed things in the tree, 2:50 subst. categ. problems
Pauses, hesitation, vagueness and modifications (those things, sort of), metacomments, corrections and verbatim repetitions are typical of this type of verbal superfocus. In other words, there are multiple visual and verbal activities. More specifically, multiple visual fixation clusters are accompanied by linguistic alternation and dysfluencies. Apart from configurations within the superfocus, correspondence could be found on an even higher level of discourse, namely the discourse topics.
The describers can be guided by the composition of the picture and move systematically (both visually and verbally) from the left to the right, from the foreground to the background (Holsanova 2001: 117f.). The discourse topic can also be based on impressions, associations or evaluations in connection to the scene.
1.3 Frequency distribution of configurations
Hereby, the inventory of the configurations is concluded. I have already mentioned how frequent the particular configurations were in the data. The diagram in Figure 14 gives the reader a summarising overview of the frequency distribution of the various configurations in the data. The most frequent type of configuration was n-to-1 mapping (37 percent of the foci), followed by delay (30 percent of the foci), triangles (17 percent of the foci) and n-to-n mapping (11 percent of the foci). The least frequent type of configuration was perfect match (5 percent of the foci).

Figure 14. Frequency distribution of the configuration types in free on-line picture descriptions.

When we compare configurations within a focus and configurations within a superfocus, we can notice the following: one visual fixation usually does not match one verbal focus and a perfect match is very rare. There is rather an asymmetry between them, in the sense that several visual fixations are usually connected to one verbal focus, in particular in summarising foci. The foci are often intertwined, which causes a partial lack of qualitative correspondence between the visual and verbal foci. The configurations where multiple visual foci are connected to one verbal focus and intertwined with each other are well integrated into a larger unit, into a superfocus. Explanations for intertwined configurations can be found in discourse planning. If one visual fixation cluster is connected to one verbal focus, there is usually a latency of about 2–3 seconds between them, in particular in lists of items. This delay configuration is in turn a part of a larger unit, the surrounding superfocus, and the length of the delay reflects cognitive processes on a discourse level.

Looking back at Chafe’s (1980) suggestions, the conclusion we can draw from this comparison is that the verbal focus about one object does not always closely correspond to a single visual focus, i.e. a visual fixation cluster on this object. Also, the order in which the picture elements are focused on differs partially. The hypothesis that there are comparable units in the visual scanning of the picture and the simultaneous spoken language description of it can still be confirmed, but on a higher level. It is the superfocus rather than the focus that seems to be the suitable unit of comparison between visual and verbal data, since in most cases the superfocus represents an entity that delimits separable clusters of visual and verbal data.
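This claim can be made empirically explicit by aggregating both streams over a superfocus and comparing the sets of referents involved: if the superfocus is the right unit, the two sets should largely coincide even when the order and timing of the individual foci diverge. A minimal sketch, under the same assumed representation as above:

    def superfocus_agreement(visual_foci, verbal_foci):
        """Compare the referents fixated with the referents mentioned within
        one superfocus (the foci between double bars on the score sheet)."""
        fixated = {v.referent for v in visual_foci}
        mentioned = {v.referent for v in verbal_foci}
        return {
            "shared": fixated & mentioned,
            "fixated only": fixated - mentioned,    # inspected but never verbalised
            "mentioned only": mentioned - fixated,  # verbalised without a recorded fixation
        }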
We will now turn to the second section of this chapter, which focuses on the functional distribution of the extracted configurations.
2. Functional distribution of multimodal patterns
In the first section of this chapter, I characterised the configurations of verbal and visual data on a focus and a superfocus level. In this section, I will take a closer look at the relation between verbal activity type and type of multimodal pattern. While analysing the distribution of the multimodal patterns in the data, the question arose whether these clusters occur regularly and can be related to specific types of mental activities. The question is: Can we find configurations typical of summarising, comparing, reconceptualising, problem solving, listing, and evaluating? Can these configurations give us hints about the type of mental activity that is going on? I will thus reverse the perspective now and take the various types of verbal superfoci in all informants’ descriptions as a starting point in order to check whether we can identify integration patterns that are typical of certain kinds of verbal activities.

To start with, let us look at the distribution of the different focus types in the on-line picture descriptions with eye tracking. The foci expressing the presentational function (summarising, substantive, localising foci, list of items, substantive foci with categorisation difficulties) and the orientational function (evaluative foci) represent 92 percent of all foci (or, in absolute numbers, 186 foci). Foci associated with the organisational function are excluded, since they are often concerned with procedural questions about the discourse.

As a next step, we will look at some relevant characteristics of the visual and verbal stream in the various configuration types. The clusters are described in terms of the following verbal and visual parameters:

i. The characteristics of the verbal data stream, including length of verbalisation (sec), pauses, number of verbal foci, kind of verbal complexity (level of embedding, repetitive sequences, lexical or syntactic alternation).
ii. The characteristics of the visual data stream, including the number of fixation clusters, duration of fixation clusters, refixations, order of objects focused on.

Table 1 summarises the functional distribution of multimodal patterns in different activity types and gives a short description of the visual and verbal streams. The characteristics of the visual foci are based on the distinction between short (150–400 ms), middle (401–800 ms) and long fixation clusters or dwells (801–1000 ms) found in the data. These averages might seem rather long compared to results in scene perception and naming tasks. But we may bear in mind that eye movements have various functions in connection to speech production (Griffin 2004). Apart from naming-related eye movements (Griffin & Bock 2000), eye movements are used in a ‘counting-like’ function when describing multiple picture elements, as a support in the process of categorisation and re-categorisation, self-corrections etc. (cf. Chapter 7, Section 4.3).

Figure 15 shows the frequency distribution of configuration types as a function of verbal activity types. The closest relation between the contents of the visual and verbal units can intuitively be found in substantive foci (subst). However, as we can see in Table 1, it is not so easy to identify a uniform reoccurring pattern there. All five configurations appear here, with n-to-1 as the dominant one and delay as a quite frequent one.
Table 1. Multimodal patterns and activity types.

Activity type | Cluster type | Characteristics of the visual and verbal stream
sum (summarising foci) | series of n-to-1 mappings | many short dwells (150–400 ms); all items checked; mostly one verbal focus
subst (describing objects, states and events) | n-to-1 mappings, delay, triangles | many short dwells (150–400 ms); one or multiple verbal formulations
list (a) (listing details of objects, states and events) | series of delays (2–3 sec) | many short dwells (150–400 ms); some middle dwells (500–800 ms); multiple verbal foci
list (b) (reformulating descriptions of objects, states and events) | series of delays (2–3 sec) | short, middle and long dwells (150–400, 500–800, 800–1000 ms); verbal development, reformulations
loc (descriptions of the location of objects) | series of triangles | multiple dwells, refixations; short, middle and long fixation clusters (150–400, 500–800, 800–1000 ms); single or multiple verbal foci
cat. diff. (categorisation difficulties, problem solving) | series of n-to-n mappings | series of visual and verbal foci; many short (150–400 ms) and long (800–1000 ms) dwells, intensive scanning; many verbal reformulations, corrections, pauses, hesitations
eval (evaluating objects, states and events, expressing attitudes) | n-to-1, triangles, 1-to-n | many short dwells (150–400 ms) or long dwells (800–1000 ms); multiple verbal formulations
The reason is probably that substantive foci can contain evaluative and localising aspects. The patterns seem to vary according to the different aspects included. However, when we look at the summarising foci (sum), list of items (list) and foci with categorisation difficulties (cat. diff.), the configurations appear regularly across all informants and can be systematically correlated with certain types of verbal activities.
Figure 15. Frequency distribution of configuration types as a function of verbal activity types.
For summarising foci (sum), the n-to-1 configuration appears to be typical. This coupling is quite plausible: the information about objects, states and events is acquired visually by multiple fixation clusters on several objects in the scene, both during pauses and during the verbal description, and is summarised verbally, usually in one verbal focus.

For lists of items, the delay configuration dominates. This can be explained by a counting-like behaviour: the informants have inspected the objects during a summarising focus and are ‘checking’ the items off the ‘list’. Since they are describing these objects in detail now, they have to check the appearance, relation and activity of the listed objects. This categorisation, interpretation and formulation costs mental effort, which is reflected in the delay configuration.

Triangles are typical of localising foci (loc). Within one verbal focus highlighting an idea about an object’s location in the scene (e.g. in front of the tree there is a stone), the viewers visually inspect the stone area before mentioning ‘stone’. During this first inspection, the inspected object functions as a pars pro toto, representing the in-front-of-the-tree location (for more details about this semantic relation, see the following chapter). After that, the informants inspect some other objects in the scene (which will be mentioned in the following foci), and finally devote the last visual inspection to the stone again, while simultaneously naming it. The mental activities seem to be divided into a preparatory phase, when most of the categorisation and interpretation work is done, and a formulation phase during the refixation.

Categorisation difficulties are exclusively connected to the cluster type n-to-n mappings. Concerning the characteristics of the verbal and visual stream, multiple verbal and visual foci are typical of this type of activity, where visual fixations were either short/medium or very long (intensive scanning). For the verbal stream, repetitive sequences, lexical or syntactic alternation and sometimes even hyperarticulation are characteristic. When determining a certain kind of object, activity or relation, the effort to name and categorise is associated with problem solving and cognitive load. This effort is often manifested by four or five different verbal descriptions of the same thing within a superfocus.

In short, tendencies towards reoccurring multimodal integration patterns were found in certain types of verbal superfoci during free picture descriptions. The results from the identified typology of multimodal clusters and their functional distribution can give us hints about certain types of mental activities. If further research confirms this coupling, these multimodal clusters can receive a predictive value.
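If one wanted to quantify such couplings, a simple cross-tabulation of configuration types against verbal activity types would be a natural first step. A sketch, assuming each focus has already been labelled with an activity type and a configuration (as in Table 1):

    from collections import Counter

    def crosstab(labelled_foci):
        """Count configuration types per verbal activity type.
        labelled_foci is a list of (activity, configuration) pairs,
        e.g. ("sum", "n-to-1") or ("list", "delay")."""
        table = Counter(labelled_foci)
        for (activity, configuration), count in sorted(table.items()):
            print(f"{activity:10s} {configuration:15s} {count:3d}")

    # crosstab([("sum", "n-to-1"), ("list", "delay"), ("list", "delay")])
    # prints one line per cell, e.g.:  list   delay   2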
Figure 16. Example of visual display and linguistic production in a psycholinguistic task (displays used in van Meulen, Meyer & Levelt 2001).
“In the middle is a tree with one … with three birds doing different things; one is sitting on its eggs, the other is singing and the third female bird is beating a rug or something”.

Figure 17. The complex visual display and linguistic production in the current studies on descriptive discourse.
3. Discussion: Comparison with psycholinguistic studies
As a background for the discussion about temporal (and semantic) relations, we will look at the differences between free picture description and psycholinguistic studies based on eye tracking.

First, we have to consider the character of the visual displays used. Whereas psycholinguistic studies with eye tracking use spatial arrays of one, two or, at the most, three objects (cf. Figure 16), we use a complex picture with many possible relations (cf. Figure 17). It is a true depicted scene with agents (person and animals) that is nameable and has a route to semantic interpretation. The visual inspection and conceptualisation phase takes more time, of course, in a complex picture with many entities, relationships and events involved (cf. Chapter 5, Section 2.1).

Second, we have to consider the linguistic descriptions produced. Free description of a complex picture, lasting for two or three minutes, is, of course, much more complex than a short phrase or utterance formulated about a simple array of two or three objects in the course of a couple of seconds. Also, more extensive previews are necessary when planning and formulating more complex descriptions on a discourse level. Apart from that, the information gained visually during an inspection of an object need not be verbalised immediately, but can be used later in the discourse, maybe as a part of a general description later on (we can see a cultivation process).
Furthermore, the informants have the freedom to structure their discourse on all levels of the discourse hierarchy: via the choice of discourse topics (guided by the composition or by associations, impressions, evaluations: it looks like a typical Svensson-home, everybody is happy), via the choice of different types of superfoci and foci (ideas about the referents, states or events, their relations, judgements about relations and properties) and via the choice of even smaller ‘building blocks’ according to how the scene is perceived and interpreted by the informants. Finally, it happens that the informants reconceptualise the whole picture or parts of it during the process of picture viewing and picture description, something that radically changes their previous description.

In short, the description of the scene was not reduced to object naming. The starting point is the informants’ own complex ideas about the scene, their descriptions and interpretations, their way of making meaning and their use of the presentational, orientational, and organisational functions of descriptive discourse. Everything that is mentioned is said in relation to what has been mentioned in the preceding parts of the discourse (for example a detailed list of objects is presented after a global overview) and what will be mentioned in the following (with a certain amount of conscious forward-planning).

Although experimental control in psycholinguistic studies allows exact measurements of the onset of speech after a single visual fixation (on a millisecond level), it has restrictions. The subject is expected to identify and label isolated objects by using a particular syntactic structure and particular expressions. It is therefore sometimes difficult to generalise the psycholinguistic findings to the comprehension of complex scenes and the cognitive processes underlying the production of natural descriptive discourse.
4. Conclusion
In the first section, I showed a variety of extracted configurations from verbal and visual data during free descriptions of a complex picture. The discourse level of description was at the centre of our interest and the configurations were presented on different levels of discourse hierarchy: within the focus and within the superfocus. I calculated the occurrences of the configurations in the data, looked at their relation to the preceding and following units and connected them to the type of superfocus they were part of. Let us now look at the results from observations of the temporal relations between visual and verbal data in free descriptions of a complex picture:
i. The verbal and the visual signals were not always simultaneous. Visual scanning was also done during pauses in the verbal description and at the beginning of the sessions, before the verbal description had started.
ii. The visual focus was often ahead of speech production (objects were visually focused on before being described). If one visual fixation was connected to one verbal focus, there was usually a delay of approximately 2–3 seconds between them, in particular in list foci. This latency was due to conceptualisation, planning and formulation of a free picture description on a discourse level, which affected the speech-to-gaze alignment and prolonged the eye-voice latency.
iii. Visual focus could, however, also follow speech, meaning that a visual fixation cluster on an object could appear after the describer had mentioned it. The describer was probably monitoring and checking his statement against the visual account.
iv. Areas and objects were frequently re-examined, which resulted either in multiple visual foci or in both multiple visual and verbal foci. As we will see in the following chapter, a refixation on one and the same object could be associated with different ideas.
v. One visual fixation usually did not match one verbal focus and a perfect overlap was very rare. Some of the inspected objects were not mentioned at all in the verbal description; some of them were not labelled as a discrete entity but instead included later, on a higher level of abstraction (there are flying objects).
vi. Often, several visual fixations were connected to one verbal focus, in particular in summarising foci (sum).
vii. The order of objects focused on visually and objects focused on verbally was not always the same, due to the fact that in the course of one verbal focus, preparatory glances were passed and visual fixation clusters landed on ‘new’ objects, long before these were described verbally.
viii. Multiple visual foci were intertwined and well integrated into a larger unit, into a superfocus. This was due to discourse coherence and was related to cognitive processes on the discourse level.
ix. Comparable units could be found, however, not on a focus level. In most cases, the superfocus represented the entity that delimited separable clusters of visual and verbal data. I have therefore suggested that attentional superfoci rather than foci are the suitable units of comparison between verbal and visual data.
We can conclude that many of the observations are related to cognitive processes on the discourse level. The main result concerns similarities between units in our visual and verbal information processing. In connection to the traditional units of speech discussed in the literature, the empirical evidence suggests that the superfocus expressed in longer utterances (or similar larger discourse units) plays a crucial role and is the basic unit of information processing. Apart from that, as the results from my perception study (Chapter 1) indicate, the superfocus is easier to identify and both cognitively and communicatively relevant. I was also able to demonstrate the lag between the awareness of an idea and the verbalisation of that idea: when describing a list of items in free descriptive discourse, the latency between a visual focus and a verbal focus was longer than the latency in psycholinguistic studies on a phrase or clause level. This insight may be valuable for linguists but may also enrich research about the nature of human attention and consciousness. We will return to the issue of speech production and planning in the following chapter.

The second section of this chapter maintains that a certain type of fixation pattern reflects a certain type of mental activity. I analysed how often each type of pattern appears in association with each type of focus and presented the frequency distribution of cluster types as a function of verbal activity type. It shows that this correspondence is present in summarising foci (sum), lists of items (list), localising foci (loc) and substantive foci with categorisation difficulties (cat. diff).

Finally, the current eye tracking study has been discussed in comparison with psycholinguistic studies. Differences have been found in the characteristics of the visual displays used and in the linguistic descriptions produced.

The current chapter dealt with temporal relations between verbal and visual data. The following chapter will concern semantic relations between these two sorts of data.
chapter 7
Semantic correspondence between verbal and visual data
It is not self-evident that we all agree on what we see, even though we look at the same picture. Ambiguous pictures are only one example of how our perception can differ. The reason behind these individual differences is that our way of identifying objects in a scene can be triggered by our expectations, interests, intentions, previous knowledge, context or instructions we get. The ‘picture content’ that, for example, the visitors at an art gallery extract during picture viewing does not very often coincide with the title that the artist has formulated. Current theories in visual perception stress the cognitive basis of art and scene perception: we ‘think’ art as much as we ‘see’ art (cf. Solso 1994). Thus, observers perceive the picture on different levels of specificity, group the elements in a particular way and interpret both WHAT they see and HOW the picture appears to them. All this is reflected in the process of picture viewing and picture description.
The previous chapter dealt with temporal relations in picture viewing and picture descriptions, particularly with the configurations of verbal and visual data, the unit of comparison and the distribution of the various configuration types as a function of verbal activity types. In this chapter, we will use the multimodal method and the analytic tools described in Chapter 5 in order to compare the ‘contents’ of verbal and visual clusters. First, we will take a closer look at the semantic relations between visual and verbal data from on-line picture descriptions. Second, we will review the results concerning levels of specificity. Third, we will focus on spatial proximity and mental groupings that appear during picture description. Fourth, we will discuss the generalisability of our results by presenting the results of a priming study conducted with twelve additional informants. We will also discuss the consequences of the studies for viewing aspects, the role of eye movements, fixation patterns, and language production and language planning.
1. Semantic correspondence
In the first section, we will be looking at the content of the visual and verbal units. In picture viewing and picture description, the observers direct their visual fixations towards certain objects in the scene. They also focus on objects from the scene in their spoken language descriptions. With respect to semantic synchrony, we can ask what type of information is focused on visually when, for instance, someone describes the tree and the three birds in it. Does she fixate all the units that the summary is built upon? Is there a one-to-one relationship between the visual and the verbal foci? It is also interesting to consider whether the order of the described items is identical with the order in which they were scanned visually. Clusters of visual fixations on an object can be caused by different perceptual and cognitive processes. The spoken language description can help us to reveal which features observers are focusing on.

Let us look again at the relation between the visual and the verbal focus in the example presented in Figure 1: ‘In front of the tree which is curved is a stone’. In the visual stream, the object ‘stone’ is fixated three times: twice in the ‘triangle’ configuration and once in the ‘perfect match’ configuration (cf. Chapter 6). If we compare these configurations with the verbal stream, we discover that the relation between the visual and the verbal focus is different. While their relation is one of semantic identity in the ‘perfect match’ (stone = stone), this is not the case in the ‘triangle’ configuration. Here, the concrete object ‘stone’ is viewed from another perspective; the focus is not on the object itself, but rather on its location (stone = in front of the tree). In the ‘perfect match’ configuration, eye movements are ‘pointing’ at a concrete object. In the ‘triangle’ case, on the other hand, there is an indirect semantic relation between vision and spoken discourse.
Figure 1. ‘In front of the tree which is curved is a stone’.
The observers’ eyes are ‘pointing’ at a concrete object in the scene but – as can be revealed from the verbal description – the observer is mentally zooming out and focusing on the position of the object. This latter relation can be compared with the figure-ground (trajector-landmark) relation in cognitive semantics (Lakoff 1987; Langacker 1987, 1991; Holmqvist 1993). According to cognitive semantics, our concepts of the world are largely based on image-schemata, embodied structures that help us to understand new experiences (cf. Chapter 8, Section 1). For instance, in the relation ‘in front of’, the trajector (TR) – the located object – is considered to be the most salient element and should thus be focused on by the observer/describer. In fact, when saying ‘in front of the tree’, the informant actually looks at the trajector (= stone) and directs his visual attention to it (Figure 2). In other words, the saliency of the TR role of the schema co-occurs with the visual (and mental) focus on the stone, while the stone itself is verbalised much later. It would be interesting to conduct new experiments in order to verify this finding from spontaneous picture description for other relations within cognitive semantics.

Nuyts (1996) is right when he notes that Chafe’s concept of human cognition is much more dynamic than the issues discussed within mainstream cognitive linguistics. Chafe is more process-oriented whereas mainstream cognitive linguists are more representation-oriented, studying mental structures and issues of a more strictly conceptual-semantic nature. However, as this example shows, much could be gained by introducing the theoretical concepts from cognitive semantics, such as landmark, trajector, container, prototype, figure-ground, source-goal, centre-periphery, image schema etc. as explanatory devices on the processual level.
Figure 2. ‘In front of a tree’ (stone = trajector, tree = landmark).
1.1 Object-location relation
The object-location relation is a quite common relation in picture viewing and picture descriptions and has different manifestations, as the three examples (a)–(c) show.

Examples (a)–(b)
(a) 0205 in the background in the upper left’
 0206 some cows are grazing,
(b) 0310 (2s) in the middle of the field there is a tree,
In example (a), the cows are partly viewed as concrete objects and partly as a location. This time, the location is not expressed with the help of another concrete object in the scene but in terms of picture composition (in the background). In (b), the observer is not fixating one object in the middle as a representation of the location, as we would expect. Instead, the middle is ‘created’ by comparing or contrasting the two halves of the scene. The observer delimits the spatial location with the aid of multiple visual fixations on different objects. He is moving his eyes back and forth between the two halves of the picture, fixating similar objects on the left and on the right (Findus 1 and Findus 2, soil in the left field and soil in the right field) on a horizontal line. After that, he follows the vertical line of the tree in the middle, fixating the bottom and the top of it. This dynamic sequence suggests that the observer is doing a ‘cross’ with his eyes, as if he were trying to delimit the exact centre of the scene. The semantic relation between the verbal and the visual foci is implicit; the discrete objects are conceived of as pars pro toto representations of the two halves of the picture, suggesting the location of the middle.
1.2 Object-path relation

Example (c)
(c) 0415 and behind the tree’
 0416 there go . probably cows grazing’
 0417 down towards a . stone fence or something
Example (c) is a combination of several object-for-location manifestations. First, cows are fixated as a spatial relation (behind the tree), then they are refixated and categorised as concrete objects, their activity is added (grazing) and, finally, the cows are refixated while the perceived direction or path of their movement is mentioned (down towards a stone fence).
1.3 Object-attribute relation
In addition to the object-location relation, the focus can also be on the object-attribute relation. During evaluations, observers check the details (form, colour, contours, size) but also match the concrete, extracted features with the expected features of a prototypical/similar object: ‘something that looks like a telephone line or telephone poles . that is far too small in relation to the humans’. They compare objects both inside the picture world and outside of it. By using metaphors and similes from other domains they compare the animal world with the human one: ‘one bird looks very human’; ‘there’s a dragonfly like a double aeroplane’.
1.4 Object-activity relation
Finally, we can extract the object-activity relation. When describing an activity, for example when formulating ‘Pettson is digging’, the eye movements depict the relationships among picture elements and mimic them in repetitive rhythmic units, by filling in links between functionally close parts of an object (face – hand-tool; hand-tool, hand-tool-soil). In terms of cognitive semantics, we can say that eye movements fill in the relationship according to a schema for an action (cf. Figure 3).
Figure 3. Pettson is digging.
To summarise the findings concerning the relation between the visual and verbal foci, we can state the following. A visual fixation on an object can mean that:

i. the object itself is focused on and conceptualised on different levels of abstraction (lilies = lilies, cat = animal, something = insect);
ii. the object’s location is focused on (stone = in front of the tree), which can be explained by the figure-ground principle;
iii. the object’s path or direction is focused on (cows = go down towards a stone fence);
iv. the object’s attributes are focused on and evaluated (cat = weird cat, bird = looks human, dragonfly = double aeroplane);
v. the object’s activity is focused on (he’s digging, he’s raking and he’s sowing, an insect which is flying).

We will now turn to levels of specificity in the categorisation process.
2. Levels of specificity and categorisation
When dealing with the specific semantic relationship between the content of the verbal and the visual spotlight, we can then ask: Is it the object as such that is visually focused on, or is it some of its attributes (form, size, colour, contours)? Is it the object as a whole that is focused on, or does the observer zoom in and analyse the details of an object on a finer scale of resolution? Or is the observer rather mentally zooming out to a more abstract level (weird cat, cat, animal, strange thing, something)?

As for the verbal categorisation, the main figure, Pettson, can be described as a person, a man, an old guy, a farmer, or he can be called by his name. His appearance can be described (a weird guy), his clothes (wearing a hat), the activity he is involved in can be specified (he is digging) and this activity can be evaluated (he is digging frenetically). Thus, we can envision a description on different levels of specificity and with different degrees of creativity and freedom.

The tendencies that could be extracted from the data are the following: Informants start either with a specific categorisation which is then verbally modified (in a potato field or something; the second bird is screaming or something like that) or, more often, with a vague categorisation, a ‘filler’, followed by a specification: Pettson has found something he is looking at’ he is looking at the soil’. In other words, a general characteristic of an object is successively replaced by more specific guesses during the visual inspection: ‘he is standing there looking at something’ maybe a stone in his hand that he has dug up’.

Informants also use introspection and report on mental states, as in the following example. After one minute of picture viewing and picture description, the informant verbalises the following idea: when I think about it, it seems as if there were in fact two different fields, one can interpret it as if they were in two different fields’ these persons here (cf. Section 3.4).

In the following, I will show that not only scene-inherent concrete objects or meaningful groups of objects are focused on, but that also new ‘mental’ groupings are created on the way. Let us turn to the aspect of spatial proximity.
3. Spatial, semantic and mental groupings
According to Gestalt psychology, the organising principles that enable us to perceive patterns of stimuli as meaningful wholes are (a) proximity, (b) similarity, (c) closure, (d) continuation, and (e) symmetry. Especially the principles of proximity, similarity and symmetry seem to be relevant for the viewers in the studies of picture perception and picture description: The proximity principle implies that objects placed close to each other appear as groups rather than as a random cluster. The similarity principle means that there is a tendency for elements of the same shape or colour to be seen as belonging together. Finally, the symmetry principle means that regions bounded by symmetrical borders tend to be perceived as coherent figures.

Concerning picture or scene description, we can think of several principles of grouping that could have guided the verbalisation process: the spatial proximity principle (objects that appear close to each other in the picture are described together), the categorical or taxonomical proximity principle (elements that are similar to each other are described together), the compositional principle (units delimited by composition are described together), the saliency principle (expected, preferred and important elements are described together), the animacy principle (human beings and other animate elements are described together) or perhaps even other principles.
3.1 Grouping concrete objects on the basis of spatial proximity
When we consider a group of objects in the scene, it is claimed that observers tend to fixate the closest objects first. This phenomenon is very coercive (because of the proximity to the fovea) and is described as an important principle in the literature (Lévy-Schoen 1969, 1974; referred to by Findlay 1999). And in fact, informants in the current study seemed to be partly guided by this principle: they fixate clusters of objects depicted with spatial proximity. They describe objects in this delimited area during a verbal superfocus, and keep their eyes within this area during visual scanning. The simplest examples are clusters of scene-inherent groups of concrete objects, as in ‘three birds in the tree’. The observer is zooming out, scanning picture elements and conceptualising them on a higher level of abstraction. In terms of cognitive semantics, he is fixating a picture-inherent group of visual objects (birds) and linking them together to a plural trajector (TR).
3.2 Grouping multiple concrete objects on the basis of categorical proximity
The cluster in Figure 4 was based on the picture-inherent spatial proximity of objects. However, observers also fixated and described clusters that were not spatially close. One type of cluster was based on groupings of multiple similar concrete objects (four cats, four versions of Pettson). This type of cluster cannot be based on spatial proximity, since the picture elements are distributed over the whole scene. In this case, we can rather speak about categorical proximity. We have to note that the simultaneity of the depicted objects involved in different activities has probably promoted this guidance.
Figure 4. Three birds in the tree.
Chapter 7. Semantic correspondence between verbal and visual data
invite comparison and clustering. This type of cluster is usually described in a summarising superfocus that is typically followed by a list where each instance is described in detail.
3.3 Grouping multiple concrete objects on the basis of the composition

Another type of cluster that was perceived as a meaningful unit in the scene was 'hills at the horizon'. The observers' eyes followed the horizontal line, filling in links between objects. This was probably a compositionally guided cluster. The observer is zooming out, scanning picture elements on a compositional level. This type of cluster is still quite close to the 'suggested', 'designed' or scene-inherent meaningful groupings.
3.4 Mental zooming out, recategorising the scene

As I have mentioned earlier, pictures invite observers to different scanning paths during their discovery. Especially in later parts of the observation, the clusters are based on thematic aspects and guided by top-down factors. The next example is a visual cluster that corresponds to an idea about the picture composition. After one minute of picture viewing and picture description, the informant recategorises the scene: 'when I think about it, it seems as if there were in fact two different fields' (Figure 5). After a closer inspection of the visual cluster, we can see that the observer is rescanning several earlier identified objects that are distributed in the scene.
Figure 5. Fixation pattern: ‘Two different fields’.
What we cannot see or know on the basis of eye movements alone is how the observer perceives the objects on different occasions. This can be traced thanks to the method combining picture viewing and picture description. In the case of 'two different fields', the objects that are refixated represent a bigger (compositional) portion of the picture and support the observer's reconceptualisation. By mentally zooming out, he discovers an inferential boundary between parts of the picture that he had not perceived before. The scene originally perceived in terms of 'one field' has become 'two fields' as the observer gets more and more acquainted with the picture. This example illustrates the process during which the observer's perception of the picture unfolds dynamically.
3.5 Mental grouping of concrete objects on the basis of similar traits and activities

We are now moving further away from the scene-inherent spatial proximity and approaching the next type of cluster constructed by the observers. This time, the cluster is an example of an active mental grouping of concrete objects based on an extraction of similar traits and activities (cf. Figure 6, 'Flying insects'). The prerequisite for this kind of grouping is a high level of active processing. Despite the fact that the objects are distributed across the whole scene and appear quite different, they are perceived as a unit because of the identified common denominator. The observer is mentally zooming out and creating a unit relatively 'independent' of the 'suggested' meaningful units in the scene. The eye movements mimic the describers' functional grouping of objects.

Figure 6. Fixation patterns for 'Flying insects'.
3.6 Mental grouping of concrete objects on the basis of an abstract scenario

In a number of cases, especially in later parts of the observation, the clusters are based on thematic aspects. The next cluster lacks spatial proximity (Figure 7). The observer is verbalising his impression of the picture content: 'it looks like early summer'. This abstract concept is not identical with one or several concrete objects compositionally clustered in the scene, and the visual fixations are not guided by spatial proximity. The previous scanning of the scene has led the observer to an indirect conclusion about the season of the year. In the visual fixation pattern, we can see large saccades across the whole picture composition. It is obviously a cluster based on a mental coupling of concrete objects, their parts or attributes (such as flowers, foliage, plants, leaves, colours) on a higher level of abstraction. Since the concept 'early summer' is not directly identical with one or several concrete objects in the scene, the semantic relation between verbal and visual foci is inferred. The objects are concrete indices of a complex (abstract) concept. In addition, the relation between the spoken description and the visual depiction is not a categorical one, associated with object identification. Instead, the observer is in a verbal superfocus, formulating how the picture appears to him on an abstract level. Afterwards, during visual rescanning, the observer is searching again for concrete objects and their parts as crucial indicators of this abstract scenario. By refocusing these elements, the observer is in a way collecting evidence for his statement. In other words, he is checking whether the object characteristics in the concrete scene match the symptoms of the described scenario.

Figure 7. Fixation patterns by a person saying 'It looks like an early summer'.

Concrete objects can be viewed differently on different occasions as a result of our mental zooming in and out. We have the ability to look at a single concrete object and simultaneously zoom out and speak about an abstract concept or about the picture as a whole. In terms of creativity and freedom, this type of 'mental grouping' shows a high degree of active processing. These clusters are comparable to Yarbus' (1967) task-dependent clusters where

[…] in response to the instruction 'estimate the material circumstances of the family shown in the picture', the observer paid particular attention to the women's clothing and the furniture (the armchair, stool, tablecloth). In response to the instruction 'give the ages of the people shown in the picture', all attention was concentrated on their faces. In response to the instruction 'surmise what the family was doing before the arrival of the unexpected visitor', the observer directed his attention particularly to the objects arranged on the table, the girl's and woman's hands […]. After the instruction 'remember clothes worn by the people in the picture', their clothing was examined. The instruction 'remember position of the people and objects in the room' caused the observer to examine the whole room and all the objects. […] Finally, the instruction 'estimate how long the unexpected visitor had been away from the family' caused the observer to make particularly intensive movements of the eyes between the faces of the children and the face of the person entering the room. In this case he was undoubtedly trying to find the answer by studying the expression on the faces and trying to determine whether the children recognised the visitor or not. (Yarbus 1967: 192–193)
Although the informants in my study did not receive any specific instructions, their descriptions spontaneously resulted in these kinds of functionally determined clusters. These clusters were not experimenter-elicited, as in Yarbus' case, but naturally occurring (correlational). It was the describers themselves who created such abstract concepts and scenarios, which in turn provoked a distributed visual search for corresponding significant details in the scene. Clusters of this type have temporal proximity but no spatial proximity. They are clearly top-down guided and appear spontaneously in the later parts of the observation/description.
To summarise the findings concerning spatial, semantic and mental groupings, the perceived units could contain any of (i) to (vii):

i. singular concrete objects in the scene
ii. scene-inherent clusters of concrete objects, with spatial proximity ('three birds in the tree')
iii. groups of multiple concrete objects, more or less distributed in the scene, with categorical proximity (Pettsons, cats)
iv. groups of multiple objects across the scene that are horizontally (or vertically) aligned according to the scene composition ('hills at the horizon')
v. mental zooming out, recategorising the scene ('two fields')
vi. mental groupings of concrete objects on the basis of similar traits, properties and activities, across the whole scene ('flying objects')
vii. mental groupings of multiple concrete objects, their parts or attributes (flowers, leaves, colours) on a higher level of abstraction ('early summer').

These findings have consequences for viewing dimensions (Section 4.2) and for the function of eye movements (Section 4.3). It should be noted that the spoken language description may influence eye movements in many respects. Nevertheless, it is not unlikely that patterns based on evaluation and general impression do appear even in free visual scene perception. They may be a natural part of interpreting scenes and validating the interpretation. I can also imagine that evaluative patterns can be involved in preference tasks (when comparing two or more pictures), or in a longer examination of a kitchen scene resulting in the following idea: 'it looks as if somebody has left it in a hurry'. If we want to extract such clusters on the basis of eye movements alone, the problem is that the spoken language description is needed in order to identify them. If we combine both modalities, we are better able to detect these kinds of complex clusters.
4. Discussion
The question arises whether the eye movement patterns connected to abstract concepts (such as 'early summer') are specific to the task of verbally describing a picture, or whether they also appear during perception of the same speech. Are these eye movement patterns caused by the process of having to systematically structure speech for presentation? Are they limited to situations where the picture description is generated simultaneously with picture viewing? Do these patterns appear for speakers only? Or can similar eye movement patterns be elicited even for viewers who observe the picture after they have listened to a verbal description of the scene? In order to answer these questions, we conducted a priming study that will be presented in Section 4.1. If we find similar eye movement patterns even for listeners, this would mean that these patterns are not connected to the effort of planning speech but rather to the semantics of the viewed scene.
Table 1. The list of pre-recorded utterances 1–11.

1. Det finns tre fåglar i trädet. En sitter på sina ägg' den andra fågeln verkar sjunga' och den tredje hon-fågeln piskar matta.
(There are three birds in the tree. One is sitting on its eggs, the other bird is singing and the third she-bird is beating a rug.)
2. Det är nog inte bara en åker utan två olika på bilden.
(There is not only one field but two different fields in the picture.)
3. Det verkar vara tidig sommar.
(It seems to be early summer.)
4. Till höger går en telefonledning.
(To the right there is a telephone line.)
5. Fåglarna till vänster sitter tripp trapp trull bland liljebladen.
(The birds on the left are sitting in line according to size among the lilies.)
6. Det flyger insekter i bilden.
(There are insects flying in the picture.)
7. Det är sent på hösten och naturen förbereder sig inför vintern.
(It is late autumn and nature is preparing for winter.)
8. Alla på bilden är ledsna.
(Everyone in the picture is sad.)
9. Det är sent på kvällen och det håller på att skymma.
(It is late in the evening and it is starting to get dark.)
10. Det är ett typiskt industriområde.
(It is a typical industrial area.)
11. Gubben och katten känner inte varandra.
(The old man and the cat do not know each other.)
4.1 The priming study

The question under investigation was whether we could prime the occurrence of similar viewing patterns by presenting spoken utterances about picture content before picture onset. (I would like to thank Richard Andersson for his help with the study.)
Twelve informants (seven male, five female) listened to a series of eleven pre-recorded utterances that appeared in random order and were each followed by a picture being displayed for 6 seconds. The utterances were partly invented (utterances 7–11 in Table 1), partly generated by the original speakers in the study on simultaneous picture descriptions (utterances 1–6 in Table 1). In order to guarantee that the informants had the same point of departure for the picture viewing, they were asked to fixate an asterisk on the screen while listening to the utterance describing the contents of the scene. After the spoken description, they were shown the complex picture for 6 seconds and directly after that, they were asked to estimate how well the spoken description and the picture agreed, by pressing one of the keys on the evaluation scale 1–5. The instruction was as follows: 'You will hear an utterance and then see a picture. Your task is to judge how well the utterance corresponds to the picture on a scale from 1 to 5. Value 5 means that the agreement between the utterance and the picture is very good and value 1 means that the utterance does not correspond to the picture at all.' This procedure was repeated several times. Eye movements were measured during the visual scanning of the picture. If the visual patterns generated by a single viewer during a spoken picture description are not random, they will be similar to visual patterns elicited by the same spoken description in a group of listeners who visually inspect the picture.
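The trial procedure can be summarised in schematic form as follows. This is a minimal, hypothetical sketch: the display, audio and eye tracking calls are stand-in stubs, not the software actually used in the experiment.

```python
# Schematic, runnable sketch of one block of the priming study. The display,
# audio and eye tracking calls are stubs (prints); in the real experiment they
# would drive the presentation software and the eye tracker.
import random

def show_fixation_asterisk():
    print("[display] * (informant fixates the asterisk while listening)")

def play_utterance(utterance):
    print(f"[audio] {utterance}")

def show_picture_and_record(seconds):
    print(f"[display+tracker] complex picture shown for {seconds} s, gaze recorded")

def collect_rating():
    # stand-in for a keypress on the 1-5 correspondence scale
    return random.randint(1, 5)

def run_priming_block(utterances):
    results = []
    for utterance in random.sample(utterances, len(utterances)):  # random order
        show_fixation_asterisk()        # common point of departure
        play_utterance(utterance)       # pre-recorded spoken description
        show_picture_and_record(6)      # picture displayed for 6 seconds
        results.append((utterance, collect_rating()))
    return results

ratings = run_priming_block(["There are insects flying in the picture.",
                             "It seems to be early summer."])
print(ratings)
```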
Figure 8a. Flying insects: Listeners’ scanpaths (03, 04, 07 and 09). Can be compared to speaker’s original scanpath in Figure 6.
Figure 8b. Telephone line: Listeners' scanpaths (03, 04, 07, and 09). Area of interest is marked with a rectangle on the right.
Figure 8c. Early summer: Listeners' scanpaths (03, 04, 07, and 09). Can be compared to speaker's original scanpath in Figure 7.
This would indicate (a) that we will find similar eye movement patterns within the group of listeners and (b) that the eye movements produced by the listeners will be similar to the clusters produced by the original speaker. In the analysis, I concentrate on visual clusters caused by utterances 1–6 (birds in the tree, two different fields, early summer, telephone line, three birds in the lilies, flying insects), since I have the original visual patterns to compare with. It is difficult to statistically measure the similarity of scanpaths. It is even more difficult to compare dynamic patterns, to measure similarity between spatial and temporal configurations, and to quantitatively capture temporal sequences of fixations. To my knowledge, no optimal measure has been found yet. Figure 8a–c illustrates scanpath similarity for three statistically significant scenarios expressed in utterances 3, 4 and 6 (early summer, telephone line and flying insects) for four randomly chosen listeners (03, 04, 07 and 09). For the scanpaths elicited by these utterances, I calculated visual fixations on a number of areas of interest that were semantically important for the described scenario. The concept 'early summer' does not explicitly say which concrete objects in the scene should be fixated, but I defined areas of interest indicating the scenario: flowers, leaves, foliage, plants etc., which had also been fixated by the original speaker. The aim was to show that the listeners' visual fixation patterns fell within the 'relevant' areas of interest significantly more often than chance would predict. The expected value for the time spent on a 'relevant' area, proportional to its size, was then compared with the actual value. A t-test was conducted, telling us whether the informants looked significantly more at the 'relevant' areas of interest than would be expected by chance. The visual scanpaths among the twelve listeners were rather similar. Whereas the concepts 'two fields' and 'birds in the lilies' were not significant, in the cases of 'flying insects' (p = 0.002) and 'telephone line' (p = 0.001) we got significant results for all listeners. For 'early summer', the results were partially significant (large flowers on the left, p = 0.01; flowers left of the tree, p = 0.05). We can thus conclude that a number of these utterances elicited similar eye movement patterns even for the group of listeners. This implies that the eye movement patterns are not restricted to the process of planning and structuring a verbal description of the scene but are rather connected to the scene semantics. These results are in line with studies by Noton and Stark (1971a, 1971b, 1971c), who showed that subjects tend to fixate regions of special interest according to certain scanpaths.
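The logic of this chance-level analysis can be sketched as follows, assuming, for the sake of illustration, that fixations are stored as (x, y, duration) tuples and that the areas of interest are rectangles; this is a simplified reconstruction, not the original analysis code.

```python
# Minimal sketch of the AOI analysis: compare the observed proportion of
# viewing time spent inside the 'relevant' areas of interest with the
# proportion expected by chance (the summed AOI area relative to the whole
# picture), using a one-sample t-test across listeners. Data layout is
# hypothetical: fixations as (x, y, duration), AOIs as (x0, y0, x1, y1).
from scipy import stats

def dwell_proportion(fixations, aois):
    """Share of total fixation time falling inside any relevant AOI."""
    total = sum(dur for (_x, _y, dur) in fixations)
    inside = sum(dur for (x, y, dur) in fixations
                 if any(x0 <= x <= x1 and y0 <= y <= y1
                        for (x0, y0, x1, y1) in aois))
    return inside / total

def chance_proportion(aois, picture_width, picture_height):
    """Expected share if gaze were distributed evenly over the picture."""
    aoi_area = sum((x1 - x0) * (y1 - y0) for (x0, y0, x1, y1) in aois)
    return aoi_area / (picture_width * picture_height)

def aoi_t_test(fixations_per_listener, aois, width, height):
    observed = [dwell_proportion(f, aois) for f in fixations_per_listener]
    expected = chance_proportion(aois, width, height)
    return stats.ttest_1samp(observed, expected)  # t statistic and p value
```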
When we compare the listeners' scanpaths in Figure 8 with the original speaker's scanpaths in Figures 6–7 (flying insects and early summer), we can see many similarities in the spatial configurations, reflecting similarities between the places where speakers and listeners find relevant semantic information. However, there is a large variation between subjects concerning the temporal order and the length of observation.
4.2 Dimensions of picture viewing

The results concerning spatial, semantic and mental groupings (this chapter, Section 3) can be interpreted in terms of viewing dimensions. Studying simultaneous picture viewing and picture description can help us to understand the dynamics of the ongoing perception process and, in this way, contribute to the area in the intersection of artistic and cognitive theories (cf. Holsanova 2006). Current theories of visual perception stress the cognitive basis of art and scene perception. We 'think' art as much as we 'see' art (Solso 1994). Our way of perceiving objects in a scene can be triggered by our expectations, our interests, intentions, previous knowledge, context or instructions. In his book Visual thinking, Arnheim (1969) writes: "… cognitive operations called thinking are not the privilege of mental processes above and beyond perception but the essential ingredients of perception itself. I am referring to such operations as active exploration, selection, grasping of essentials, simplification, abstraction, analysis and synthesis, completion, correction, comparison, problem solving, as well as combining, separating, putting in context" (Arnheim 1969: 13). He draws the conclusion that "visual perception (…) is not a passive recording of stimulus material but an active concern of the mind" (Arnheim 1969: 37). My data confirm the view of active perception: it is not only the recognition of objects that matters but also how the picture appears to the viewers. Verbal descriptions include the quality of experience, subjective content and descriptions of mental states. Viewers report about (i) referents, states and events, (ii) colours, sizes and attributes, (iii) compositional aspects, and they (iv) mentally group the perceived objects into more abstract entities, (v) compare picture elements, (vi) express attitudes and associations, and (vii) report about mental states. Thanks to the verbal descriptions, several viewing dimensions can be distinguished: the content aspect; the quality aspect; the compositional aspect; mental grouping and mental zooming out; attitudes, associations and introspection; and the comparative aspect (cf. Table 2).
Table 2. Viewing dimensions

Content aspect: looking at concrete objects when reporting about referents, states and events and spatial relations in substantive, summarising, and localising foci.
Quality aspect: returning to certain picture areas and inspecting them from other perspectives; colours, sizes and other attributes of objects were examined and compared (mostly in evaluative foci).
Compositional aspect: delivering comments on picture composition when looking at picture elements.
Mental grouping & mental zooming out: inspecting the region with another idea in mind.
Attitudes & associations: including attitudes and associations that come from outside of the picture world.
Comparative aspect: comparing objects & events with those outside the picture world.
4.3 Functions of eye movements

The results of the comparison of verbal and visual data reported in Chapters 6 and 7 also have consequences for the current discussion about the functions of eye movements. Griffin (2004) gives an overview of the function of speech-related gazes in communication and language production. The psycholinguistic eye tracking studies reported there involve simple visual stimuli and language production on the phrase or sentence level. However, by studying potentially meaningful sequences or combinations of eye movements on complex scenes and simultaneous picture description on a discourse level, we can extend the list of eye movement functions as follows (cf. Table 3).
Table 3. Function of eye movement patterns extracted from free on-line descriptions of a complex picture.

Counting-like gazes: aid in sequencing in language production when producing quantifiers, before uttering plural nouns. Example: 'three birds', 'three . eh four men digging in the field' (list foci).
Gazes in message planning: shifting the gaze to the next object to be named; aid in planning of the next focus; aid in interpretation of picture regions. Example: on the level of the focus, the superfocus and the discourse topic (subst, sum, eval foci).
Gazes reflecting categorising difficulties: gazing at objects while preparing how to categorise them within a superfocus; gazes at objects or parts of objects until the description is retrieved, even during dysfluencies; such gazes reflect the allocation of mental resources. Example: cat. diff. foci.
Monitoring gazes: speakers sometimes return their gazes to objects after mentioning them, in a way that suggests that they are evaluating their utterances; re-fixations. Example: sum foci.
Comparative gazes (reparative gazes or re-categorisation gazes): aid in interpretations; reparation/self-editing in content, changing point of view. Example: 'two fields' (subst foci, meta, eval).
Organisational gazes: organisation of information, structuring of discourse, choice of discourse topics. Example: 'Now I have described the left hand side' (introspect, meta foci).
Ideational gazes: when referring to objects, when using a modifier, when describing objects' location or action. Example: subst, loc foci.
Summarising gazes: fixating multiple objects that share a common activity or a common taxonomic category. Example: 'flying objects' (sum).
Abstract, inferential gaze-production link: reflected in a speaker's gaze on flowers and leaves before suggesting that a scene depicts early summer. Example: 'early summer' (subst, sum).

4.4 The role of eye fixation patterns

Furthermore, the results reported in Chapters 6 and 7 can reveal interesting points concerning the role of fixation patterns. During picture viewing and picture description, informants make many refixations on the same objects. This is
consistent with Yarbus' (1967) conclusion that eye movements occur in cycles and that observers return to the same picture elements several times, and with Noton and Stark (1971b), who coined the word 'scanpath' to describe the sequential (repetitive) viewing patterns of particular regions of the image. According to these authors, a coherent picture of the visual scene is constructed piecemeal through the assembly of serially viewed regions of interest. In the light of my visual and verbal data, however, it is important to point out that a refixation on an object can mean something other than the first fixation: a fixation on one and the same object can correspond to several different mental contents. This finding also confirms the claim that meaning relies on our ability to conceptualise the same object or situation in different ways (Casad 1995: 23). Fixation patterns can reveal more than single fixations. However, we still need some aid, some kind of referential framework, in order to infer what ideas and thoughts these fixations and scanpaths correspond to. As Viviani (1990) and Ballard et al. (1996) pointed out, there is an interpretation problem: we need to relate the overt structure of eye scanning patterns to underlying internal cognitive states. The fixation itself does not indicate what properties of an object in a scene have been acquired. Usually, the task is used to constrain and interpret fixations and scanpaths on the objects in the scene. For instance, Yarbus' (1967) instructions ('give the ages of the people shown in the picture' etc.) resulted in different scanpaths and allowed a functional interpretation of the eye movement patterns: they showed which pieces of information had been considered relevant for the specific task and were therefore extracted by the informants. However, as we have seen in the analysis of semantic relations, we can also find similar spontaneous patterns in free picture description, without there being a specific viewing task. In this case, the informants' attempt to formulate a coherent description of the scene and their spontaneous verbal description may be viewed as a source of top-down control. The question is whether the task offers enough explanation for the visual behaviour or whether the verbal description is the optimal source of explanation for the functional interpretation of eye movement patterns. In certain respects, the combination of visual scanpaths and verbal foci can reveal more about the ongoing cognitive processes. If we focus on the discourse level and include different types of foci and superfoci (cf. Chapter 2, Section 1), we can get more information about the informants' motivations, impressions, attitudes and (categorisation) problems (cf. Chapter 6, Section 3).
4.5 Language production and language planning

The question arises how the scanning and description processes develop. Concerning the temporal aspect we can ask: Does the thought always come before speech? Do we plan our speech globally, beforehand? Or do we plan locally, on an associative basis? Do we monitor and check our formulations afterwards? Several answers can be found in the literature. Levelt (1989) assumes planning on two levels: macroplanning (i.e. elaboration of a communicative goal) and microplanning (decisions about the topic or focus of the utterance etc.). Bock and Levelt (1994, 2004) maintain that speakers outline clause-sized messages before they begin to sequentially prepare the words they will utter. Linell (1982) distinguishes between two phases of utterance production: the construction of an utterance plan (the decision about semantic and formal properties of the utterance) and the execution of an utterance plan (the pronunciation of the words). He also suggests a theory that represents a compromise between Wilhelm Wundt's model of complete explicit thought and Hermann Paul's associative model. According to Wilhelm Wundt's holistic view (Linell 1982; Blumenthal 1970), the speaker starts with a global idea ('Gesamtvorstellung') that is later analysed part-by-part and sequentially organised into an utterance. Applied to the process of visual scanning and verbal description, the observer would have a global idea of the picture as a whole, as well as of the speech genre 'picture description'. The observer would then decide what contents she would express verbally and in what order: whether she would describe the central part first, the left and the right part of the picture later on, and the foreground and the background last, or whether she would start from the left and continue to the right. If such a procedure were followed systematically, the whole visual and verbal focusing process would be guided by a top-down principle (e.g. by the picture composition). This idea would then be linearised, verbalised and specified in a stepwise fashion. Evidence against a holistic approach includes hesitations, pauses and repetitions, revealing that the utterance has not been planned as a whole beforehand. Evidence supporting this pre-structured way of description, on the other hand, is reflected by the combination of summarising foci (sum) followed by a substantive list of items (list). According to Hermann Paul's associative view, the utterance production is "a more synthetic process in which concepts (expressed in words and phrases) are successively strung together by association processes" (Linell 1982: 1). Applied to our case, the whole procedure of picture viewing and picture description
would be locally determined and associative. It would develop on the basis of the observer's locally occurring formulations, observations, associations, emotions, memories and knowledge of the world. The observer would not have a global, systematic strategy but would proceed step by step, using different available resources. Evidence against an associative model, however, includes slips of the tongue and contaminations revealing forward planning. Yet another possibility is that visual scanning and verbal description are sequentially interdetermined and that a combination of the above-mentioned approaches appears. In practice, that would mean that within a temporal sequence, certain thoughts and eye movement patterns can cause a verbal description. Alternatively, certain formulations during speech production can cause a visual verification phase. The visual verification pattern can then lead to a modification of the concept and to a reformulation of the idea, which in turn may be confirmed by a subsequent refixation cluster. The abstract concept 'early summer' gives us some evidence that the relation between what is seen, conceptualised and said is interactive. Even if there has been some local planning beforehand, the plan can change due to the informant's interaction with the visual scene, her mental representation of a scenario and her verbal formulation of it. Thanks to this interaction, certain lines of thought can be confirmed, modified or lead to reconceptualisation. Linell (1982) suggests a theory that is a compromise between the two above-mentioned opposite views on the act of utterance production.

On the one hand, there is no complete explicit thought or Gesamtvorstellung before the "concrete" utterances have been constructed. On the other hand, there is ample evidence against a simplistic associationist model. In utterance production there is a considerable amount of advance planning and parallel planning going on (though there seem to be limits to the capacity for planning). Furthermore, although the retrieval of words and, to some extent, the application of construction methods involve both automatic and partly chance-dependent (probabilistic) processes, there also seems to be a monitoring process going on by which the speaker checks, as it were, the words and phrases that are constructed against some preconscious ideas of what he wants to say. (Linell 1982: 14)
In the light of my data from free descriptive discourse, not every focus and superfocus is planned in the same way and to the same extent. We find a) evidence for conceptualisation and advanced planning on a discourse level, in particular in sequences where the latency between the visual examination and
the speech production is very long. We also find evidence for b) more associative processes, in particular in sequences where the speakers start executing their description before they have counted all the instances or planned the details of the whole utterance production (I see three … eh . four Pettsons doing different things, there are one … three birds doing different things). Finally, we find evidence for c) monitoring activities, where the speakers afterwards check the expressed concrete or abstract concept against the visual encounter. I therefore agree with Linell’s view that the communicative intentions may be partly imprecise or vague from the start and become gradually more structured, enriched, precise and conscious through the verbalisation process.
5. Conclusion
The aim of the first section has been to look more closely at the semantic relations between verbal and visual clusters. My point of departure has been complex ideas expressed as verbal foci and verbal superfoci in free simultaneous spoken language descriptions, and processed eye movement data from viewing a complex picture, both aligned in time and displayed on the multimodal score sheets. Using a sequential method, I have compared the contents of the verbal and visual 'spotlights' and thereby also shed light on the underlying cognitive processes. Three aspects have been analysed in detail: the semantic correspondence, the level of specificity and the spatial proximity in connection with the creation of new 'mental units'. The semantic relations between the objects focused on visually and described verbally were often implicit or inferred. They varied between object-object, object-location, object-path, object-activity and object-attribute. Informants were not only judging the objects' size, form, location, prototypicality, similarity and function, but also formulating their impressions and associations. It has been suggested that the semantic correspondence between verbal and visual foci is best compared on the level of larger units and sequences, such as the superfocus. The combination of visual and verbal data showed that objects were focused on and conceptualised on different levels of specificity. The dynamics of the observer's on-line considerations could be followed, ranging from vague categorisations of picture elements, over comments on one's own expertise, mentions of relevant extracted features and formulations of more specific guesses about an object category, to evaluations. Objects' locations and attributes were
described and evaluated; judgements about properties and relations between picture elements were formulated; metaphors and similes were used as a means of comparison; and finally, objects' activities were described. We have witnessed a process of stepwise specification, evaluation, interpretation and even re-conceptualisation of picture elements and the picture as a whole. The perceived and described clusters displayed an increasing amount of active processing. We saw a gradual dissociation between the visual representations (objects present here-and-now in the scene) and the discourse-mediated mental representations built up in the course of the description. Informants started by looking at scene-inherent objects, units and gestalts. Apart from scene-inherent concrete picture elements with spatial proximity, they also described new creative groupings along the way. As their picture viewing progressed, they tended to create new mental units that were more independent of the concrete picture elements. They made large saccades across the whole picture, picking up information from different locations to support concepts that were distributed across the picture. With increasing cognitive involvement, observers and describers tended to return to certain areas, change their perspective and reformulate or re-categorise the scene. It became clear that their perception of the picture changed over time. Objects across the scene – horizontally or vertically aligned – were grouped according to the scene composition. Multiple similar elements distributed in the scene were clustered on the basis of a common taxonomic category. The dynamics of this categorisation process was reflected in many refixations in picture viewing, and in reformulations, paraphrases and modifications in picture description. Active mental groupings were created on the basis of similar traits, symmetry and common activity. The process of mental zooming in and out could be documented, where concrete objects were refixated and viewed on another level of specificity or with another concept in mind. During their successive picture discovery, the informants also created new 'mental' groupings across the whole scene, based on abstract concepts. The priming study showed that some of the eye movement patterns, produced spontaneously when describing a picture either in concrete or abstract terms, were not restricted to the situation where a speaker simultaneously describes a picture. Similar viewing patterns could be elicited even for listeners, by using utterances from the picture description. In an analysis of fixation behaviour on spatial areas of interest, we got significant or partially significant results for a number of utterances.
I have pointed out the relevance of the combination of visual and verbal data for the delimitation of viewing aspects, for the role of eye movements and fixation patterns, and for the area of language production and language planning. This chapter has been concerned with the semantic correspondence between verbal and visual data. In the following chapter, I will present studies on mental imagery associated with picture viewing and picture description.
chapter 8
Picture viewing, picture description and mental imagery
It is sometimes argued that we do not visualise, at least not in general, when we understand language. As long as the answer to this question depends on introspective observation, the matter cannot be objectively settled. Some people claim that they are visual thinkers, while others claim they are verbal thinkers and do not use any visualisations at all. There is, however, indirect evidence in favour of visualisations in communication. Speakers spontaneously use gestures and facial expressions, and draw pictures, maps and sketches when they communicate. The use of iconic and pictorial representations is useful in communication, since it helps the speaker and the listener to achieve understanding. Pictures, maps and sketches are external representations showing how the described reality is conceptualised. Image schemata, mental imagery and mental models are linked to perception. With eye tracking methodology, these different types of visualisation processes can be precisely traced and revealed.
In this chapter, we will further investigate picture descriptions, visualisations and mental imagery. In the first section of this chapter, we will focus on the issue of visualisations in discourse production and discourse comprehension. In particular, we will look at an example of how focus movements and the participants' mental images can be reconstructed from a spontaneous conversation including drawing. In the second section, we will ask whether speakers use mental images in descriptive discourse and review the results of our studies on picture description and mental imagery. After a short discussion about possible applications in educational contexts, we will conclude.

The reader should by now be acquainted with the way speakers create coherence when connecting the subsequent steps in their descriptions, how they 'package' their thoughts in small 'digestible' portions and present them, step by step, in a certain rhythm to the listeners (Chapter 3). The conscious focus of attention is moving (Chafe 1994). The human ability to conceptualise the
same objects and scenes in different ways has been demonstrated in Chapter 7. Semantic relations between visual and verbal foci in descriptive discourse (in front of the tree is a stone) were explained by introducing some of the theoretical concepts from cognitive semantics, such as landmark, trajector, container, prototype, figure–ground, source–goal, centre–periphery, image schema etc. (Chapter 7). The question is whether speakers and listeners think in images during discourse production and discourse understanding.
1. Visualisation in discourse production and discourse comprehension
According to cognitive linguistics, the meanings of words are grounded in our everyday perceptual experience of what we see, hear and feel (cf. the finger push example in Figure 1). In their book Metaphors we live by, Lakoff & Johnson (1980) showed how language comprehension is related to spatial concepts. Our concepts about the world are called mental models. Mental models are models of external reality that are built up from perception, imagination, knowledge, prior experience and comprehension of discourse, and that can be changed and updated (Johnson-Laird 1983; Gentner & Stevens 1983). Mental models are, for instance, used to make decisions in unfamiliar situations and to anticipate events. They are, in turn, largely based on image schemata. Image schemata are mental patterns that we learn through our experience and through our bodily interaction with the world (Lakoff & Johnson 1980) and which help us to understand new experiences. On the basis of our bodily (perceptual) experience of 'force' (pressure and squeezing), we get an embodied concept of 'force/pressure' that can be generalised in a visual metaphor, presented in Figure 2. This embodied concept is then reflected in verbal metaphors (They are forcing us out. We'd better not force the issues. She was under pressure. He couldn't squeeze more out of them.).
Figure 1. Finger push
Figure 2. Visual metaphor for ‘force/pressure’
Table 1. From everyday experience via embodied concepts to communication

Everyday experience: force/pressure
Mental models: embodied concept of force/pressure
Image schema: visual metaphor of force/pressure
Communication: verbal metaphors of force/pressure; the embodied concept of force/pressure as a resource for mutual understanding
When speakers use these formulations in communication, they evoke mental images in the hearers, and these images serve as an important resource for mutual understanding (cf. Table 1). Holmqvist (1993) makes use of image schemata when he describes discourse understanding in terms of evolving mental images. Speakers' descriptive discourse contains concepts that appear as schematic representations and establish patterns of understanding. When speakers want to describe a complex visual idea, e.g. about a scene, a navigation route or an apartment layout (cf. Labov & Linde 1975; Taylor & Tversky 1992), they have, depending on their goals, to organise the information so that their partner can understand it (Levelt 1981). By uttering ideas, speakers evoke images in the minds of the listeners, the consciousness of the speaker and the listeners becomes synchronised, and the listeners co-construct the meanings. In face-to-face spontaneous conversation, this process has a more dynamic and cooperative character (Clark 1996). The partners try to achieve joint attention, formulate complementary contributions and interactively adjust their 'visualisations'. Quite often, the partners draw simultaneously with their verbal descriptions. Sketches and drawings are external spatial-topological representations reflecting the conceptualisation of reality and serving as an aid for our memory (Tversky 1999). The partners must create a balance between what is being said and what is being drawn. The utterances and non-verbal actions (such as drawing and gesturing) can be conceived of as instructions for the listeners: how to change the meaning, how something is perceived, how one thinks or feels, or what one wants to do with something that is currently in the conscious focus (Linell 2005). Drawing plays an important role in descriptive discourse (Chapter 3, Section 2), since it functions as:

a. a useful tool for the identification of 'where are we now?',
b. a 'storage of referents',
c. an external memory aid for the interlocutors,
d. a support for visualisation,
e. an expressive way of underlining what is said, and
f. a representation of a whole abstract problem discussed in the conversation.

In a chapter on focus movements in spoken discourse (Holmqvist & Holsanova 1997), we reconstruct the participants' images during spontaneous descriptive discourse. A speaker spontaneously draws an abstract sketch illustrating the poor road quality in a descriptive discourse. The drawing and conversation are recorded to see whether the hypothesised movements of a joint focus of attention will leave traces in the discourse and in the drawing, and consequently tell us how the understanding develops. As the speaker moves through his explanation, he uses a combination of drawing, physical pointing and linguistic means in order to help the listeners identify the currently focused ideas about the referents, relations, topic shifts etc. In this conversational data, we investigated the visual focus movements in the abstract, spontaneously drawn picture and their relations to the verbal focus movements in the spoken discourse, in order to reveal how the participant's mental image is constructed during the progression of discourse (Holmqvist & Holsanova 1997). Sketches and drawings allow revisions, regroupings, refinements and reinterpretations and are therefore an important thinking tool for design etc. (Suwa et al. 2001). It is important to outline the external representations according to the users' mental model, i.e. the way the users conceptualise how everyday objects and situations are structured or how a system works. In our analysis, we have paid attention to what is being focused on at each point of discourse and related that to elements in the drawing. Focus in the drawings is illustrated with a white spotlight. Those parts that are currently not in focus are shadowed – they lie in the attentional outskirts (see the abbreviated Example 1 below).
Example 1

here is the whole spectrum,
… here is much money’ and very good quality, Mhmh … they do a good job, but they know it costs a little more to do a good job, Mhm ... then we have down here we have … the fellows who come from Italy and all those countries, they spend the money’ quickly and they don’t care, ... [mhm] so we have more or less Scandinavians and Scots up here then we have the Italians and the Portu[guese][hn] (…………………) now we can build roads and all this stuff. (…………………) .. then they trade down here, .... mhm .... and of course when . these have been given work enough times
then . these wind up bankrupt . and disappear. (………………………)
and then this thing spreads out’ (………………………)
... then we have a new group here’ who is the next generation of road builders.
We claimed that the consciousness of the speaker and of the listeners are synchronised, that they create a joint focus of attention and co-construct meaning. This is achieved by lexical markers of verbal foci, by drawing and by pointing. Let us first look at how the movement of a conscious focus of attention is reflected in the verbal description. The speaker marks the topic shifts and transitions between verbal foci in the unfolding description with the help of discourse markers (cf. Chapter 3, Section 1.1.1, Holmqvist & Holsanova 1997). For instance, 'then' and 'and then' mark a progression within the same superfocus, whereas 'and now' signals moving to a new superfocus. 'So' marks moving back to a place already described, preparing the listener for a general summary, and the expressions 'and now it's like this', 'and then they do like this' serve as a link to the following context, preparing the listener for a more complex explication to follow. The describer guides the listeners' attention by lexical markers like 'then down here', meaning: now we are going to move the focus (regulation), and we are moving it to this particular place (direction/location, deixis) and we are going to stay in this neighbourhood for some time (planning/prediction). The
marked focus movements therefore also influence the pronominal reference – the pronoun could primarily be found within the focused part of the image. Using a delimited, common source of references makes it easier for the speaker and the listener to coordinate and to understand each other. With the help of drawing (and pointing at the drawing), the partners can adjust their visualisations. The drawing as an external representation allows cognitive processes to be captured, shared and elaborated. In A’s spontaneous drawings, the spatial arrangement of elements is used to represent a) a concrete spatial domain: geography, sizes, spatial relations; b) non-spatial domains: amounts of money, temporal relations; c) abstract domains: intensity, contrast and quality dimensions, ethnic dimension, and d) dynamic processes: stages in a decision process, development over time. When we look at the data from picture descriptions in a naturally occurring conversation, we can see that external visualisations help the partners to achieve a joint focus of attention and to coordinate and adjust their mental images during meaning making. However, as we will see in the next section, mental imagery and inward visualisations of different kinds are also important to the speakers themselves.
2. Mental imagery and descriptive discourse
As we have seen earlier, image schemata such as 'in front of' are similar to the eye movement patterns during a spoken picture description (Chapter 7). This indicates that cognitive schemata might also be important for the speaker him/herself. On the other hand, some informants report that they are verbal and not visual thinkers, and research on visual-spatial skills has shown that there are individual differences (Hegarty & Waller 2004, 2006; cf. Chapter 4, Sections 1 and 2). In order to verify the assumption that we use our ability to create pictures in our minds, we conducted a series of studies on mental imagery during picture description. The results of these studies contribute to our understanding of how speakers connect eye movements, visualisations and spoken discourse to a mental image. The reader might ask: What is mental imagery and what is it good for? Finke (1989: 2) defines mental imagery as "the mental invention or recreation of an experience that in at least some respects resembles the experience of actually perceiving an object or an event, either in conjunction with, or in the absence of, direct sensory stimulation" (cf. also Finke & Shepard 1986). In popular
terms, mental imagery is described as 'visualising' or 'seeing something in the mind's eye'. Since mental images are closely connected to visual perception, this mental invention or recreation of experience almost always results in observable eye movements. The investigation of mental images in humans goes back to the experiments on mental rotation conducted by Shepard and Metzler (1971). It has been proposed that we use mental imagery when we mentally recreate personal experiences from the past (Finke 1989), when we retrieve information about physical properties of objects or about physical relationships among objects (Finke 1989), when we read novels, plan future events or anticipate possible future experiences, when we imagine transformations by mental rotation (Finke 1989) and mental animation (Hegarty 1992), and when we solve problems (Huber & Kirst 2004; Yoon & Narayanan 2004; Engel, Bertel & Barkowsky 2005). Mental images are closely related to mental models. In other words, imagery plays an important role in memory, planning and visual-spatial reasoning and is considered a central component of our thinking. How can we observe this phenomenon and prove that we think in images? Eye tracking methodology has become a very important tool in the study of human cognition, and current research has found a close relation between eye movements and mental imagery. Already in our first eye tracking study (Holsanova et al. 1998), we found striking similarities between the informants' eye movement patterns when looking at a complex picture and their eye movements when looking at a white board and describing the picture from memory. The results were interpreted as partial support for the hypothesis that mental scanning (Kosslyn 1980) is used as an aid in recalling picture elements, especially when describing their visual and spatial properties. Brandt and Stark (1997) and Laeng and Teodorescu (2001) have shown that spontaneous eye movements closely reflect the content and spatial relations of the original picture or scene. In order to extend these findings, we conducted a number of new eye tracking studies, with both verbal and pictorial elicitation, both in light and in darkness (Johansson, Holsanova & Holmqvist 2005, 2006). In the following, I will mention two of these studies: a study with twelve informants who listened to a pre-recorded spoken scene description and later retold it from memory, and a study with another twelve informants who were asked to look at a complex picture for a while and then describe it from memory.
2.1 Listening and retelling In the first study, twelve informants – six female and six male students at Lund University – listened to a pre-recorded spoken scene description and later retold it from memory. The goal of this study was to extend the previous findings (Demarais & Cohen 1998; Spivey & Geng 2001; Spivey, Tyler, Richardson & Young 2000) in two respects: First, instead of only studying simple directions, we focused on the complexity of the spatial relations (expressions like at the centre, at the top, between, above, in front of, to the far right, on top of, below, to the left of). Second, apart from measuring eye movements during the listening phase, we added a retelling phase where the subjects were asked to freely retell the described scene from memory. Eye movements were measured during both phases. To our knowledge, these aspects had not been studied before. In addition, we collected ratings of the vividness of imagery both during the listening and retelling phase and asked the subjects whether they usually imagine things in pictures or words. The pre-recorded description was the following (here translated into English):
Imagine a two dimensional picture. At the centre of the picture, there is a large green spruce. At the top of the spruce a bird is sitting. To the left of the spruce and to the far left in the picture there is a yellow house with a black tin roof and white corners. The house has a chimney on which a bird is sitting. To the right of the large spruce and to the far right in the picture, there is a tree, which is as high as the spruce. The leaves of the tree are coloured in yellow and red. Above the tree, at the top of the picture, a bird is flying. Between the spruce and the tree, there is a man in blue overalls, who is raking leaves. In front of the spruce, the house, the tree and the man, i.e. below them in the picture, there is a long red fence, which runs from the picture’s left side to the picture’s right side. At the left side of the picture, a bike is leaning towards the fence, and just to the right of the bike there is a yellow mailbox. On top of the mailbox a cat is sleeping. In front of the fence, i.e. below the fence in the picture, there is a road, which leads from the picture’s left side to the picture’s right side. On the road, to the right of the mailbox and the bike, a black-haired girl is bouncing a ball. To the right of the girl, a boy wearing a red cap is sitting and watching her. To the far right on the road a lady wearing a big red hat is walking with books under her arm. To the left of her, on the road, a bird is eating a worm.
Note: The initial Swedish verb was "Föreställ dig…", which is neutral with respect to the modality (image or word) of thinking.
Figure 3. Spatial schematics for the objects in the pre-recorded description.
Figure 4. iView analysis of the first 67 seconds for one subject. (A) 0–19 sec. Spruce and bird at the top. (B) 19–32 sec. The house to the left of the spruce, with a bird on top of the chimney. (C) 32–52 sec. The tree to the right of the house and the spruce. (D) 52–67 sec. The man between the spruce and the tree, and the fence in front of them running from left to right.
Spatial schematics for the objects in the pre-recorded description can be seen in Figure 3. The experiment consisted of two main phases, one listening phase in which the subjects listened to the verbal description, and one retelling phase in which the participants retold the description they had listened to in their own words. Eye movements were recorded both while subjects listened to the spoken description and while they retold it.
2.1.1 Measuring spatial and temporal correspondence

We developed a method for analysing the relative position of an eye movement compared to the overall structure of the scanpath. Figure 4 shows four examples of how the eye movements of one subject are represented in iView at four successive times during the description (circles represent fixations, lines represent saccades).

2.1.2 Eye-voice latencies

Apart from measuring spatial correspondence, we also needed temporal criteria for the occurrence of a fixation, to ensure that it concerns the same object that is mentioned verbally. In a study of simultaneous descriptions of the same stimulus picture, I found that eye-voice latencies, i.e. the time from when an object is mentioned until the eyes move to the corresponding location, typically range between 2 and 3 seconds (Holsanova 2001: 104f., cf. Chapter 6, Section 1.1.2). Whereas eye movements in the listening phase can only occur after the mentioning of an object, eye movements in the retelling phase may precede the mentioning of an object. That is, some subjects first moved their eyes to a new position and then started the retelling of that object, while others started the retelling of an object and then moved their eyes to the new location. On average, the voice-eye latency was 2.1 seconds during the description and 0.29 seconds during the retelling of it. The maximum value over all subjects was 5 seconds, both during the description and the retelling of it. Thus, a 5 second limit was chosen, both before and after the verbal onset of an object. In sum, the eye movements of the subjects were scored as global correspondence, local correspondence or no correspondence. Eye movements to objects were considered correct in global correspondence when fulfilling the following spatial and temporal criteria:

1. The eye movement to an object must finish in a position that is spatially correct relative to the subject’s eye gaze pattern during the entire description or retelling.
2. In the listening phase, the eye movement from one position to another must appear within 5 seconds after the object is mentioned in the description.
3. In the retelling phase, the eye movement from one position to another must appear within 5 seconds before or after the subject mentions the object.

It is known that the retrieved information about physical relationships among objects can undergo several changes (cf. Finke 1989; Kosslyn 1980; Barsalou 1999). Several experiments have shown that subjects rotate, change size, change shape, change colour, and reorganise and reinterpret mental images (Finke 1989). We found similar tendencies in our data. Such image transformations may affect our results, in particular if they take place in the midst of the description or the retelling phase. Therefore, we devised an alternative local correspondence measure. Eye movements were considered correct in local correspondence when fulfilling the following spatial and temporal criteria:

1. When an eye movement moves from one object to another during the description or the retelling, it must move in the correct direction.
2. In the listening phase, the eye movement from one position to another must appear within 5 seconds after the object is mentioned in the description.
3. In the retelling phase, the eye movement from one position to another must appear within 5 seconds before or after the subject mentions the object.

The key difference between global and local correspondence is that global correspondence requires fixations to take place at the categorically correct spatial position relative to the whole eye tracking pattern. Local correspondence only requires that the eyes move in the correct direction between two consecutive objects in the description. Examples and schematics of this can be seen in Figures 5 and 6. No correspondence was scored if neither the criteria for local correspondence nor those for global correspondence were fulfilled (typically, when the eyes did not move or moved in the wrong direction). For a few subjects, some eye movements were re-centred and shrunk into a smaller area (thus yielding more local correspondence). However, the majority of eye movements kept the same proportions during the listening phase and
Figure 5. (A) Example of mostly global correspondences. (B) Example of mostly local correspondences. (C) Example of no correspondences at all.
Figure 6. Schematics of global (A) and local (B) correspondence.
Figure 7. Comparison of one person’s eye movement patterns during listening (A) and retelling phase (B).
the retelling phase. A comparison of one and the same person’s eye movement patterns during the listening and retelling phases can be seen in Figure 7. The proportions of correct eye movements were significant in both the listening and the retelling phase, in both the local and the global correspondence coding. When listening to the pre-recorded scene description (and looking at a white board), 54.8 percent of the eye movements were correct in the global correspondence coding and 64.3 percent were correct in the local correspondence coding. In the retelling phase, more than half of all objects mentioned had correct eye movements, according to the conservative global correspondence criteria (55.2 percent; p = 0.004). Resizing effects, i.e. the fact that
informants may have shrunk, enlarged and stretched the image, were quite common during the retelling. It was also common that informants re-centred the image from time to time, thus yielding local correspondence. When re-centring and resizing of the image were allowed for – as with local correspondence – almost three quarters of all objects had correct eye movements (74.8 percent, p = 0.0012). The subjects’ spatial pattern of eye movements was highly consistent with the original spatial arrangement.
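As a rough illustration of the scoring logic described above, the following sketch classifies one verbalised object as global, local or no correspondence. The data representation and function names are mine, not the study’s actual coding scheme, and the spatial judgements (correct position, correct direction) are assumed to have been coded beforehand.

```python
from dataclasses import dataclass

@dataclass
class GazeShift:
    """One coded eye movement towards a verbalised object."""
    arrival_time: float  # when the eyes arrive at the new position (s)
    position_ok: bool    # spatially correct relative to the whole scanpath?
    direction_ok: bool   # moved in the correct direction from the previous object?

def score(mention_time: float, shift: GazeShift,
          retelling: bool, window: float = 5.0) -> str:
    """Classify a gaze shift as 'global', 'local' or 'none' correspondence.

    Temporal criterion: in the listening phase the eyes may only arrive
    within `window` seconds AFTER the mention; in the retelling phase they
    may also precede the mention by up to `window` seconds.
    """
    latency = shift.arrival_time - mention_time
    lower = -window if retelling else 0.0
    if not (lower <= latency <= window):
        return "none"
    if shift.position_ok:   # strict criterion: categorically correct position
        return "global"
    if shift.direction_ok:  # weaker criterion: correct direction only
        return "local"
    return "none"           # eyes did not move, or moved the wrong way

# Example: an object mentioned at t = 12.0 s, eyes arriving 2.1 s later
print(score(12.0, GazeShift(14.1, position_ok=True, direction_ok=True),
            retelling=False))  # -> 'global'
```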
2.2 Picture viewing and picture description

In the second study, we asked another twelve informants – six female and six male students from Lund University – to look at a complex picture for a while and then describe it from memory. We again chose Sven Nordqvist’s (1990) picture as a complex visual stimulus. The study consisted of two main phases, a viewing phase in which the informants inspected the stimulus picture and a description phase in which the participants described this picture from memory in their own words while looking at a white screen. Eye movements were recorded during both phases. At the beginning of the viewing phase, each informant received the following instructions:

You will see a picture. We want you to study the picture as thoroughly as possible and to describe it afterwards.
The picture was shown for about 30 seconds, and was then covered by a white screen. The following description phase was self-paced: the informants usually took 1–2 minutes to describe the picture. After the session, the informants were asked to rate the vividness of their visualisation during the viewing and the retelling phase on a scale ranging from 1 to 5. They were also asked to assess whether they usually imagine things in pictures or in words. The descriptions were transcribed in order to analyse which picture elements were mentioned and when. The eye movements were then analysed according to objects derived from the descriptions. For instance, when an informant formulated the following superfocus,

01:20 – And ehhh to the left in the picture’
01:23 – there are large daffodils,
01:26 – it looks like there were also some animals there perhaps,
we would expect the informant to move her eyes towards the left part of the white screen during the first focus. Then it would be plausible to inspect the
referent of the second focus (the daffodils). Finally, we could expect the informant to dwell for some time within the daffodil area – on the white screen – searching for the animals (three birds, in fact) that were sitting there in the stimulus picture. The following criteria were applied in the analysis in order to judge whether ‘correct’ eye movements occurred: eye movements were considered correct in local correspondence when they moved from one position to another in the correct direction within a certain time interval; eye movements were considered correct in global correspondence when moving from one position to another and finishing in a position that was spatially correct relative to the whole eye tracking pattern of the informant (for a detailed description of our method, cf. Johansson et al. 2005, 2006). We tested the number of correct eye movements for significance against the number of correct movements expected by chance. Our results were significant both in the local correspondence coding (74.8 percent correct eye movements, p = 0.0015) and in the global correspondence coding (54.9 percent correct eye movements, p = 0.0051). The results suggest that informants visualise the spatial configuration of the scene as a support for their descriptions from memory. The effect we measured is strong. More than half of all picture elements mentioned had correct eye movements, according to the conservative global correspondence criteria. Allowing for re-centring and resizing of the image – as with local correspondence – makes almost three quarters of all picture elements have correct eye movements. Our data indicate that eye movements are driven by the mental record of the object positions and that spatial locations are to a high degree preserved when describing a complex picture from memory.

Note: The effect was equally strong for verbal elicitations (when the informants listened to a verbal, pre-recorded scene description instead of viewing a picture), and could also be found in complete darkness (cf. Johansson et al. 2006).

Despite the fact that the majority of the subjects had appropriate imagery patterns, we found no correlation between the subjects’ ratings of their own visualisations and the degree of correct eye movements, neither for the viewing phase nor for the retelling phase. The subjects’ assessments of whether they usually think in words or pictures were proportionally distributed across four possibilities: (a) words, (b) pictures, (c) a combination of words and pictures, (d) no guess. Again, a simple correlation analysis showed no correlation between these assessments and the degree of correct eye movements, neither for the viewing nor for the retelling phase. One possible interpretation might be that people in general are not aware of which mental modality they are thinking in.
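The comparison against chance reported earlier in this section can be illustrated with a simple one-tailed binomial computation. This is only a sketch of the kind of test involved; the chance baseline and the counts below are placeholders, not the values estimated in Johansson et al. (2005, 2006).

```python
from math import comb

def p_at_least(n: int, k: int, p_chance: float) -> float:
    """One-tailed probability of at least k correct eye movements out of n,
    if each movement were correct with probability p_chance."""
    return sum(comb(n, i) * p_chance**i * (1 - p_chance)**(n - i)
               for i in range(k, n + 1))

# Placeholder numbers: 120 scored objects, 66 correct (55 percent),
# against a hypothetical chance level of 40 percent.
print(p_at_least(120, 66, 0.40))  # a small p indicates above-chance performance
```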
Figure 8. One and the same informant: viewing phase (A) and description phase (B).
Overall, there was a good similarity between data from the viewing and the description phases, as can be seen in Figure 8. According to Kosslyn (1994), distance, location and orientation of the mental image can be represented in the visual buffer, and it is possible to shift attention to certain parts or aspects of it. Laeng and Teodorescu (2002) interpret their results as a confirmation that eye movements play a functional role during image generation. Mast and Kosslyn (2002) propose, similarly to Hebb (1968), that eye movements are stored as spatial indexes that are used to arrange the parts of the image correctly. Our results can be interpreted as further evidence that eye movements play a functional role in visual mental imagery and that eye movements are indeed stored as spatial indexes used to arrange the different parts correctly when a mental image is generated. There are, however, alternative interpretations. Researchers within the ‘embodied’ view claim that instead of relying on a mental image, we use features in the external environment. An imagined scene can then be projected onto those external features, and any storing of the whole scene internally would be unnecessary. Ballard et al. (1996, 1997) suggest that informants leave behind ‘deictic pointers’ to locations of the scene in the environment, which may later be perceptually accessed when needed. Pylyshyn (2001) has developed a somewhat similar approach to support propositional representations and speaks of ‘visual indices’ (cf. also Spivey et al. 2004). Another alternative account is the ‘perceptual activity theory’, which suggests that instead of storing images, we store a continually updated and refined set of procedures or schemas that specify how to direct our attention in different situations (Thomas 1999). In this view, a perceptual experience consists of an
ongoing, schema-guided perceptual exploration of the environment. Imagery is then the re-enactment of the specific exploratory perceptual behaviour that would be appropriate for exploring the imagined object as if it were actually present. In short, mental imagery seems to play an important role also for the speakers involved in descriptive discourse. Our results support the hypothesis that mental imagery and mental scanning are used as an aid in recalling picture elements, especially when describing their visual and spatial properties from memory.
3. Discussion: Users’ interaction with multiple external representations
Relations between mental imagery and external visualisations are important for our communication and our interaction with the external world (Hegarty 2004). The issue of usability is tightly connected to the extent to which external representations correspond to (mirror) mental visualisations. In this respect, a combination of eye movement protocols and spoken language protocols can be used to assess users’ interaction with multiple representations and new media (Holmqvist et al. 2003; Holsanova & Holmqvist 2004). In formats such as newspapers, netpapers, instruction materials or textbooks, containing texts, photos, drawings, maps, diagrams and graphics, it is important that the message is structured in a coherent way, so that the user has no difficulty finding information, navigating, conceptualising and processing information from different sources and integrating it with her own visualisation and experience (Holsanova et al., forthc.; Scheiter et al., submitted). Recent eye tracking studies on text-picture integration have shown that a coherent, conceptually organised format supports readers’ navigation and facilitates reading and comprehension (Holmqvist et al. 2006). Also, a format with spatial proximity between text and graphics facilitates integration and prolongs reading (Holsanova et al., forthc.; Holmqvist et al. 2006), even if we find individual differences concerning the ability to integrate multiple representations (Hannus & Hyönä 1999). Users’ anticipations, attitudes and problems when interacting with these formats can be revealed by a combination of eye movement data and (retrospective) verbal protocols.
4. Conclusion
This chapter has dealt with mental imagery and external visualisations in connection with descriptive discourse. As we have seen, in a naturally occurring conversation, external visualisations help the partners to achieve a joint focus of attention and to coordinate and adjust their mental images during meaning-making. External visual representations such as drawings are central to learning and reasoning processes. They can be manipulated, changed and are subject to negotiations. The partners can work with patterns and exemplars standing for abstract concepts. Apart from the spatial domain, drawings can be used for other conceptual domains, e.g. the non-spatial domain (time, money), the abstract domain (contrast, intensity, quality), the dynamic domain (stages in a process), etc. However, as we have seen, mental imagery and inner visualisations of different kinds are also important for the speakers themselves. In a study on picture viewing, picture description and mental imagery, a significant similarity was found between (a) the eye movement patterns during picture viewing and (b) those produced during picture description (when the informants were looking at a white screen). The eye movements closely reflected the content and the spatial relations of the original picture, suggesting that the informants created some sort of mental image as an aid for their descriptions from memory. Apart from that, even verbal descriptions engaged mental imagery and elicited eye movements that reflected spatiality (Johansson et al. 2005, 2006). Mental imagery and mental models are useful for educational methods and learning strategies. In the area of visuo-spatial learning and problem-solving, it is recommended to use those external spatial-analogical representations (charts, geographical layouts, diagrams, etc.) that closely correspond to the users’ mental models. Our ability to picture something mentally is also relevant for design and human-computer interaction, since humans interact with systems and objects based on how they believe that the system works or the objects should be used. The issue of usability is thus tightly connected to the extent to which external representations correspond to our visualisations of how things function. For a user’s interaction with a format containing multiple representations (texts, photos, drawings, maps, diagrams and graphics) it is important that the message is structured in a coherent way, so that the user has no difficulty conceptualising, processing and integrating information from different sources with her own visualisation and experience (Holsanova et al., forthc.). However, we
find individual differences concerning the ability to integrate multiple representations. Hannus & Hyönä (1999) showed in their eye tracking study that illustrations in textbooks were of benefit to high-ability pupils but not to low-ability pupils, who were not able to connect the illustrations to the proper parts of the text. Finally, mental models are important for language comprehension: Kintsch & van Dijk (1983) use the term situation models and show the relevance of mental models for the production and comprehension of discourse (see also Zwaan & Radvansky 1998; Kintsch 1988). It has been found that discourse comprehension is in many ways associated with the construction of mental models that involves visuo-spatial reasoning (Holmqvist & Holsanova 1996).
chapter 9
Concluding chapter
I hope that, by now, the reader can see the advantages of combining discourse analysis with cognitively oriented research and eye movement tracking. I also hope that I have convincingly shown how spoken descriptive discourse and eye movement measurements can, in concert, elucidate covert mental processes. This concluding chapter looks back on the most important issues and findings in the book and mentions some implications of the multimodal approach for other fields of research. The way speakers segment discourse and create global and local transitions reflects a certain ‘cognitive rhythm’ in discourse production. The flow of speech reflects the flow of thoughts. In Chapter 1, I defined the most important units of spoken descriptive discourse that reflect human attention: verbal focus and verbal superfocus. I showed that listeners’ intuition about discourse boundaries and discourse segmentation is facilitated when the interplay of prosodic and acoustic criteria is further confirmed by semantic criteria and lexical markers. Also, it is easier for listeners to identify boundaries at the higher levels of discourse, such as the superfocus. The way we create meaning from our experience and describe it to others can be understood in connection with our general communicative ability: we partly talk about WHAT we experienced by selecting certain referents, states and events and by grouping and organising them in a certain way, but we also express our attitudes and relate to HOW these referents, states and events appeared to us. A taxonomy of foci reflecting these different categorising and interpreting activities that the speakers are involved in was developed in Chapter 2. Seven different types of foci have been identified, serving three main discourse functions. Substantive, summarising and localising foci are typically used for the presentation of picture contents, attitudinal meaning is expressed in evaluative and expert foci, and a group of interpersonal, introspective and meta-textual foci serves the regulatory and organising function. This taxonomy of foci could be generalised to different settings, but the distribution of foci varied. For instance, the summarising foci dominated in a setting where the picture was described from memory, whereas substantive foci dominated in simultaneous descriptions in a narrative setting. When spatial
aspects of the scene were focused on, the proportion of localising foci was significantly higher. Furthermore, an interactive setting promoted a high proportion of evaluative and expert foci in a situation where the informants were expressing their attitudes towards the picture content, making judgements about the picture as a whole, about properties of the picture elements and about relations between picture elements. Introspective and meta-textual foci were also more frequent in a situation where the listener was present. In spoken descriptions, we can only focus our attention on one particular aspect at a time, and the information flow is divided into small units of speech. These segments are either linear or embedded. It happens that we make a digression – a step aside from the main track of our thoughts – and spend some time on comments, but we usually succeed in coming back to the main track, finishing the previous topic and starting on a new one. Sometimes, we must mentally reorient at transitions between segments. In the process of meaning making, both speakers and listeners try to connect these units into a coherent whole. In Chapter 3, I showed how speakers connect the subsequent steps in their description and thereby create discourse coherence. Discourse markers reveal the structuring of the speech, introduce smaller and larger steps in the description and mark the linear (paratactic) and the embedded (hypotactic) segments in discourse. Also, there are different degrees of mental distance between steps of the description, reflected in discontinuities at the transitions between foci and superfoci. This phenomenon has been interpreted in terms of the internal or external ‘worlds’ the speaker moves between. The largest hesitations and the longest pauses were found at the transitions where the speaker steps out of the description and turns to the meta-textual and interactional aspects. An analysis of a spontaneous description with drawing, where the speaker is trying to achieve a certain visualisation effect for the listeners, has shown that the interlocutors – despite the complexity of the hierarchical structure –

– can retain nominal and pronominal references for quite a long time
– can simultaneously focus on a higher and a lower level of abstraction
– can handle multiple discourse-mediated representations of visually present and mentally imagined objects
– can attend to the same objects with another idea in mind.

The dissociation between the visual and mental representations as well as the simultaneous handling of multiple discourse-mediated representations on different levels of abstraction is made possible (a) by the partners’ switching between active and semiactive information, (b) by joint attention and (c) by the
use of mutual visual access (e.g. by observing each other’s pointing, gazing and drawing). The drawing – as an external visual representation – fulfils many functions in addition to the spoken discourse: it functions as referent storage, as an external memory aid for the interlocutors, as a basis for the visualisation of imaginary events and scenarios, and as a representation of the whole topic of the conversation. The fact that partners in a conversation can handle multiple discourse-mediated representations of visually present and mentally imagined objects and scenarios contributes to the theory of mind. Different individuals focus on different aspects in their picture descriptions. In Chapter 4, I characterised and exemplified the two dominant styles found in the data and discussed various cognitive, experiential and contextual factors that might have given rise to these styles. Whereas attending to spatial relations is dominant in the static description style, where the picture is decomposed into fields that are then described systematically, with a variety of terms for spatial relations, attending to the flow of time is the dominant pattern in the dynamic description style, where the informants primarily focus on temporal relations and dynamic events in the picture, and talk about steps of a process, successive phases, and a certain order. The quality of the dynamic style is achieved by a frequent use of temporal verbs, temporal adverbs and motion verbs in the active voice. Discourse markers are often used to focus and refocus on the picture elements, and to interconnect them. Apart from that, the informants seem to follow a narrative schema: the descriptions start with an introduction of the main characters, their involvement in various activities and a description of the scene. The extracted description styles are further discussed in the framework of studies on individual differences and remembering. Connections are drawn to studies about visual and verbal thinkers and spatial and iconic visualisers. I also showed that spatial and narrative priming have effects on the description style. Spatial priming leads to a larger number of localisations, significantly fewer temporal expressions and significantly shorter static descriptions, whereas narrative priming mostly enhances the temporal dynamics of the description. The first four chapters focused on various characteristics of picture descriptions in different settings and built up a basis for a broader comparison between picture description, picture viewing and mental imagery presented in Chapters 5–8. The multimodal method and the analytical tool, the multimodal time-coded score sheet, were introduced in Chapter 5. Complex ideas formulated in the course of descriptive discourse were synchronised with fixation patterns
during visual inspection of the complex picture. Verbal and visual data have been used as two windows to the mind. With the help of this method, I was able to synchronise visual and verbal behaviour over time, follow and compare the content of the attentional spotlights on different discourse levels, and extract clusters in the visual and verbal flow. The method was used to study temporal and semantic correspondence between verbal and visual data in Chapters 6 and 7. By incorporating different types of foci and superfoci in the analysis, we can follow the eye gaze patterns during specific mental activities. Chapter 6 focused on temporal relations. From a free description of a complex scene, I extracted configurations from verbal and visual data on the focus and superfocus levels. As a result, I found complex patterns of eye gaze and speech. The first question to be answered concerned temporal simultaneity between the visual and verbal signals. I found that the verbal and the visual signals were not always simultaneous. The visual focus was often ahead of speech production. This latency was due to conceptualisation, planning and formulation of a free picture description on a discourse level, which affected the speech-to-gaze alignment and prolonged the eye-voice latency. Visual focus could, however, also follow speech (i.e. a visual fixation cluster on an object could appear after the describer had mentioned it). In these cases, the describer was probably monitoring and checking his statement against the visual account. In some instances, there was temporal simultaneity between the verbal and visual signals but no semantic correspondence (when informants – during a current verbal focus – directed preparatory glances towards objects to be described later on). The second question concerned the order of the objects focused on visually and verbally. The empirical results showed that the order in which objects were fixated may, but need not, be the same as the order in which the objects were introduced within the verbal focus or superfocus. For instance, some of the inspected objects were not mentioned at all in the verbal description, and some of them were not labelled as discrete entities but instead included later, on a higher level of abstraction. In the course of one verbal focus, preparatory glances were cast and visual fixation clusters landed on ‘new’ objects long before these were described verbally. Also, areas and objects were frequently re-examined, and a re-fixation on one and the same object could be associated with different ideas.
Third, I investigated whether we can find a comparable unit in visual and verbal data. Often, several visual fixations were connected to one verbal focus, and multiple visual foci were intertwined and well integrated into a larger unit. I have therefore suggested that attentional superfoci, rather than foci, are the suitable units of comparison between verbal and visual data. In connection with the traditional units of speech discussed in the literature, the empirical evidence suggests that the superfocus, expressed in longer utterances (and similar larger units of discourse), plays a crucial role as a unit of information processing. A superfocus is easier to identify and is both cognitively and communicatively relevant. Also, I was able to demonstrate the lag between the awareness of an idea and the verbalisation of that idea: starting from a complex visual scene, free description and embedding in the discourse context, the latency between a visual focus and a verbal focus was much longer (2–3 sec) than the latencies reported in psycholinguistic naming studies. Furthermore, functional clusters typical of different mental activities were found during free descriptions of complex pictures, in particular in summarising foci, localising foci, lists of items and substantive foci connected to categorisation difficulties. For example, for localising foci, the mental activities seem to be divided into (a) a preparatory phase, when most of the categorisation and interpretation work is done and the inspected object functions as a pars pro toto representing the location, and (b) a formulation phase, when the object is re-fixated and named. These tendencies towards recurring multimodal integration patterns offer many interesting application possibilities in interaction with multimodal interactive systems and in learning. Chapter 7 focused on semantic correspondence between verbal and visual data. I found that the semantic relation between the objects focused on visually and described verbally was often implicit or inferred. It varied between object-object, object-location, object-path, object-activity and object-attribute. The combination of visual and verbal data shows that objects are focused on and conceptualised at different levels of specificity, objects’ locations and attributes are evaluated, metaphors and similes are used for comparison, and objects’ activities are described. All this involves interpretation and active processing. We have witnessed a process of stepwise specification, evaluation, interpretation and even re-conceptualisation of picture elements and of the picture as a whole. The eye movement protocol and the verbal description protocol have been used as indirect sources of evidence about our mental processes. We saw a gradual dissociation
between the visual representations in the scene and the discourse-mediated mental representations built up in the course of the description. During their successive picture discovery, informants described not only scene-inherent objects with spatial proximity but also clustered elements distributed across the scene and created new mental groupings based on abstract concepts. The process of mental zooming in and out could be documented, where concrete objects were re-fixated and viewed with another concept in mind. The comparison of visual and verbal foci in the process of picture viewing and picture description shows how language and vision meet through time, and the units extracted from the empirical data give us hints about the ways in which information is acquired and processed in the human mind. A priming study showed that the eye movement patterns connected to abstract concepts were not restricted to the process of planning and structuring a verbal description of the scene but were rather connected to the scene semantics. Similar viewing patterns could be elicited by using utterances from the original picture description even with a group of listeners. By studying potentially meaningful sequences or combinations of eye movements on complex scenes and simultaneous picture descriptions on a discourse level, I was able to extend the list of eye movement functions: counting-like gazes, ideational pointing gazes, gazes in message planning, gazes reflecting categorisation difficulties, monitoring gazes, comparative gazes, re-categorising gazes, summarising gazes, etc. The multimodal clusters extracted from verbal and visual foci illustrate how humans connect vision and speech while engaged in certain activities. Finally, the empirical results have consequences for questions about discourse production and planning: Does the thought always come before speech? Do we plan our speech globally, beforehand? Or do we plan locally, on an associative basis? Do we monitor and check our formulations afterwards? In the light of my data from free descriptive discourse, not every focus and superfocus is planned in the same way and to the same extent. I found (a) evidence for conceptualisation and advance planning on a discourse level, in particular in sequences where the latency between the visual examination and the speech production is very long. I also found evidence for (b) more associative processes, in particular in sequences where the speakers start executing their description before they have counted all the instances or planned the details of the whole utterance production. Finally, I found evidence for (c) monitoring activities, where the speakers expressed a concrete or abstract concept
and, afterwards, checked it against the visual encounter. I therefore agree with Linell’s (1982) view that the communicative intentions may be partly imprecise or vague from the start and become gradually more structured, enriched, precise and conscious through the verbalisation process. Chapter 8 was concerned with the role of mental imagery and external visualisations in descriptive discourse. In a naturally occurring conversation, external visualisations help the partners to achieve a joint focus of attention and to coordinate and adjust their mental images during meaning-making. However, as we have seen, inner visualisations and, in particular, mental images, are also important for the speakers themselves. In a study of picture viewing, picture description and mental imagery, a significant similarity was found between (a) the eye movement patterns during picture viewing and (b) those produced during picture description (when the picture was removed and the informants were looking at a white screen). The eye movements closely reflected the content and the spatial relations of the original picture, suggesting that the informants created a sort of mental image as an aid for their descriptions from memory. Eye movements were thus not dependent on a present visual scene but on a mental record of the scene. In addition, even verbal scene descriptions evoked mental images and elicited eye movements that reflected spatiality. Let me finally mention some implications for other fields of research. The multimodal method and the integration patterns discovered can be applied for different purposes. It is currently being implemented in a project on on-line written picture descriptions, where we analyse the verbal and visual flow to get an enhanced picture of the writer’s attention processes (Andersson et al. 2006). Apart from that, there are many interesting applications of integration patterns within the evaluation of design, interaction with multimodal interactive systems and learning. The integration patterns discovered in our visual and verbal behaviour can contribute to the development of a new generation of multimodal interactive systems (Oviatt 1999). In addition, we would be able to make a diagnosis of the current user activity and predictions about users’ next moves, choices and decisions (Bertel 2007). In consequence, we could use this information on-line, to support users’ individual problem-solving strategies and preferred ways of interacting. The advantages of using a multimodal method are threefold: it gives more detailed answers about cognitive processes and the ongoing creation of meaningful units, it reveals the rationality behind the informants’ behaviour (how they behave and why, what
expectations and associations they have) and it gives us insights into users’ attitudes towards different solutions (what is good or bad, what is easy or difficult, etc.). In short, the sequential multimodal method can be successfully used for a dynamic analysis of perception and action in general.
References
Aijmer, K. (1988). ‘Now may we have a word on this.’ The use of ‘now’ as a discourse particle. In M. Kytö, O. Ihalainen & M. Rissanen (Eds.), Corpus Linguistics, Hard and Soft. Proceedings of the Eighth International Conference on English Language Research on Computerized Corpora, 15–33.
Aijmer, K. (2002). English Discourse Particles. Evidence from a corpus. Studies in Corpus Linguistics. John Benjamins: Amsterdam.
Allport, A. (1989). Visual attention. In Posner, M. I. (Ed.), Foundations of Cognitive Science. Cambridge, MA: MIT Press, 631–682.
Allwood, J. (1996). On Wallace Chafe’s ‘How consciousness shapes language’. Pragmatics & Cognition, 4(1), 1996. Special issue on language and consciousness, 55–64.
Andersson, B., Dahl, J., Holmqvist, K., Holsanova, J., Johansson, V., Karlsson, H., Strömqvist, S., Tufvesson, S. & Wengelin, Å. (2006). Combining keystroke logging with eye tracking. In Luuk Van Waes, Marielle Leiten & Chris Neuwirth (Eds.), Writing and Digital Media. Elsevier BV (North Holland), 166–172.
Arnheim, R. (1969). Visual Thinking. Berkeley, CA: University of California Press.
Baddeley, A. & Lieberman, K. (1980). Spatial working memory. In R. Nickerson (Ed.), Attention and performance (Vol. VIII, pp. 521–539). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Baddeley, A. (1992). Is working memory working? The fifteenth Bartlett lecture. The Quarterly Journal of Experimental Psychology, 44A, 1–31.
Ballard, D. H., Hayhoe, M. M., Pook, P. K. & Rao, R. P. N. (1997). Deictic codes for the embodiment of cognition. Behavioral and Brain Sciences, 20, 1311–1328.
Ballard, D. H., Hayhoe, M. M., Pook, P. K. & Rao, R. P. N. (1996). Deictic Codes for the Embodiment of Cognition. CUP: Cambridge.
Bangerter, A. & Clark, H. H. (2003). Navigating joint projects with dialogue. Cognitive Science, 27, 195–225.
Barsalou, L. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577–660.
Bartlett, F. C. (1932, reprinted 1997). Remembering. Cambridge: Cambridge University Press.
Beattie, G. (1980). Encoding units in spontaneous speech. In H. W. Dechert & M. Raupach (Eds.), Temporal variables in speech, pp. 131–143. Mouton, The Hague.
Berlyne, D. E. (1971). Aesthetics and psychobiology. New York: Appleton-Century-Crofts.
Berman, R. A. & Slobin, D. I. (1994). Relating events in narrative. A crosslinguistic developmental study. Hillsdale, New Jersey: Lawrence Erlbaum.
Berséus, P. (2002). Eye movement in prima vista singing and vocal text reading. Master paper in Cognitive Science, Lund University. http://www.sol.lu.se/humlab/eyetracking/Studentpapers/PerBerseus.pdf
Bertel, S. (2007). Towards attention-guided human-computer collaborative reasoning for spatial configuration and design. In Foundations of Augmented Cognition (Proceedings of HCI International 2007, Beijing), pp. 337–345, Lecture Notes in Computer Science. Springer, Berlin.
Blumenthal, A. (1970). Language and Psychology. Historical aspects of psycholinguistics. New York: John Wiley.
Bock, J. K. & Levelt, W. J. M. (1994). Language production: Grammatical encoding. In M. A. Gernsbacher (Ed.), Handbook of psycholinguistics (pp. 945–984). San Diego: Academic Press.
Bock, K., Irwin, D. E. & Davidson, D. J. (2004). Putting First Things First. In J. M. Henderson & F. Ferreira (Eds.), The Interface of Language, Vision, and Action: Eye Movements and the Visual World. New York: Psychology Press.
Braarud, P. Ø., Drøivoldsmo, A. & Hollnagel, E. (1997). Human Error Analysis Project (HEAP) – The fourth pilot study: Verbal data for analysis of operator performance. HWR–495. Halden.
Brandt, S. A. & Stark, L. W. (1997). Spontaneous eye movements during visual imagery reflect the content of the visual scene. Journal of Cognitive Neuroscience, 9, 27–38.
Bruce, G. (1982). Textual aspects of prosody in Swedish. Phonetica, 39, 274–287.
Buswell, G. T. (1935). How people look at pictures. A study of the psychology of perception in art. Chicago: The University of Chicago Press.
Butterworth, B. (1975). Hesitation and semantic planning in speech. Journal of Psycholinguistic Research, 4, 75–87.
Butsch, R. L. C. (1932). Eye movements and the eye-hand span in typewriting. Journal of Educational Psychology, 23, 104–121.
Casad, E. H. (1995). Seeing it in more than one way. In John R. Taylor & Robert E. MacLaury (Eds.), Language and the cognitive construal of the world, 23–49. Trends in Linguistics. Studies and Monographs, 82. Berlin: Mouton de Gruyter.
Chafe, W. L. (1979). The Flow of Thought and the Flow of Language. In Givón, T. (Ed.), Syntax and Semantics, Vol. 12: Discourse and Syntax. Academic Press: New York, San Francisco, London, 159–181.
Chafe, W. L. (1980). The deployment of consciousness in the production of a narrative. In W. L. Chafe (Ed.), The Pear Stories: Cognitive, Cultural, and Linguistic Aspects of Narrative Production. Ablex: Norwood, NJ, 9–50.
Chafe, W. L. (1987). Cognitive Constraints on Information Flow. In Tomlin, Russell S. (Ed.), Coherence and Grounding in Discourse. Benjamins: Amsterdam/Philadelphia, 21–51.
Chafe, W. L. (1994). Discourse, Consciousness, and Time. The Flow and Displacement of Conscious Experience in Speaking and Writing. The University of Chicago Press: Chicago, London.
Chafe, W. L. (1995). Accessing the mind through language. In S. Allén (Ed.), Of thoughts and words. The relation between language and mind. Proceedings of Nobel Symposium 92. Imperial College Press: London, 107–125.
Chafe, W. L. (1996). How consciousness shapes language. Pragmatics & Cognition, 4(1), 1996. Special issue on language and consciousness, 35–54.
Clark, H. & Clark, E. (1977). Psychology and Language. Harcourt Brace Jovanovich, New York.
Clark, H. H. (1992). Arenas of Language Use. The University of Chicago Press: Chicago.
Clark, H. H. (1996). Using Language. Cambridge University Press: Cambridge.
Cooper, R. M. (1974). The control of eye fixations by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6, 84–107.
Crystal, D. (1975). The English tone of voice. London: St. Martin.
De Graef, P. (1992). Scene-context effects and models of real-world perception. In K. Rayner (Ed.), Eye movements and visual cognition: Scene perception and reading, 243–259. New York: Springer-Verlag.
Demarais, A. & Cohen, B. H. (1998). Evidence for image-scanning eye movements during transitive inference. Biological Psychology, 49, 229–247.
Diderichsen, Philip (2001). Visual Fixations, Attentional Detection, and Syntactic Perspective. An experimental investigation of the theoretical foundations of Russell Tomlin’s fish film design. Lund University Cognitive Studies 84.
Duchowski, Andrew T. (2003). Eye Tracking Methodology: Theory and Practice. Springer-Verlag, London, UK.
Engel, D., Bertel, S. & Barkowsky, T. (2005). Spatial Principles in Control of Focus in Reasoning with Mental Representations, Images, and Diagrams. Spatial Cognition IV, 181–203.
Ericsson, K. A. & Simon, H. A. (1980). Verbal Reports as Data. Psychological Review, 87, 215–251.
Findlay, J. M. & Walker, R. (1999). A model of saccadic eye movement generation based on parallel processing and competitive inhibition. Behavioral and Brain Sciences, 22, 661–674. Cambridge University Press.
Finke, R. A. & Shepard, R. N. (1986). Visual functions of mental imagery. In K. R. Boff, L. Kaufman & J. P. Thomas (Eds.), Handbook of perception and human performance. New York: Wiley.
Finke, R. A. (1989). Principles of Mental Imagery. Massachusetts Institute of Technology: Bradford Books.
Firbas, J. (1992). Functional Sentence Perspective in Written and Spoken Communication. Cambridge: Cambridge University Press.
Gärdenfors, P. (1996). Speaking about the inner environment. In S. Allén (Ed.), Of thoughts and words. The relation between language and mind. Proceedings of Nobel Symposium 92. Stockholm 1994. Imperial College Press, 143–151.
Gärdenfors, Peter (2000). Conceptual Spaces: The Geometry of Thought. Cambridge, MA: MIT Press.
Garrett, M. (1980). Levels of processing in sentence production. In Butterworth (1980), Language production. London: Academic Press, 177–220.
Garrett, M. (1975). The Analysis of Sentence Production. In Bower, G. (Ed.), Psychology of Learning and Motivation, Vol. 9. New York: Academic Press, 133–177.
Gedenryd, H. (1998). How designers work. Making sense of authentic cognitive activities. Lund University Cognitive Studies 75: Lund.
Gentner, D. & Stevens, A. L. (Eds.). (1983). Mental models. Hillsdale, NJ: Lawrence Erlbaum Associates.
Gernsbacher, M. A. & Givón, T. (Eds.). (1995). Coherence in spontaneous text. Amsterdam: Benjamins.
Givón, T. (Ed.) (1979). Syntax and Semantics, Vol. 12: Discourse and Syntax. Academic Press: New York, San Francisco, London.
Givón, T. (1990). Syntax: A Functional-Typological Introduction. Vol. 2. Amsterdam and Philadelphia: John Benjamins.
Goldman-Eisler, F. (1968). Psycholinguistics: Experiments in spontaneous speech. Academic Press, New York.
Goolsby, T. W. (1994). Eye movement in music reading: Effects of reading ability, notational complexity, and encounters. Music Perception, 12(1), 77–96.
Griffin, Z. & Bock, K. (2000). What the eyes say about speaking. Psychological Science, 11(4), 274–279.
Griffin, Z. M. (2004). Why look? Reasons for eye movements related to language production. In Henderson & Ferreira (Eds.), The integration of language, vision, and action: Eye movements and the visual world. New York: Psychology Press, 213–247.
Grosz, B. & Sidner, C. L. (1986). Attention, Intentions, and the Structure of Discourse. Computational Linguistics, 12(3), 175–204.
Grosz, B. J. (1981). Focusing and description in natural language dialogues. In A. Joshi, B. Webber & I. Sag (Eds.), Elements of discourse understanding. New York: Cambridge University Press.
Grow, G. (1996). The writing problems of visual thinkers. http://www-wane.scri.fsu.edu/ggrow/. An expanded version of a print article that appeared in the refereed journal Visible Language, 28.2, Spring 1994, 134–161.
Gullberg, M. (1999). Gestures in spatial descriptions. Working Papers 47, Department of Linguistics, Lund University, 87–98.
Gülich, E. (1970). Makrosyntax der Gliederungssignale im gesprochenen Französisch. Wilhelm Fink Verlag: München.
Halliday, M. A. K. (1970). A course in spoken English: Intonation. Oxford University Press.
Halliday, M. A. K. (1978). Language as social semiotic. Edward Arnold: London.
Halliday, M. A. K. (1985). An Introduction to Functional Grammar. London: Edward Arnold.
Hannus, M. & Hyönä, J. (1999). Utilization of Illustrations during Learning of Science Textbook Passages among Low- and High-Ability Children. Contemporary Educational Psychology, 24(2), April 1999, 95–123. Academic Press.
Hauland, G. & Hallbert, B. (1995). Relations between visual activity and verbalised problem solving: A preliminary study. In Norros, L. (Ed.), VTT Symposium 158, the 5th European Conference on Cognitive Science Approaches to Process Control, Espoo, Finland, 99–110.
Hauland, G. (1996). Building a methodology for studying cognition in process control: A semantic analysis of visual and verbal behaviour. NTNU, Norway.
Haviland, S. E. & Clark, H. H. (1974). What’s new? Acquiring New Information as a Process in Comprehension. Journal of Verbal Learning and Verbal Behaviour, 13, 512–521.
Hayhoe, M. & Ballard, D. (2005). Eye movements in natural behavior. TRENDS in Cognitive Sciences, 9(4), Elsevier Ltd., 188–193.
Hayhoe, M. M. (2004). Advances in relating eye movements and cognition. Infancy, 6(2), 267–274.
Hebb, D. O. (1968). Concerning imagery. Psychological Review, 75, 466–477.
Hegarty, M. (1992). Mental animation: Inferring motion from static displays of mechanical systems. Journal of Experimental Psychology: Learning, Memory and Cognition, 18, 1084–1102.
Hegarty, M. (2004). Diagrams in the mind and in the world: Relations between internal and external visualizations. In A. Blackwell, K. Mariott & A. Shimojima (Eds.), Diagrammatic Representation and Inference. Lecture Notes in Artificial Intelligence 2980 (1–13). Berlin: Springer-Verlag.
Hegarty, M. & Waller, D. (2004). A dissociation between mental rotation and perspective-taking spatial abilities. Intelligence, 32, 175–191.
Hegarty, M. & Waller, D. (2006). Individual differences in spatial abilities. In P. Shah & A. Miyake (Eds.), Handbook of Visuospatial Thinking. Cambridge University Press.
Henderson, J. M. & Hollingworth, A. (1998). Eye Movements during Scene Viewing. An Overview. In Underwood, G. W. (Ed.), Eye Guidance in Reading and Scene Perception, 269–293. Oxford: Elsevier.
Henderson, J. M. & Hollingworth, A. (1999). High Level Scene Perception. Annual Review of Psychology, 50, 243–271.
Henderson, J. M. (1992). Visual attention and eye movement control during reading and picture viewing. In K. Rayner (Ed.), Eye Movements and Visual Cognition. New York: Springer-Verlag.
Henderson, J. M. & Ferreira, F. (Eds.). (2004). The integration of language, vision, and action: Eye movements and the visual world. New York: Psychology Press.
Herskovits, A. (1986). Language and Spatial Cognition. An Interdisciplinary Study of the Prepositions in English. Cambridge University Press: Cambridge.
Hoffman, J. E. (1998). Visual Attention and Eye Movements. In Pashler, H. (Ed.), Attention. Psychology Press: UK, 119–153.
Holmqvist, K., Holmberg, N., Holsanova, J., Tärning, J. & Engwall, B. (2006). Reading Information Graphics – Eyetracking studies with Experimental Conditions. In J. Errea (Ed.), Malofiej Yearbook of Infographics, Society for News Design (SND-E). Navarra University, Pamplona, Spain, pp. 54–61.
Holmqvist, K. & Holsanova, J. (1997). Reconstruction of focus movements in spoken discourse. In Liebert, W., Redeker, G. & Waugh, L. (Eds.), Discourse and Perspective in Cognitive Linguistics. Benjamins: Amsterdam, 223–246.
Holmqvist, K. (1993). Implementing Cognitive Semantics. Image schemata, valence accommodation and valence suggestion for AI and computational linguistics. Lund University Cognitive Studies 17.
Holmqvist, K., Holsanova, J., Barthelson, M. & Lundqvist, D. (2003). Reading or scanning? A study of newspaper and net paper reading. In Hyönä, J., Radach, R. & Deubel, H. (Eds.), The mind’s eye: Cognitive and applied aspects of eye movement research (657–670). Elsevier Science Ltd.
Holsanova, J., Holmberg, N. & Holmqvist, K. (forthc.). Integration of Text and Information Graphics in Newspaper Reading. Lund University Cognitive Studies 125.
Holsanova, J. & Holmqvist, K. (2004). Med blick på nätnyheter. Ögonrörelsestudier av läsning i nätbaserade tidningar. (Looking at the net news. Eye tracking study of net paper reading.) In Holmberg, Claes-Göran & Svensson, Jan (Eds.), Mediekulturer, Hybrider och Förvandlingar. Carlsson förlag, 216–248.
Holsanova, J. & Koryčanská, R. (1987). Die Rolle des Erwachsenen bei der Aneignung des Kommunikationstyps Erzählen und Zuhören durch Vorschulkinder. In W. Bahner, J. Schildt & D. Viehweger (Hrsg.), Proceedings of the XIVth International Congress of Linguistics, B. 2. Akademie-Verlag: Berlin, 1802–1805.
Holsanova, J. (1986). Was ist Erzählen? Versuche zur Rekonstruktion des Alltagsbegriffs von Erzählen. Unveröffentlichte Diplomarbeit. Humboldt-Universität zu Berlin.
Holsanova, J. (1989). Dialogische Aspekte des Erzählens in der Alltagskommunikation. In Kořenský, J. & Hartung, W. (Hrsg.), Gesprochene und geschriebene Kommunikation. Linguistica XVIII. Ústav pro jazyk český: Praha, 65–80.
Holsanova, J. (1996). Attention movements in language and vision. In Representations and Processes between Vision and NL, Proceedings of the 12th European Conference on Artificial Intelligence, Budapest, Hungary, 81–83.
Holsanova, J. (1997a). Bildbeskrivning ur ett kommunikativt och kognitivt perspektiv. LUCS Minor 6. Lund University.
Holsanova, J. (1997b). Verbal or Visual Thinkers? Different ways of orienting in a complex picture. In Proceedings of the European Conference on Cognitive Science, Manchester 1997, 32–37.
Holsanova, J. (1998). Att byta röst och rädda ansiktet. Citatanvändning i samtal om ‘de andra’. (Changing voice and saving face. The use of quotations in conversations about ‘the others’.) Språk och Stil, 8 (1998), 105–133.
Holsanova, J. (1999a). På tal om bilder. Om fokusering av uppmärksamhet i och strukturering av talad beskrivande diskurs. (Speaking of pictures. Focusing attention and structuring spoken descriptive discourse in monological and dialogical settings.) Lund University Cognitive Studies 78.
Holsanova, J. (1999b). Olika perspektiv på språk, bild och deras samspel. Metodologiska reflexioner. (Language, picture and their interplay seen from different perspectives. Methodological considerations.) In Haskå, I. & Sandqvist, C. (Eds.), Alla tiders språk. Lundastudier i nordisk språkvetenskap A 55. Lund University Press: Lund, 117–126.
Holsanova, J. (2001). Picture Viewing and Picture Description: Two Windows on the Mind. Doctoral dissertation. Lund University Cognitive Studies 83: Lund, Sweden.
Holsanova, J. (2006). Dynamics of picture viewing and picture description. In Albertazzi, L. (Ed.), Visual thought. The depictive space of the mind. Part Three: Bridging perception and depiction of visual spaces. Advances in Consciousness Research. Benjamins, 233–254.
Holsanova, J., Hedberg, B. & Nilsson, N. (1998). Visual and Verbal Focus Patterns when Describing Pictures. In Becker, Deubel & Mergner (Eds.), Current Oculomotor Research: Physiological and Psychological Aspects. Plenum: New York, London, Moscow.
Holsanova, J., Rahm, H. & Holmqvist, K. (2006). Entry points and reading paths on the newspaper spread: Comparing semiotic analysis with eye-tracking measurements. Visual Communication, 5(1), 65–93.
Horne, M., Hansson, P., Bruce, G., Frid, J. & Filipson, M. (1999). Discourse Markers and the Segmentation of Spontaneous Speech: The case of Swedish men ‘but/and/so’. Working Papers 47, 123–140. Dept. of Linguistics, Lund University.
Huber, S. & Kirst, H. (2004). When is the ball going to hit the ground? Duration estimates, eye movements, and mental imagery of object motion. Journal of Experimental Psychology: Human Perception and Performance, 30(3), 431–444.
Inhoff, A. W. & Gordon, A. M. (1997). Eye Movements and Eye-Hand Coordination During Typing. Current Directions in Psychological Science, 6(6), 153–157. American Psychological Society: Cambridge University Press.
Johansson, R., Holsanova, J. & Holmqvist, K. (2005). What Do Eye Movements Reveal About Mental Imagery? Evidence From Visual And Verbal Elicitations. In Bara, B. G., Barsalou, L. & Bucciarelli, M. (Eds.), Proceedings of the 27th Annual Conference of the Cognitive Science Society, pp. 1054–1059. Mahwah, NJ: Erlbaum.
Johansson, R., Holsanova, J. & Holmqvist, K. (2006). Pictures and spoken descriptions elicit similar eye movements during mental imagery, both in light and in complete darkness. Cognitive Science, 30(6), 1053–1079. Lawrence Erlbaum.
Johansson, R., Holsanova, J. & Holmqvist, K. (2005). Spatial frames of reference in an interactive setting. In Tenbrink, Bateman & Coventry (Eds.), Proceedings of the Workshop on Spatial Language and Dialogue, Hanse-Wissenschaftskolleg, Delmenhorst, Germany, October 23–25, 2005.
Johnson-Laird, P. N. (1983). Comprehension as the Construction of Mental Models. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 295(1077), 353–374.
Jonassen, D. & Grabowski, B. (1993). Handbook of individual differences, learning, and instruction. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Juola, J. F., Bouwhuis, D. G., Cooper, E. E. & Warner, C. B. (1991). Control of Attention around the Fovea. Journal of Experimental Psychology: Human Perception and Performance, 17(1), 125–141.
Just, M. A. & Carpenter, P. A. (1976). Eye fixations and cognitive processes. Cognitive Psychology, 8, 441–480.
Just, M. A. & Carpenter, P. A. (1980). A theory of reading: From eye fixations to comprehension. Psychological Review, 87, 329–354.
Kahneman, D. (1973). Attention and Effort. Prentice Hall, Inc.: Englewood Cliffs, New Jersey.
Kendon, A. (1980). Gesticulation and speech: Two aspects of the process of utterance. In Key, M. (Ed.), The Relationship of Verbal and Nonverbal Communication. Mouton: The Hague, 207–227.
Kess, J. F. (1992). Psycholinguistics. Psychology, Linguistics and the Study of Natural Language. Benjamins: Amsterdam/Philadelphia.
Kintsch, W. & van Dijk, T. A. (1983). Strategies of discourse comprehension. New York: Academic.
Kintsch, W. (1988). The role of knowledge in discourse comprehension: A construction-integration model. Psychological Review, 95, 163–182.
Kita, S. (1990). The temporal relationship between gesture and speech: A study of Japanese-English bilinguals. Unpublished master’s thesis. Department of Psychology, University of Chicago.
Kita, S. & Özyürek, A. (2003). What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language, 48, 16–32.
Kiyoshi, Naito, Katoh, Takaaki & Fukuda, Tadahiko (2004). Expertise and position of line of sight in golf putting. Perceptual and Motor Skills, 99, 163–170.
Korolija, N. (1998). Episodes in Talk: Constructing coherence in multiparty conversation. Linköping Studies in Arts and Science 171. Linköping University.
Kosslyn, S. (1978). Measuring the visual angle of the mind’s eye. Cognitive Psychology, 10, 356–389.
Kosslyn, S. (1980). Image and Mind. Cambridge, MA/London: Harvard University Press.
Kosslyn, S. (1994). Image and Brain. Cambridge, MA: The MIT Press.
Kosslyn, S. M. (1995). Mental imagery. In Kosslyn, S. M. & Osherson, D. N. (Eds.), Visual Cognition: An Invitation to Cognitive Science (Vol. 2, pp. 267–296). Cambridge, MA: MIT Press.
Kowler, E. (1996). Cogito Ergo Moveo: Cognitive Control of Eye Movement. In Landy, M. S., Maloney, L. T. & Paul, M. (Eds.), Exploratory Vision: The Active Eye, 51–77.
Kozhevnikov, M., Hegarty, M. & Mayer, R. E. (2002). Revising the Visualizer-Verbalizer Dimension: Evidence for Two Types of Visualizers. Cognition and Instruction, 20(1), 47–77.
Krutetskii, V. A. (1976). The psychology of mathematical abilities in schoolchildren. Chicago: University of Chicago Press.
Labov, W. & Waletzky, J. (1973). Erzählanalyse: Mündliche Versionen persönlicher Erfahrung. In Ihwe, J. (Ed.), Literaturwissenschaft und Linguistik, Bd. 2. Frankfurt/M.: Fischer-Athenäum, 78–126.
Laeng, B. & Teodorescu, D.-S. (2002). Eye scanpaths during visual imagery reenact those of perception of the same visual scene. Cognitive Science, 26(2), 207–231.
Lahtinen, S. (2005). Which one do you prefer and why? Think aloud! In Proceedings of Joining Forces, International Conference on Design Research. UIAH, Helsinki, Finland.
Lakoff, G. (1987). Women, Fire, and Dangerous Things: What categories reveal about the mind. Chicago, IL: The University of Chicago Press.
Lakoff, G. & Johnson, M. (1980). Metaphors We Live By. Chicago: University of Chicago Press.
Lang, E., Carstensen, K.-U. & Simmons, G. (1991). Modelling Spatial Knowledge on a Linguistic Basis: Theory – Prototype – Integration (Lecture Notes in Artificial Intelligence 481). Berlin/Heidelberg/New York: Springer-Verlag.
Langacker, R. (1987). Foundations of Cognitive Grammar, Volume 1. Stanford: Stanford University Press.
Langacker, R. (1991). Foundations of Cognitive Grammar, Volume 2. Stanford: Stanford University Press.
Lemke, J. (1998). Multiplying meaning: Visual and verbal semiotics in scientific text. In Martin, J. & Veel, R. (Eds.), Reading Science. London: Routledge.
Levelt, W. J. M. (1981). The speaker’s linearization problem. Philosophical Transactions of the Royal Society of London B, 295, 305–315.
Levelt, W. J. M. (1983). Monitoring and self-repair in speech. Cognition, 14, 41–104.
Levelt, W. J. M. (1989). Speaking: From intention to articulation. Cambridge, MA: MIT Press, Bradford Books.
Lévy-Schoen, A. (1969). Détermination et latence de la réponse oculomotrice à deux stimulus. L’Année Psychologique, 69, 373–392.
Lévy-Schoen, A. (1974). Le champ d’activité du regard: données expérimentales. L’Année Psychologique, 74, 43–66.
Linde, C. (1979). Focus of attention and the choice of pronouns in discourse. In Givón, T. (Ed.), Syntax and Semantics, Volume 12: Discourse and Syntax. New York: Academic Press, 337–354.
Linde, C. & Labov, W. (1975). Spatial network as a site for the study of language and thought. Language, 51, 924–939.
Linell, P. (1982). Speech errors and the grammatical planning of utterances: Evidence from Swedish. In Koch, W., Platzack, C. & Totties, G. (Eds.), Textstrategier i tal och skrift. Stockholm: Almqvist & Wiksell International, 134–151.
Linell, P. (1994). Transkription av tal och samtal. Arbetsrapporter från Tema Kommunikation 1994: 9. Linköpings universitet.
Linell, P. (2005). En dialogisk grammatik? In Anward, J. & Nordberg, B. (Eds.), Samtal och grammatik. Studentlitteratur, 231–315.
Loftus, G. R. & Mackworth, N. H. (1978). Cognitive determinants of fixation location during picture viewing. Journal of Experimental Psychology: Human Perception and Performance, 4, 565–572.
Lucy, J. A. (1992). Language Diversity and Thought: A reformulation of the linguistic relativity hypothesis. Cambridge: Cambridge University Press.
Mackworth, N. H. & Morandi, A. J. (1967). The gaze selects informative details within pictures. Perception and Psychophysics, 2, 547–552.
Mast, F. W. & Kosslyn, S. M. (2002). Eye movements during visual mental imagery. Trends in Cognitive Sciences, 6(7).
Mathesius, V. (1939). O takzvaném aktuálním členění větném. Slovo a slovesnost, 5, 171–174. Also as: On the information-bearing structure of the sentence. In Kuno, S. (Ed.) (1975). Harvard: Harvard University, 467–480.
Meulen, F. F. van der, Meyer, A. S. & Levelt, W. J. M. (2001). Eye movements during the production of nouns and pronouns. Memory & Cognition, 29, 512–521.
Meyer, A. S. & Dobel, C. (2003). Application of eye tracking in speech production research. In Hyönä, J. & Deubel, H. (Eds.), The Mind’s Eye: Cognitive and Applied Aspects of Eye Movement Research. Elsevier Science, 253–272.
Mishkin, M., Ungerleider, L. G. & Macko, K. A. (1983). Object vision and spatial vision: Two cortical pathways. Trends in Neurosciences, 6, 414–417.
Mozer, M. C. & Sitton, M. (1998). Computational modelling of spatial attention. In Pashler, H. (Ed.), Attention. UK: Psychology Press, 341–393.
Naughton, K. (1996). Spontaneous gesture and sign: A study of ASL signs co-occurring with speech. In Messing, L. (Ed.), Proceedings of the Workshop on the Integration of Gesture in Language and Speech. University of Delaware, 125–134.
Nordqvist, S. (1990). Kackel i trädgårdslandet. Opal.
Noton, D. & Stark, L. (1971a). Eye movements and visual perception. Scientific American, 224, 34–43.
Noton, D. & Stark, L. (1971b). Scanpaths in saccadic eye movements while viewing and recognizing patterns. Vision Research, 11, 9–29.
Noton, D. & Stark, L. W. (1971c). Scanpaths in eye movements during perception. Science, 171, 308–311.
Nuyts, J. (1996). Consciousness in language. Pragmatics & Cognition, 4(1). Special issue on language and consciousness, 153–180.
Olshausen, B. A. & Koch, C. (1995). Selective Visual Attention. In Arbib, M. A. (Ed.), The Handbook of Brain Theory and Neural Networks. Cambridge, MA: MIT Press, 837–840.
Oviatt, S. L. (1999). Ten Myths of Multimodal Interaction. Communications of the ACM, 42(11), 74–81.
Paivio, A. (1971). Imagery and Verbal Processes. Hillsdale, NJ: Erlbaum.
Paivio, A. (1986). Mental Representations: A dual coding approach. New York: Oxford University Press.
Paivio, A. (1991a). Dual Coding Theory: Retrospect and current status. Canadian Journal of Psychology, 45(3), 255–287.
Paivio, A. (1991b). Images in Mind. New York/London: Harvester Wheatsheaf.
Pollatsek, A. & Rayner, K. (1990). Eye movements, the eye-hand span, and the perceptual span in sight-reading of music. Current Directions in Psychological Science, 49–53.
Posner, M. I. (1980). Orienting of attention. Quarterly Journal of Experimental Psychology, 32, 3–25.
Prince, E. (1981). Toward a Taxonomy of Given-New Information. In Cole, P. (Ed.), Radical Pragmatics. New York: Academic Press.
Pylyshyn, Z. W. (2001). Visual indexes, preconceptual objects, and situated vision. Cognition, 80(1/2), 127–158.
Quasthoff, U. (1979). Verzögerungsphänomene, Verknüpfungs- und Gliederungssignale in Alltagsargumentationen und Alltagserzählungen. In Weydt, H. (Ed.), Die Partikel der deutschen Sprache. Berlin/New York: Walter de Gruyter, 39–57.
Qvarfordt, P. (2004). Eyes on Multimodal Interaction. Linköping Studies in Science and Technology No. 893. Department of Computer and Information Science, Linköpings universitet.
Rayner, K. (Ed.) (1992). Eye Movements and Visual Cognition: Scene perception and reading. New York: Springer-Verlag.
Redeker, G. (1990). Ideational and pragmatic markers of discourse structure. Journal of Pragmatics, 14, 367–381.
Redeker, G. (1991). Linguistic markers of discourse structure. Review article. Linguistics, 29, 139–172.
Redeker, G. (2000). Coherence and structure in text and discourse. In Black, W. & Bunt, H. (Eds.), Abduction, Belief and Context in Dialogue: Studies in Computational Pragmatics (233–263). Amsterdam: Benjamins.
Redeker, G. (2006). Discourse markers as attentional cues at discourse transitions. In Fischer, K. (Ed.), Approaches to Discourse Particles. Studies in Pragmatics, 1 (339–358). Amsterdam: Elsevier.
Rieser, H. (1994). The role of focus in task oriented dialogue. In Bosch, P. & van der Sandt, R. (Eds.), Focus and Natural Language Processing, Vol. 3. Heidelberg: IBM Institute of Logic and Linguistics.
Sacks, H. (1968–1972/1992). Lectures on Conversation, Vol. II. Edited by Gail Jefferson. Oxford: Blackwell.
Schank, R. C. & Abelson, R. P. (1977). Scripts, Plans, Goals, and Understanding. Hillsdale, NJ: Erlbaum.
Scheiter, K., Wiebe, E. & Holsanova, J. (submitted). Theoretical and instructional aspects of learning with visualizations. In Zheng, R. (Ed.), Cognitive Effects of Multimedia Learning. IGI Global, USA.
Schiffrin, D. (1987). Discourse Markers. Cambridge: Cambridge University Press.
Schill, K. (2005). A Model of Attention and Recognition by Information Maximization. In Neurobiology of Attention, Section IV: Systems. Elsevier, 671–676.
Schill, K., Umkehrer, E., Beinlich, S., Krieger, G. & Zetzche, C. (2001). Scene analysis with saccadic eye movements: Top-down and bottom-up modelling. Journal of Electronic Imaging, 10, 152–160.
Schilperoord, J. & Sanders, T. (1997). Pauses, Cognitive Rhythms and Discourse Structure: An Empirical Study of Discourse Production. In Liebert, W.-A., Redeker, G. & Waugh, L. (Eds.), Discourse and Perspective in Cognitive Linguistics. Amsterdam/Philadelphia: Benjamins, 247–267.
Selting, M. (1998). TCUs and TRPs: The Construction of Units in Conversational Talk. In Interaction and Linguistic Structures. InLiST No. 4, Potsdam.
Shepard, R. N. & Metzler, J. (1971). Mental Rotation of Three-Dimensional Objects. Science, 171(3972), 701–703.
Sloboda, J. A. (1974). The eye-hand span – an approach to the study of sight reading. Psychology of Music, 2(2), 4–10.
Slowiaczek, M. L. (1983). What does the mind do while the eyes are gazing? In Rayner, K. (Ed.), Eye Movements in Reading: Perceptual and Language Processes. New York: Academic Press, 345–355.
Sneider, W. X. & Deubel, H. (1995). Visual attention and saccadic eye movements: Evidence for obligatory and selective spatial coupling. In Findlay et al. (Eds.), Eye Movement Research. Elsevier Science B.V.
Solso, R. L. (1994). Cognition and the Visual Arts. A Bradford Book. Cambridge, MA/London: The MIT Press.
Spivey, M., Richardson, D. & Fitneva, S. (2004). Thinking outside the brain: Spatial indices to visual and linguistic information. In Henderson, J. M. & Ferreira, F. (Eds.), The Interface of Language, Vision and Action: Eye Movements and the Visual World. New York: Psychology Press.
Spivey, M. & Geng, J. (2001). Oculomotor mechanisms activated by imagery and memory: Eye movements to absent objects. Psychological Research, 65, 235–241.
Spivey, M. J., Tyler, M., Richardson, D. C. & Young, E. (2000). Eye movements during comprehension of spoken scene descriptions. In Proceedings of the Twenty-second Annual Meeting of the Cognitive Science Society, 487–492. Mahwah, NJ: Erlbaum.
Stenström, A.-B. (1989). Discourse Signals: Towards a Model of Analysis. In Weydt, H. (Ed.), Sprechen mit Partikeln. Berlin/New York: Walter de Gruyter, 561–574.
Strohner, H. (1996). Resolving Ambiguous Descriptions through Visual Information. In Representations and Processes between Vision and NL, Proceedings of the 12th European Conference on Artificial Intelligence, Budapest, Hungary 1996.
Strömqvist, S. (1996). Discourse Flow and Linguistic Information Structuring: Explorations in Speech and Writing. Gothenburg Papers in Theoretical Linguistics 78.
Strömqvist, S. (1998). Lite om språk, kommunikation, och tänkande. In Bäckman, T., Mortensen, O., Raanes, E. & Östli, E. (Eds.), Kommunikation med dövblindblivne. Dronninglund: Förlaget Nordpress, 13–20.
Strömqvist, S. (2000). A Note on Pauses in Speech and Writing. In Aparici, M. (Ed.), Developing Literacy across Genres, Modalities and Languages, Vol. 3. Universitat de Barcelona, 211–224.
Suwa, M., Tversky, B., Gero, J. & Purcell, T. (2001). Seeing into sketches: Regrouping parts encourages new interpretations. In Gero, J. S., Tversky, B. & Purcell, T. (Eds.), Visual and Spatial Reasoning in Design II. Key Centre of Design Computing and Cognition, University of Sydney, Australia, 207–219.
Tadahiko, Fukuda & Nagano, Tomohisa (2004). Visual search strategies of soccer players in one-to-one defensive situation on the field. Perceptual and Motor Skills, 99, 968–974.
Tanenhaus, M. K., Magnuson, J. S., Dahan, D. & Chambers, C. (2000). Eye movements and lexical access in spoken-language comprehension: Linking hypothesis between fixations and linguistic processing. Journal of Psycholinguistic Research, 29(6), 557–580.
Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M. & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268(5217), 1632–1634.
Taylor, H. A. & Tversky, B. (1992). Description and depiction of environments. Memory and Cognition, 20, 483–496.
Theeuwes, J. (1993). Visual selective attention: A theoretical analysis. Acta Psychologica, 83, 93–154.
Theeuwes, J., Kramer, A. F., Hahn, S. & Irwin, D. (1998). Our eyes do not always go where we want them to go: Capture of the eyes by new objects. Psychological Science, 9, 379–385.
Thomas, N. J. T. (1999). Are theories of imagery theories of imagination? An active perception approach to conscious mental content. Cognitive Science, 23(2), 207–245.
Tomlin, R. S. (1995). Focal attention, voice, and word order: An experimental, cross-linguistic study. In Downing, P. & Noonan, M. (Eds.), Word Order in Discourse. Amsterdam: John Benjamins, 517–554.
Tomlin, R. S. (1997). Mapping Conceptual Representations into Linguistic Representations: The Role of Attention in Grammar. In Nuyts, J. & Pederson, E. (Eds.), With Language in Mind. Cambridge: CUP, 162–189.
Tversky, B. (1999). What does drawing reveal about thinking? In Gero, J. S. & Tversky, B. (Eds.), Visual and Spatial Reasoning in Design. Sydney, Australia: Key Centre of Design Computing and Cognition, 93–101.
Tversky, B., Franklin, N., Taylor, H. A. & Bryant, D. J. (1994). Spatial Mental Models from Descriptions. Journal of the American Society for Information Science, 45(9), 656–668.
Ullman, S. (1996). High-level Vision: Object recognition and visual cognition. Cambridge, MA: MIT Press.
Ungerleider, L. G. & Mishkin, M. (1982). Two cortical visual systems. In Ingle, D. J., Goodale, M. A. & Mansfield, R. W. J. (Eds.), Analysis of Visual Behavior. MIT Press.
Underwood, G. & Everatt, J. (1992). The Role of Eye Movements in Reading: Some limitations of the eye-mind assumption. In Chekaluk, E. & Llewellyn, K. R. (Eds.), The Role of Eye Movements in Perceptual Processes (Advances in Psychology, Vol. 88). Amsterdam: Elsevier Science Publishers B.V., 111–169.
van Donzel, M. E. (1997). Perception of discourse boundaries and prominence in spontaneous Dutch speech. Working Papers, 46, 5–23. Lund University, Department of Linguistics.
van Donzel, M. E. (1999). Prosodic Aspects of Information Structure in Discourse. LOT, Netherlands Graduate School of Linguistics. The Hague: Holland Academic Graphics.
Velichkovsky, B. M. (1995). Communicating attention: Gaze-position transfer in co-operative problem solving. Pragmatics and Cognition, 3(2), 199–222.
Velichkovsky, B., Pomplun, M. & Rieser, J. (1996). Attention and Communication: Eye-Movement-Based Research Paradigms. In Zangemeister, W. H., Stiehl, H. S. & Freksa, C. (Eds.), Visual Attention and Cognition (125–154). Amsterdam: Elsevier Science.
Viviani, P. (1990). Eye movements in visual search: Cognitive, perceptual, and motor control aspects. In Kowler, E. (Ed.), Eye Movements and Their Role in Visual and Cognitive Processes (Reviews of Oculomotor Research, Vol. 4). Amsterdam: Elsevier Science B.V., 353–383.
Yarbus, A. L. (1967). Eye Movements and Vision (1st Russian edition, 1965). New York: Plenum Press.
Young, L. J. (1971). A study of the eye-movements and eye-hand temporal relationships of successful and unsuccessful piano sight-readers while piano sight-reading. Doctoral dissertation, Indiana University, RSD72–1341.
Zwaan, R. A. & Radvansky, G. A. (1998). Situation models in language comprehension and memory. Psychological Bulletin, 123, 162–185.
Author index
A
Abelson 9, 189
Aijmer 43, 179
Allport 83, 179
Allwood 179
Andersson, B. 96, 177, 179
Andersson, R. 138
Arnheim 142, 179
B
Baddeley 68, 179
Ballard 87, 145, 166, 179, 182
Bangerter 45, 179
Barkowsky 158, 181
Barsalou 162, 179, 185
Barthelson 183
Bartlett 68, 179
Beattie 9, 179
Beinlich 189
Berlyne 179
Berman 9, 179
Berséus 99, 179
Bertel 158, 177, 180–181
Blumenthal 146, 180
Bock 7, 82, 102, 116, 146, 180, 182
Bouwhuis 185
Braarud 82, 180
Brandt 158, 180
Bruce 7, 180, 185
Bryant 191
Buswell 80, 86, 108, 180
Butsch 99, 180
Butterworth 9, 180–181
C
Carpenter 80, 185
Carstensen 58, 186
Casad 145, 180
Chafe 4, 7–12, 20–21, 42, 49, 81, 84–85, 100–101, 115, 127, 151, 179–180
Chambers 190
Clark, E. 3, 180
Clark, H. H. 4, 14, 45, 153, 179, 181–182
Cohen 159, 181
Cooper 185
Couper 82, 181
Crystal 7, 181
D
Dahan 190
Dahl 179
Davidson 180
De Graef 181
Demarais 159, 181
Deubel 80, 183–184, 187, 189
Diderichsen 82, 181
Dobel 81–82, 102, 187
Drøivoldsmo 180
Duchowski 90, 181
E
Eberhard 190
Engel 158, 181
Ericsson 81, 181
Everatt 80, 191
F
Ferreira 80, 86, 180, 182–183, 189
Filipson 185
Findlay 83, 132, 181
Finke 40, 157–158, 162, 181
Firbas 4, 181
Fitneva 189
Franklin 191
Frid 185
Fukuda 186, 190
G
Gärdenfors 81, 181
Garett 3, 181
Gedenryd 50, 181
Geng 159, 189
Gentner 152, 181
Gernsbacher 50, 180, 182
Gero 190–191
Givón 4, 50, 180, 182, 187
Goldman-Eisler 3, 40, 182
Goolsby 99, 182
Gordon 99, 102, 185
Grabowski 68, 185
Griffin 7, 81–82, 90, 102, 116, 143, 182
Grosz 43, 49–50, 182
Grow 67, 182
Gülich 43, 182
Gullberg 52, 182
H
Hahn 190
Hallbert 82, 182
Halliday 4, 7, 21, 182
Hannus 167, 169, 182
Hansson 185
Hauland 82, 182
Haviland 4, 182
Hayhoe 87, 179, 182–183
Hebb 166, 183
Hedberg 184
Hegarty 157–158, 167, 183, 186
Henderson 80, 86, 90–91, 180, 182–183, 189
Herskovits 58, 183
Hoffman 90, 183
Hollingworth 86, 90–91, 183
Hollnagel 180
Holmberg 183–184
Holmqvist 52, 127, 153–154, 156, 158, 167, 169, 179, 183–185
Holsanova 6, 7, 11–12, 15, 19, 21–22, 28, 36, 39, 45–46, 52, 59, 61, 64, 88, 92, 94, 96, 114, 142, 154, 156, 158, 161, 167–169, 179, 183–185, 189
Horne 15, 185
Huber 158, 185
Hyönä 167, 169, 182–183, 187
I
Inhoff 99, 102, 185
Irwin 180, 190
J
Johansson, R. 36, 102, 158, 165, 168, 185
Johansson, V. 179
Johnson 152, 185–186
Johnson-Laird 152, 185
Jonassen 68, 185
Juola 80, 185
Just 80, 185
K
Kahneman 106, 185
Karlsson 179
Katoh 186
Kendon 102, 185
Kess 3, 185
Kintsch 169, 185
Kirst 158, 185
Kita 7, 102, 186
Kiyoshi 99, 186
Koch 83, 187–188
Korolija 16, 186
Koryčanská 184
Kosslyn 40–41, 69, 158, 162, 166, 186–187
Kowler 99, 186, 191
Kozhevnikov 68, 186
Kramer 190
Krieger 189
Krutetskii 68, 186
L
Labov 59, 81, 153, 186–187
Laeng 158, 166, 186
Lahtinen 97, 186
Lakoff 127, 152, 186
Lang 58, 186
Langacker 23, 127, 186
Lemke 20–21, 36, 187
Levelt 3, 7, 119, 146, 153, 180, 187
Lévy-Schoen 132, 187
Lieberman 68, 179
Linde 49, 50, 81, 153, 187
Linell 2–3, 12, 80, 146–148, 153, 177, 187
Loftus 87, 187
Lucy 79, 187
Lundqvist 183
M
Macko 187
Mackworth 86–87, 187
Magnuson 190
Mast 166, 187
Mathesius 4, 187
Mayer, R. E. 186
Metzler 158, 189
Meyer, A. S. 187
Mishkin 69, 187, 191
Morandi 86, 187
Mozer 83, 187
N
Nagano 190
Naughton 102, 188
Nilsson 184
Nordqvist 5, 65, 85, 164, 188
Noton 141, 145, 188
Nuyts 127, 188, 190
O
Olshausen 83, 188
Oviatt 102, 177, 188
Özyürek 186
P
Paivio 67–68, 188
Pollatsek 99, 188
Pomplun 103, 191
Pook 179
Posner 80, 83, 179, 188
Prince 4, 188
Purcell 190
Pylyshyn 166, 188
Q
Quasthoff 43, 188
Qvarfordt 96, 188
R
Radvansky 169, 191
Rahm 184
Rao 179
Rayner 90, 99, 181, 183, 188
Redeker 43–44, 51, 183, 188–189
Richardson 159, 189–190
Rieser 103, 189, 191
S
Sacks 59, 189
Sanders 9, 15, 189
Schank 9, 189
Scheiter 167, 189
Schiffrin 43, 189
Schill 97, 189
Schilperoord 9, 15, 189
Sedivy 190
Selting 7, 189
Shepard 157–158, 181, 189
Sidner 43, 49–50, 182
Simmons 58, 186
Simon 81, 181
Sitton 83, 187
Slobin 9, 179
Sloboda 99, 189
Slowiaczek 80, 189
Sneider 80, 189
Solso 80, 90–91, 125, 142, 189
Spivey 159, 166, 189–190
Spivey-Knowlton 190
Stark 141, 145, 158, 180, 188
Stenström 43, 190
Stevens 152, 181
Strohner 82, 190
Strömqvist 3, 9, 15, 40, 54, 80, 179, 190
Suwa 154, 190
T
Tadahiko 99, 186, 190
Tanenhaus 103, 190
Tärning 183
Taylor 153, 180, 190–191
Teodorescu 158, 166, 186
Theeuwes 80, 83, 190
Thomas 166, 181, 190
Tomlin 82, 180–181, 190
Tufvesson 179
Tversky 48, 58, 153, 190–191
Tyler 159, 190
U
Ullman 191
Umkehrer 189
Underwood 80, 183, 191
Ungerleider 69, 187, 191
V
van der Meulen 119, 187
van Dijk 169, 185
van Donzel 15, 191
Velichkovsky 103, 191
Viviani 145, 191
W
Waletzky 59, 186
Walker 80, 83, 181
Waller 157, 183
Warner 185
Wengelin 179
Wiebe 189
Y
Yarbus 80, 87, 97, 136, 145, 191
Young 99, 159, 190–191
Z
Zetzche 189
Zwaan 169, 191
Subject index
A
activated information 6
active consciousness 5–6, 92
active information 11
active processing 134, 136, 149, 175
alignment 122, 174
animacy principle 79, 131
anticipatory visual fixations 99
area of interest 107, 108
associative processes 148, 176
associative view 146
attention 6–8, 16, 19, 43, 46, 49, 50, 52, 54, 68, 79, 80, 82–87, 96, 100, 123, 127, 136, 154, 156, 166, 171–172, 177, 179, 180, 183–184, 187–191
attentional spotlight 84, 89, 94, 98, 100, 174
attitude(s) 20, 24, 37–38, 47, 81, 87, 94, 97, 117, 142–143, 145, 167, 171–172, 178
C
categorical proximity 132, 137
categorisation 14, 21, 24, 29, 31, 36, 94, 103, 108, 113, 116–119, 130, 144–145, 149, 175
categorisation difficulties 21, 29, 31, 36, 175
  process 21, 94, 130, 149
categorising activities 27, 37, 87
clause(s) 3, 5–7, 10, 15, 43, 59, 62, 123, 146
cluster(s) 19, 22–23, 41, 84, 88–89, 94, 98–104, 106–107, 113, 115–119, 122–123, 131–137, 141, 147, 149, 174, 176
  in the visual and verbal flow 98, 174
co-construct meaning 156
cognition 68, 80, 84, 94, 127, 158, 179, 181–183, 186, 188
cognitive factors 64–66, 87, 173
cognitive linguistics 127, 152
cognitive processes 1, 16, 80–82, 97, 115, 121–123, 126, 145, 157, 177, 185
cognitive rhythm 4, 9, 16, 171
cognitive science 66, 81, 182, 186
cognitive semantics 127, 129, 132, 152
coherence 37, 39, 52–55, 64, 151, 186
collaborative user interface 96
comparable unit 99, 100, 106, 115, 175
comparative gazes 176
complex picture 19, 28, 65, 85–88, 98, 120–121, 139, 144, 148, 158, 164–165, 174–175, 184
  scene 86, 121, 143, 174, 176
  units of thought 9, 89
  visual scene 175
compositional principle 131
conceptualisation 88, 120, 122, 147, 153, 174, 176
conceptualise 145, 151, 154
configuration(s) 92–93, 100, 102–104, 108, 113–118, 121, 125–126, 141, 142, 165, 174, 180
  within focus 101
  within superfocus 113
conscious focus of attention 6, 7, 82, 89, 151, 156
consciousness 4, 9, 19–20, 81, 84–85, 123, 153, 156, 179, 180, 188
consciousness-based approach 4
contextual aspects 65
conversation 7, 28, 36, 46, 48, 49–51, 154, 157, 168, 173, 177, 186, 189
counting-like gazes 176
covert 80–81, 84, 171
covert attention 80
covert mental processes 171
D
delay 32, 92, 101, 107–108, 114–118, 122
delay configuration 108, 118
describe 19, 20–21, 25, 27–29, 31, 35, 39–41, 46, 51, 55–57, 60, 65–66, 68, 70, 77, 79, 85, 87, 132, 145–146, 153, 158, 164, 171
describer 42, 109, 113, 122, 127, 156, 174
description coherence 39, 46, 64
description styles 32, 54, 62–65, 67, 69, 173
descriptive discourse 1, 9, 15–16, 38, 53, 55, 79, 81, 88–89, 98, 120–121, 123, 147, 151–154, 157, 167–168, 171, 173, 176–177, 184
design 47, 50, 81, 97, 154, 168, 177, 180–181, 191
digression 39, 40, 42, 44, 45, 172
discourse analysis 171
discourse boundaries 12, 14, 15, 17, 171, 191
discourse coherence 50, 53, 122, 172
  comprehension 151–152, 169, 185
  hierarchy 10, 13, 100
  level(s) 13, 15, 88, 102, 115, 120–123, 143, 145, 147, 174, 176
  markers 5, 8, 11–12, 14, 16, 42–43, 46, 51–53, 59, 61–62, 67, 75, 156
  operators 43
  production 2, 9, 16, 103, 151–152, 171, 176
  segmentation 1, 9, 11, 15–16, 39, 171
  topics 9, 113, 121, 144
discourse-mediated mental representations 49, 149, 176
discourse-mediated representations 172
distribution of foci 28, 33, 35, 38, 171
drawing 38–39, 46, 49–50, 52, 54, 97, 151, 153–154, 156–157, 172–173, 191
dual code theory 68
dynamic 4, 30, 32, 55–67, 69, 70–77, 79, 84, 94, 97, 127–128, 141, 153, 157, 168, 173, 178
  description style 56, 59, 61–64, 69, 70, 73, 76–77, 173
  motion verbs 62, 64
  verbs 58, 62, 70, 72, 74
dysfluencies 3, 16, 21, 80, 113, 144
E
embedded segments 44
errors 3, 187
evaluation 9, 10, 50, 137, 139, 149, 175, 177
evaluative foci 24, 27, 33–34, 36, 52, 54, 62, 69, 109, 116, 143
evaluative tool 94
events 4, 16, 20–21, 27, 30, 37, 41, 57, 59, 62, 66–67, 77, 86, 117–118, 120–121, 142–143, 152, 158, 171, 173, 179
execution 3, 146
existential constructions 58, 70, 74, 76
experiential factors 64–66, 87, 173
experiment 82, 84, 161
expert foci 24, 27, 33–38, 62, 171–172
external memory aid 50, 54, 153, 173
  visual representations 49
  visualisations 157, 167–168, 177
eye fixation patterns 143
eye gaze 96, 161, 174
eye movement(s) 70, 80–84, 87, 90, 97, 99, 106, 116, 125–126, 129, 134–135, 137, 143, 145, 150, 157–159, 161–166, 168, 176–177, 180–183, 185, 188–189
  function of 143, 176
eye movement patterns 86, 137–138, 141, 144–145, 147, 149, 157–158, 163, 168, 176–177
  protocol 167, 175
eye tracker 28, 85
eye tracking 29–33, 35, 37–38, 40, 79, 85, 87, 95, 98, 100, 102, 116, 120, 123, 143, 151, 158, 162, 167, 169, 179, 187
eye voice latencies 161
eye-gaze patterns 96
eye-mind assumption 191
eye-voice latency 102, 122, 174
eye-voice span 102
F
feedback 2, 9, 15, 39, 50–51, 54
figure-ground 127
fixation(s) 89–91, 94, 99, 101, 103, 106, 135, 141, 144–145, 161–162, 175, 181, 190
  duration 88, 90
  pattern 89, 123, 125, 135, 141, 143, 150, 173
flow of speech 1, 171
flow of thought 1, 80, 171
focus 1, 3, 5–7, 11–13, 15–16, 19, 21–23, 26–28, 30, 32–33, 36, 37, 39, 41, 44–46, 48–50, 52–53, 55–57, 61–62, 65, 67–69, 77, 79, 83–85, 87, 92, 94, 100–104, 106–109, 114–116, 118, 121–123, 125–127, 129, 144–147, 151, 153–154, 156–157, 164–165, 172–176, 183, 189
  of active thought 3
  of attention 7, 49
  of thought 3
focusing 49, 50, 53–54, 59, 62–63, 79, 81, 87, 101, 103, 126–127
free description 63, 87, 121, 174–175
functional clusters 175
functional distribution of configuration types 100
  of multimodal patterns 116
functions of eye movements 143
G
gaze 50, 82, 84, 90, 96, 122, 144, 174, 187
  behaviour 50
  pattern 96, 161, 174
geometric type 68
gestures 3, 15, 46, 50, 52, 97, 151
global correspondence 161–163, 165
global thematic units 16
global transition 9, 16, 171
H
harmonic type 68
hesitation 2–3, 5, 7–8, 11, 21, 39–40, 113
hierarchy of discourse production 9
holistic view 146
hypotactic segments 53, 172
I
iconic visualiser(s) 68–69, 77, 173
idea unit 5, 7, 9, 83, 88, 100
ideational pointing gazes 176
image schemata 152, 157
imagined scene 166
imagery system 68
impressions 16, 19, 29, 35, 81, 87, 114, 121, 145, 148
inactive foci 7
individual differences 66–67, 69, 77, 125, 157, 167, 169, 173, 185
information flow 4, 39, 172
information package(s) 5, 9, 15
information structure 191
information unit(s) 5, 9, 16
integration patterns 115, 177
interactive foci 25, 27, 29, 34, 36, 51
interactive setting 19, 26, 28–29, 33–35, 38–40, 56, 64, 69–70, 72–74, 77, 172, 185
interdisciplinary 183
interface design 97
interpretation 36, 41, 52, 68–69, 82, 86, 103, 118–120, 137, 144–145, 149, 165, 175
interpreting activities 27, 32, 37, 87, 171
intonation unit(s) 5, 7, 10–11, 49
introspection 29, 35, 131, 143
introspective foci 26–27, 34, 62
J
joint attention 153, 156, 172
joint focus of attention 50, 54, 154, 157, 168, 177
L
language 2–4, 7, 9, 12, 14–15, 19, 46, 50, 54, 66–68, 80–82, 84–85, 87, 97, 102, 104, 125, 137, 143–144, 146, 150–152, 169, 176, 179, 180–184, 187–191
  and thought 4, 187
  and vision 85, 176, 184
  comprehension 12, 81, 152, 169, 190–191
  planning 125, 146, 150
  production 3, 12, 80, 87, 104, 125, 143–144, 150, 182
larger units of discourse 9, 15, 49, 175
latency 82, 92, 101–102, 107, 115, 122–123, 147, 161, 174–176
level(s) of abstraction 22, 27, 40, 42, 49, 54, 81, 122, 130, 132, 135, 137, 172, 174
  of discourse 1, 3, 7, 12, 17, 80, 87, 121, 171
  of specificity 125, 130, 148, 175
lexical markers 11, 15, 17, 40, 52, 156, 171
linear segments 44
linguistics 66, 183
list of items 31, 62, 92, 102, 107, 115–118, 123
listeners 1, 12–17, 28, 36, 40, 43, 46, 49–50, 52–54, 138–139, 141–142, 149, 151–154, 156, 171–172, 176
listening and retelling 159, 163
local correspondence 161–165
local planning 147
local transition 16, 171
localising expressions 46, 53, 59, 61–62, 77
localising foci 21, 23, 26–27, 32, 36–38, 62, 69, 103, 109, 116, 118, 123, 143, 171–172, 175
loudness and voice quality 42, 59, 62, 77
M
macroplanning 3, 8, 146
meaning making 157, 172
meaningful sequences 143, 176
meaningful units 97, 134, 177
means of bridging the foci 42, 52
memory aid 50, 54, 153, 173
mental activities 96, 115, 118–119, 174–175
mental activity 100, 106, 115, 123
mental distance 40–42, 53, 172
mental groupings 69, 125, 131, 136–137, 142, 149, 176
mental imagery 28, 40, 69, 79, 102, 150–151, 157–158, 166–168, 173, 177, 181, 185, 187
mental models 151–152, 158, 168–169
mental processes 81, 87, 94, 97, 142, 175
mental representations 172
mental zooming 136–137, 143, 149, 176
mentally imagined objects 172–173
metacomments 26, 29, 34, 113
metaphors 129, 149, 152–153, 175
meta-textual foci 171–172
microplanning 3, 146
monitoring 3, 26–27, 80, 97, 122, 147–148, 174, 176
motives 81
multimodal 6, 77, 79, 84, 86, 89, 92, 94, 97–98, 100, 102, 119, 125, 148, 173, 175, 177–178
  configurations 100
  integration patterns 119, 175
  method 77, 97, 125, 173, 177–178
  score sheet 79, 86, 97, 148
  scoring technique xii
  sequential method 6, 79, 84, 92, 94, 98
multimodal system 102
multimodal time-coded score sheets 89
multiple external representations 167
multiple representations 49, 167–169
mutual visual access 54, 173
N
narrative 28, 30, 32–35, 37–38, 55, 59, 65–66, 69–70, 73–77, 81, 171, 173, 179–180
narrative priming 28, 30, 33, 38, 55, 66, 70, 73–74, 76–77, 173
narrative schema 173
narrative setting 30, 34–35, 74, 171
non-verbal 3, 15–16, 39, 46, 50, 52–54, 67–68, 90, 92, 97, 102, 153
  actions 50, 52, 54
non-verbally 50
n-to-1 mappings 112, 117
n-to-n mappings 113, 117, 119
O
object-activity 129, 148, 175
object-attribute 129, 148, 175
object-location 128–129, 148, 175
object-object 148, 175
object-path 148, 175
off-line 19, 21, 26–29, 31–33, 35, 37–39, 43, 63, 68, 70, 72–76
off-line picture description 19, 21, 27, 37, 39, 43, 70
on-line 28–33, 35, 37–38, 40, 70, 73–74, 85, 96, 101, 103–104, 108–109, 113–114, 116, 125, 144, 148, 177
on-line picture description 40, 70, 73, 114, 116, 125
on-line writing 96
organisational function 20–21, 25, 27, 29, 33–37, 42, 87, 116, 121
organising function 37, 171
orientational function 24, 42, 53, 116
overt 80, 145
overt attention 80
P
parallel component model 51
paratactic segments 53, 172
paratactic transition 44
pauses 1–3, 8, 11, 15–16, 21, 39–40, 42, 53, 92, 96, 104–105, 108–109, 116–118, 122, 146, 172
pausing 9
perceive 12, 14, 125, 131
perception 1, 7–8, 12, 14, 16, 65, 80, 83–84, 86–87, 89, 94, 96–97, 108, 123, 125, 131, 134, 137, 142, 149, 151–152, 158, 178, 180–181, 184, 186, 188, 190
  of discourse boundaries 12, 191
perfect match 101, 106, 114, 126
performative distance from the descriptive discourse 42
phases 3, 9, 57, 59, 77, 90, 97, 102, 146, 159, 161, 164, 166, 173
phrase 6–7, 15, 120, 123, 143
phrasing unit(s) 5, 7
picture description 2, 6, 10, 12, 17, 19, 20, 22, 28, 30–40, 42, 46, 53, 55–57, 59, 64–67, 69, 70–73, 76–77, 79–82, 85–87, 89–90, 92, 94, 96, 98–102, 104, 109, 119–122, 125–128, 131, 133–134, 137, 139, 142–143, 145–146, 149, 150–151, 157, 164, 168, 173–174, 176–177, 184
picture viewing 6, 12, 32, 65, 70, 77, 79, 85–86, 89–90, 92, 94, 98–99, 102, 121, 125–126, 128, 131, 133–134, 137, 139, 142–143, 146, 149–150, 168, 173, 176–177, 183–184, 187
planning 1, 3, 16, 26–27, 37, 43, 80, 82, 97, 99, 103, 115, 120–123, 138, 141, 144, 146–147, 156, 158, 174, 176, 180, 187
pointing 50, 52, 54, 97, 126–127, 154, 156–157, 173
prediction 156
preparatory glances 122, 174
presentational function 21, 23, 26, 33, 36–37, 41, 116
priming study 125, 138, 149, 176
problem-solving 68, 81, 168, 177
prosodic criteria 11
proximity principle 131
psycholinguistic studies 81, 87, 102, 120–121, 123, 143
psycholinguistics 9, 180
psychology 66, 81, 131, 180, 186–187
R
re-categorisational gazes 176
re-conceptualisation 149, 175
re-examine 122, 174
referent storage 54, 173
referential availability 44, 49, 53
referents 8, 20–21, 27, 29–30, 36–37, 41, 44, 50, 107, 121, 142–143, 154, 171
re-fixate 175–176
re-fixation 174
refocus 45–46, 53, 61, 77, 87, 173
region informativeness 87, 91
regulatory function 25
relevance principle 79
remembering 66–69, 77, 173
reorientation 42
retrospective 167
S
saccades 90, 135, 149, 161
saliency principle 79, 131
scanpaths 139, 140–142, 145, 186
scene 19–21, 24–25, 32, 36, 42, 59, 65–66, 70, 77, 79, 86–88, 90–91, 94, 97, 101–102, 106, 114, 116, 118, 120–121, 125–128, 131–139, 141–142, 144–145, 149, 153, 158–159, 163, 165–166, 172–173, 176–177, 188, 190
scene perception 86, 90, 97, 116, 125, 142, 188
scene semantics 141, 176
segmentation 1, 5, 9, 10–12, 14–16, 97
segmentation rules 1, 5, 11
semantic 11, 14–17, 51, 79, 98–100, 131, 137, 142, 148, 150, 171, 174–175
  correspondence 79, 98–100, 148, 150, 174–175
  criteria 11, 14–17, 171
  groupings 131, 137, 142
semantic, rhetorical and sequential aspects of discourse 51
semiactive foci 7
semiactive information 49, 54, 172
sentences 1, 5, 14–15, 102
sequential processual 79
sequential steps 46, 53
series of delays 107
series of n-to-1 mappings 111
series of n-to-n mappings 113, 117
series of perfect matches 106
series of triangles 109–110
similarity principle 131
simultaneous 28, 30–31, 34, 38, 81–82, 86, 89, 94, 97–101, 103, 115, 122, 139, 142–143, 148, 161, 171–172, 174, 176
simultaneous description with eye tracking 34
simultaneous verbal description 28, 94, 97
situation awareness 49, 54, 97
spatial and temporal correspondence 161
spatial expressions 23, 58, 61–63, 70, 72, 74–76
spatial groupings 125, 131, 137, 142
spatial perception 61–62
spatial priming 37–38, 66, 69–73, 75–77
spatial proximity 125, 131–132, 134–137, 148–149, 167, 176
spatial relations 23, 27, 30, 37, 52, 56–58, 64, 67–69, 76, 143, 157–159, 168, 173, 177
spatial visualiser(s) 68
speakers 5, 14–16, 21–25, 27, 35, 37, 39–42, 45–46, 49, 52–55, 60–61, 73, 75, 77, 82, 102, 137, 139, 142, 146, 148, 151–153, 157, 167–168, 171–172, 176–177
specification 130, 149, 175
speech 1–4, 6, 8–11, 15–16, 21, 26–27, 38–40, 42–43, 45, 51–53, 75, 80–82, 84, 92, 94, 99, 102–103, 106, 116, 121–123, 137–138, 143, 146–148, 172, 174–176, 179–182, 185–188, 191
speech and thought 9
speech unit 4, 6, 42
spoken discourse 1–5, 12, 16, 127, 154, 157, 173, 183
spoken language 6–7, 12, 16, 43, 67, 79–84, 86, 88, 94, 98, 115, 126, 137, 148, 167, 181, 190
spontaneous conversation 16, 28, 38, 151, 153
spontaneous description and drawing 46
spontaneous descriptive discourse 36, 154
spontaneous drawing 48, 157
spotlight 79, 83–84, 94, 130, 154
states 4, 11, 20–21, 27, 30, 37, 41, 51, 86, 97, 117–118, 121, 131, 142–143, 145, 171
static 29, 32, 55–59, 62–67, 69–73, 76–77, 81, 90, 92, 173, 183
  description style 55–57, 59, 63–64, 69–72, 76, 173
steps in picture description 53
storage of referents 50, 153
structure 1, 5, 14–15, 17, 19–20, 28, 30, 37–39, 43, 46, 51–53, 55, 94, 96–97, 109, 121, 137, 145, 161, 172, 188
structure of spoken picture descriptions 17, 19, 38
substantive foci 21–23, 27, 30–31, 33, 36–38, 44, 62, 102, 104, 113, 116, 123, 171, 175
substantive foci with categorisation difficulties 21, 30–31, 33, 38, 113, 116, 123
substantive list of items 22, 146
summarising foci 22, 24, 27, 31, 38, 41, 63, 114, 117, 122–123, 146, 171
summarizing gazes 176
superfocus 1, 8, 12–14, 22, 27, 40–41, 53, 89, 92, 100, 102–104, 106–107, 109, 113–115, 119, 121–123, 133, 144, 147–148, 156, 164, 171, 174–176
support for visualisation 50, 153
symmetry principle 131
T
task-dependent cluster 136
taxonomy 4, 28, 30, 37, 39, 171
taxonomy of foci 30, 37, 39, 171
taxonomic proximity 131
temporal correspondence 85
temporal 61–63, 70, 72–77, 99, 121, 123, 125, 157, 173–174, 186, 191
  dynamics 75, 76, 77, 173
  expressions 62–63, 70, 72–74, 76–77, 173
  perception 61–62
  relations 77, 99, 121, 123, 125, 157, 173–174, 186, 191
  simultaneity 174
thematic distance 40–41
  from surrounding linguistic context 40
  from surrounding pictorial context 41
think-aloud protocols 81, 87
thought processes 81, 82
timeline 88, 100
topical episodes 16
trajector-landmark 127
transcribing 3
transcript 4, 6, 11–12, 19, 46, 82, 88–89, 91–92
transcription 1, 3–4, 12, 16, 88
  symbols 4
transition(s) 3, 25, 40–41, 44–45, 50, 53, 156, 172
  between foci 39
triangle configuration 102
two different styles 56
two windows to the mind 6, 79, 84, 94, 98, 174
types 21, 26–28, 33, 35–37, 46, 55, 69, 87, 89, 96, 99, 121, 145, 171, 174
  of foci 21, 26–28, 33, 35–37, 46, 55, 69, 87, 96, 99, 145, 171, 174
  of superfoci 89, 121
typology of foci 38
U
underlying cognitive processes 6, 82, 94, 98, 148
underlying mental processes 97
unit of comparison 100–101, 115, 125
units in spoken descriptive discourse 1
usability 81, 167, 168
utterance 3–4, 8, 43, 45, 52, 102, 120, 139, 146–148, 176, 185
utterances 1, 5, 9–10, 13–15, 81, 123, 138–139, 141, 144, 147, 149, 153, 175–176, 187
V
variations in picture description 55
verbal and visual clusters 89, 125, 148
verbal and visual protocols 84
verbal behaviour 88, 98, 100, 174, 177, 182
verbal foci 8, 11–13, 15, 21–23, 34, 39, 41, 51–52, 82, 84, 86–87, 89, 92, 94, 99, 103, 106–107, 109, 113, 115–117, 122, 126, 130, 145, 148, 152, 156, 176
verbal focus 1, 6–8, 16, 21, 27, 82, 84, 92, 99–106, 109, 114–115, 117–118, 122–123, 126, 146, 154, 171, 174–175
verbal focus of attention 84, 99
verbal protocols 81, 87, 167
verbal stream 92, 101, 116–117, 119, 126
verbal superfoci 12, 13, 15, 89, 115, 119, 148
verbal superfocus 1, 6, 8, 16, 21, 113, 132, 135, 171
verbal thinkers 64, 67, 77, 151, 173
verbalisation process 131, 148, 177
verbalisers 68–69
viewing dimensions 137, 142
viewing patterns 138, 145, 149, 176
vision 69, 79, 82, 84–85, 90–91, 94, 127, 176, 182–183, 186–189, 191
visual access 49
visual and cognitive processing 80
visual behaviour 2, 28, 88–89, 96, 145
visual displays 80, 120, 123
visual fixation cluster 89, 103, 107, 115, 122, 174
visual foci 84, 93, 100, 103, 115–116, 119, 122, 128, 135, 148, 175–176
visual focus 6–8, 83–84, 99, 101, 106, 115, 122–123, 154, 174–175
visual focus of attention 84, 99
visual inspection 118, 120, 131, 174
visual paths 91
visual representations 149, 168, 176
visual scene 46, 137, 145, 147, 177, 180, 186
visual stream 92, 101, 119, 126
visual thinkers 55, 64, 66–67, 151, 157, 182
visualisation(s) 40, 49, 151, 153, 157, 164–165, 167–168, 172–173, 177
visualiser 68
visualisers 68–69
visually present objects 92
vocaliser 68
W
windows to the mind 6, 79, 84, 94, 98, 174
Z
zooming in 101
zooming out 127, 130, 132–134
In the series Human Cognitive Processing the following titles have been published thus far or are scheduled for publication:
23 Holšánová, Jana: Discourse, Vision, and Cognition. 2008. xiii, 202 pp.
22 Berendt, Erich A. (ed.): Metaphors for Learning. Cross-cultural Perspectives. 2008. ix, 249 pp.
21 Amberber, Mengistu (ed.): The Language of Memory in a Crosslinguistic Perspective. 2007. xii, 284 pp.
20 Aurnague, Michel, Maya Hickmann and Laure Vieu (eds.): The Categorization of Spatial Entities in Language and Cognition. 2007. viii, 371 pp.
19 Benczes, Réka: Creative Compounding in English. The Semantics of Metaphorical and Metonymical Noun-Noun Combinations. 2006. xvi, 206 pp.
18 Gonzalez-Marquez, Monica, Irene Mittelberg, Seana Coulson and Michael J. Spivey (eds.): Methods in Cognitive Linguistics. 2007. xxviii, 452 pp.
17 Langlotz, Andreas: Idiomatic Creativity. A cognitive-linguistic model of idiom-representation and idiom-variation in English. 2006. xii, 326 pp.
16 Tsur, Reuven: ‘Kubla Khan’ – Poetic Structure, Hypnotic Quality and Cognitive Style. A study in mental, vocal and critical performance. 2006. xii, 252 pp.
15 Luchjenbroers, June (ed.): Cognitive Linguistics Investigations. Across languages, fields and philosophical boundaries. 2006. xiii, 334 pp.
14 Itkonen, Esa: Analogy as Structure and Process. Approaches in linguistics, cognitive psychology and philosophy of science. 2005. xiv, 249 pp.
13 Prandi, Michele: The Building Blocks of Meaning. Ideas for a philosophical grammar. 2004. xviii, 521 pp.
12 Evans, Vyvyan: The Structure of Time. Language, meaning and temporal cognition. 2004. x, 286 pp.
11 Shelley, Cameron: Multiple Analogies in Science and Philosophy. 2003. xvi, 168 pp.
10 Skousen, Royal, Deryle Lonsdale and Dilworth B. Parkinson (eds.): Analogical Modeling. An exemplar-based approach to language. 2002. x, 417 pp.
9 Graumann, Carl Friedrich and Werner Kallmeyer (eds.): Perspective and Perspectivation in Discourse. 2002. vi, 401 pp.
8 Sanders, Ted, Joost Schilperoord and Wilbert Spooren (eds.): Text Representation. Linguistic and psycholinguistic aspects. 2001. viii, 364 pp.
7 Schlesinger, Izchak M., Tamar Keren-Portnoy and Tamar Parush: The Structure of Arguments. 2001. xx, 264 pp.
6 Fortescue, Michael: Pattern and Process. A Whiteheadian perspective on linguistics. 2001. viii, 312 pp.
5 Nuyts, Jan: Epistemic Modality, Language, and Conceptualization. A cognitive-pragmatic perspective. 2001. xx, 429 pp.
4 Panther, Klaus-Uwe and Günter Radden (eds.): Metonymy in Language and Thought. 1999. vii, 410 pp.
3 Fuchs, Catherine and Stéphane Robert (eds.): Language Diversity and Cognitive Representations. 1999. x, 229 pp.
2 Cooper, David L.: Linguistic Attractors. The cognitive dynamics of language acquisition and change. 1999. xv, 375 pp.
1 Yu, Ning: The Contemporary Theory of Metaphor. A perspective from Chinese. 1998. x, 278 pp.