Stefan Sonvilla-Weiss (ed.)
MASHUP CULTURES
Professor Dr. Stefan Sonvilla-Weiss ePedagogy Design – Visual Knowledge Building Aalto University Helsinki / School of Art and Design
With financial support of the Austrian Federal Ministry of Science and Research, Vienna, and Aalto University, School of Art and Design, Helsinki.

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically those of translation, reprinting, re-use of illustrations, broadcasting, reproduction by photocopying machines or similar means, and storage in data banks.

Product Liability: The publisher can give no guarantee for all the information contained in this book. The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

© 2010 Springer-Verlag/Wien
Printed in Germany
SpringerWienNewYork is part of Springer Science+Business Media
springer.at
Cover Illustration and Design: Stefan Sonvilla-Weiss
Copyediting: Cindy Kohtala
Printing: Strauss GmbH, D-69509 Mörlenbach
Printed on acid-free and chlorine-free bleached paper
SPIN: 12725670
With 41 figures
Library of Congress Control Number: 2009942196
ISBN 978-3-7091-0095-0
SpringerWienNewYork
Contents

Acknowledgements
Stefan Sonvilla-Weiss: Introduction: Mashups, Remix Practices and the Recombination of Existing Digital Content
Mizuko Ito: Mobilizing the Imagination in Everyday Play: The Case of Japanese Media Mixes
Torsten Meyer: On the Database Principle: Knowledge and Delusion
Doris Gassert: Interactive Non-Linear Narration and Spectatorship
David Gauntlett: Crowdsourcing and Social Networking
Christina Schwalbe: Change of Media, Change of Scholarship, Change of University: Transition from the Graphosphere to a Digital Mediosphere
Wey-Han Tan: Playing (with) Educational Games – Integrated Game Design and Second Order Gaming
Owen Kelly: A Model for a Media Education Curriculum
Juha Varto and Tere Vadén: Tepidity of the Majority and Participatory Creativity
Brenda Castro: The Virtual Art Garden
Axel Bruns: Distributed Creativity: Filesharing and Produsage
Henry Jenkins: Remixing Moby Dick
Eduardo Navas: Regressive and Reflexive Mashups in Sampling Culture
Joni Leimu and Noora Sopula: A Classroom 2.0 Experiment
Stefan Sonvilla-Weiss: Communication Techniques, Practices and Strategies of Generation Web n+1
Acknowledgements

First and foremost I would like to thank all the authors who contributed so heartily to the successful completion of this volume. It has been a great pleasure to experience the motivating and encouraging support from the expanding community around the international MA study programme ePedagogy Design – Visual Knowledge Building at Aalto University – School of Art and Design Helsinki in cooperation with the University of Hamburg. Inseparably connected with this is our shared interdisciplinary curriculum and exchange of information and knowledge, which has greatly benefited our students, lecturers, professors and stakeholders. Thus my gratitude goes to the “Hamburg team”, Prof. Torsten Meyer, Wey-Han Tan, Christina Schwalbe and Ralf Appelt, and the “Helsinki team”, Owen Kelly, Prof. Martti Raevaara and Prof. Juha Varto, for substantially supporting the idea, implementation, maintenance and sustainability of this international study and research collaboration. Since my intention was to bring this community into a new and refreshing dialogue with international cutting-edge thinkers and scholars working in media-theoretical, -practical and -educational contexts, I am indebted to Henry Jenkins, David Gauntlett, Mizuko Ito, Axel Bruns and Eduardo Navas for spontaneously accepting the invitation to contribute to this book, which appears on the occasion of the fifth anniversary of this international study programme. In a similar vein, I’d like to thank my master’s and doctoral students and graduates for their valuable input and research projects, for which this publication should give enough stimuli and motivation to continue their work. Mashup is also a project work topic students are engaged with this year, and the project results will be discussed and further developed at our next international seminar. Special thanks go to Cindy Kohtala, who patiently proofread all texts and the final manuscript, and to Owen Kelly for his review and critical remarks. Last but not least, I’d like to thank my wife Barbara for the countless inspiring “garage talks” and motivating support during the life-span of this book project.
Introduction: Mashups, Remix Practices and the Recombination of Existing Digital Content
The genesis of this volume is partly owed to the fact that the international study programme ePedagogy Design – Visual Knowledge Building is celebrating its fifth anniversary – and what could be a better symbolic, practical and intellectual present to myself, my students, co-workers and affiliates on this very occasion? At first my intention was to bring together well-known international experts with whom we had partly collaborated in our study programme and to confront them with our community of experts and students, so as to trigger a vibrant discussion across borders and disciplines. What soon became clear to me was the importance of highlighting the interdisciplinary approach of the study programme while at the same time proposing a good deal of critical reflection and rethinking of what we have achieved so far. It is part of an international study programme, especially in our case, to constantly readjust modes of communication and collaboration across the great variety of expertise and cultural backgrounds the international student community holds. As the study programme is situated at the intersections of art, media and education, it appeared natural to group together expert voices from these fields under the unifying umbrella of exploring, together with students, teachers and experts, key concepts, ideas and paradigms of participatory media culture. There are two main reasons why I chose Mashup Cultures as the title for this book: the first is connected to the definition of mashup, which in Web development denotes a combination of data or functionality from two or more external sources to create a new service (in the case of this compilation, hopefully new insights); the second puts the cultural dimension into the foreground, as these developments permeate almost all cultural techniques and practices on a global scale. If we consider mashup as a metaphor for parallel and co-existing ways of thinking and acting, rather than for exclusionary, causal and reductionist principles of either-or instead of as-well-as, then we might gain a broader understanding of the unique characteristics of the plural in mashup cultures. A historical comparison might also help to find distinguishable and discernible criteria for sometimes confusing terminologies, using the example of remix practices. In retrospect we can ascribe to these practices certain techniques (collage, montage, sampling, etc.) and different forms of appropriation within specific socio-cultural contexts, for example John Heartfield’s political photomontages in the 1930’s, or James Tenney’s early sampling of Elvis Presley’s “Blue Suede Shoes”
in the 1960’s. Yet how these cultural practices differ significantly from today’s mashup cultures can be outlined as follows:

a) Collage, montage, sampling and remix practices all use one or many materials and media – either from other sources and art pieces (visual arts, film, music, video, literature, etc.) or from one’s own artworks – through alteration, re-combination, manipulation, copying, etc., to create a whole new piece. In doing so, the sources of origin may still be identifiable yet are not perceived as the original version.

b) Mashups, as I understand them, put together different information, media or objects without changing their original source of information; i.e. the original format remains the same and can be retraced in its original form and content, although recombined in new designs and contexts. For example, in the ship or car industry standardised modules are assembled following a particular design platform, or, in the case of Google Maps, different services are overlaid so as to provide the user with parallel accessible services.

c) Remix and mashup practices in combination can be considered as a co-evolving, oscillating membrane of user-generated content (conversational media) and mass media.
In other words, mashups follow a logic that is additive or accumulative in that they combine and collect material and immaterial goods and aggregate them into either manifested design objects or open-ended, re-combinatory and interactive information sources on the Web. In fine arts, Pointillism would be a good example to demonstrate how this technique is in sharp contrast to the more common methods of blending pigments on a palette or using commercially available premixed colours. This analytical painting technique is in fact analogous to the subtractive colour method in the four-colour CMYK printing process, or the additive colour methods that occur in television or computer monitors and stage lights, which use a “pointillist” technique to represent images through Red, Green and Blue (RGB) colours. The more pixels are displayed on a screen, the sharper the image – and analogously on the information or bit level: the more information is encoded between the bits and the output medium, the more detailed the representation. Moreover, the more complex the hard- and software – which is essentially defined by its usability and modes of interaction, algorithms, metadata, formats and protocols – the more difficult it is for the individual, on both the micro and macro level, to piece together these fragmented parts into a whole new picture, in the very metaphorical and practical sense. Defragmentation seems to be a key concept in networking culture, trying to re-establish alienated modes of common understanding through aggregation, augmentation, reconfiguration and combination of information – quite similar to what hard disk defragmentation does when physically organising the contents of the disk to store the pieces of each file close together and contiguously. From the standpoint of the information sciences, information is defined by its existence
as a bit – in Bateson’s formulation, “a difference that makes a difference.” This is an important aspect insofar as it holds an immanent power relationship: that of control within a complex system of hierarchical order and manipulative control mechanisms. Control of information and communication equates with control of code, leading again to fundamental questions: “Who is the owner of the code, where is it stored, and what are the consequences of misuse?” Code is the language of our time, as there is hardly any consumer article in our daily usage free of computer-supported, automated mass production chains. Our cultural production, with all its pluralistic modes of expression and formats, produces a parallel digital universe that is stored in and dispersed through a gigantic network of databases around the globe. Richard Stallman was right in his claim for “Free Software”, as it engages with non-exclusionary and non-hierarchical forms of co-evolvement of intellectual goods manifested in code, for the purpose of constant improvement in an open source environment. Many attempts have been made to translate open source principles into the broader cultural realm (cf. for example Lessig’s Free Culture), yet under the wrong premise of mixing up commodity value with intellectual value, which does not necessarily generate surplus value but rather exchange value. The value of code, then, is not generated through the multiplication of single-copy software products but instead through the exchange of different versions and modifications according to the specific needs a piece of software should fulfil. “You can buy a picture, but you cannot buy the image in it” might work as an analogy for liberating code from any kind of commodity boundaries. In other words, code must circulate freely to build upon collective intelligence, something that has been convincingly demonstrated as a successful business model in almost all big ICT branches. Free software does not mean that everything is for free: it can for example lead to new business models and products in which code remains free and modifiable by the open source community. Comparably, in mashup cultures the code that makes information and knowledge exchange possible must be maintained liberally as a public good. Important steps in this direction are APIs (Application Programming Interfaces), which allow Web communities to create an open architecture for sharing content and data between communities and applications. In this way, content that is created in one place can be dynamically posted and/or updated in multiple locations on the Web; for example, photos can be shared from sites like Flickr to social network sites like Facebook and MySpace. The interconnectivity of software applications and their users on the Web appears from today’s perspective to be a literacy with which most teenagers and prosumers are familiar. Yet the impact of such a remarkable media revolution as that of Web 2.0 on individuals and society at large can only be fully understood in a media-historical context: understanding what communication media have transformed, and how, within the complex interplay of perceived needs, competitive and political pressures, and social and technological innovations.
MASHUP FORMULA: API X + API Y = Mashup Z
API X + API Y = API Z
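Read as a formula, this says that a mashup Z composes the unchanged outputs of services X and Y, and may in turn expose its own API Z. A minimal sketch of the pattern, assuming two hypothetical JSON endpoints (the URLs and field names below are placeholders, not real services), might look as follows:

import json
from urllib.request import urlopen

# Hypothetical endpoints standing in for “API X” and “API Y”.
PHOTOS_API = "https://api.example-photos.test/photos?tag=helsinki"
PLACES_API = "https://api.example-geo.test/places"

def fetch_json(url):
    # Fetch a URL and decode its JSON payload.
    with urlopen(url) as response:
        return json.load(response)

def mashup():
    # Neither source is altered: the mashup only overlays the two services.
    photos = fetch_json(PHOTOS_API)   # API X: a list of photo records
    places = fetch_json(PLACES_API)   # API Y: coordinates per place name
    coords = {p["name"]: (p["lat"], p["lon"]) for p in places}
    # “Mashup Z”: each photo annotated with coordinates from the other service.
    return [{**photo, "coords": coords.get(photo["place"])} for photo in photos]

Both sources remain retraceable in the result, which is exactly the point made under b) above: the mashup recombines without rewriting.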
Key Concepts, Projects, Ideas in New Media History

Digital culture is driven by rapidly changing technologies, which have altered the entire media landscape over a period of less than 20 years at a never-before-seen pace. With the advent of the Internet and the popularisation of the WWW, for the first time in the history of humankind a communication technology based on soft- and hardware has paradigmatically altered the entire media landscape in size, scope, functionality, access and modes of interaction. In retrospect one dares to assert that mass media has lost its privileged status of media control and monopoly, despite one last defiant struggle of the old tycoons – and in return participatory media is on the verge of breaking through the prevalent one-way (one-dimensional) mass media information channel. At least this is the hope of those who plead for democratisation and active engagement of the user, who is now empowered by means of user-friendly technology to actively take part in the making and shaping of public opinion. In practice this would mean transforming the intimate media sphere of the couch potato into a self-directed interaction with a new and responsive medium. Although the hope for the abolishment of the divide between producers and recipients (consumers) remains an unfulfilled promise of new media technologies, early 20th century utopian expectations of a new medium for a new social system – through conversion of the conditions of communication from passive consumption to active participation – have greatly influenced recent media theory and practices (e.g. social media). In 1932, for example, Bertolt Brecht noted: “…the radio could inarguably be the best apparatus of communication in public life, an enormous system of channels – provided to function not only as a sender but also as a receiver. This means making the listeners not only listen but also speak; not to isolate them but to place them in relation to others.”1 Art as a collective form could in Brecht’s terms revolutionise the existing social system by collectivising senders and receivers. Accordingly, every loudspeaker could potentially be used as a microphone – and, as Enzensberger puts it, “every receiver” could be “a potential sender”.2 As a consequence of mass media critique, which pointed to the artificial separation between producer and consumer, Max Neuhaus, an American pioneer of sound art, in his first Public Supply project in 1966 combined a radio station with the telephone network and created a two-way public aural space twenty miles in diameter encompassing New York City, where any inhabitant could join a live dialogue with sound by making a phone call. Neuhaus’s project was quite remarkable insofar as his approach did not purely concentrate on the inherent possibilities of new media but rather on “proposing to reinstate a kind of music which we have forgotten about and which is perhaps the original impulse for music in man: not making a musical product to be listened to, but forming a dialogue, a dialogue without language, a sound dialogue.”3 In 1965 Ted Nelson published the conference paper “A File Structure for the
Complex, the Changing, and the Indeterminate”, in which he coined the term “hypertext”, i.e. “a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper.”4 In this groundbreaking work Nelson introduced a new concept of file operations based on entries, lists and links, called an Evolutionary List File (ELF), which for the first time in computer history allowed text, films, sound and video recordings to be arranged as non-linear systems. What seems nowadays a natural routine in daily computer work was more than forty years ago a revolutionary step forward in new media history, which a decade later peaked in Nelson’s book Computer Lib/Dream Machines – some call it the most important book in the history of new media. Essentially, if I may say so in regard to the complexity of his visionary thoughts, Nelson argued that computer experiences are media that need to be co-designed with the audience in mind as a creative process. Most striking was his proposal to place these design processes in a radical, open publishing network that ought to support reconfiguration, comparison and interconnection, which lent leeway to his ongoing project Xanadu.5 Roy Ascott, in his seminal article “The Construction of Change”6 that first appeared in 1964, conveys a bold statement on the interdependency of art, science and education in the form of a progressive cybernetic concept of teaching and learning. Cybernetics, Ascott claims, is not only changing our world, it is presenting us with qualities of experience and modes of perception which radically alter our conception of it. Science can inform art, yet the goal is not to produce a scientific work but to substantiate empirical findings and intuitions with profound analysis and reason. Creative synthesis would then be the final stage of putting together a coordinating matrix of images, concepts, annotations and ideas found in the products of science with an understanding of the concepts behind them. Ascott’s didactical concept in his work with art students puts great emphasis on individual creative identity, and on a flexible structure and arrangements within which everything can find its place, every individual his way (ibid., p. 132). In 1985, Richard Stallman published “The GNU Manifesto”, which in my opinion not only revolutionised software culture but, to a much greater extent, fundamentally altered our concepts of material and immaterial goods, in favour of liberating source code from any kind of ownership towards the concept of open source. I will thus reproduce some of his key thoughts leading to the Free Software Movement:
– You have the freedom to run the program, for any purpose.
– You have the freedom to modify it to suit your needs.
– You have the freedom to redistribute copies, either gratis or for a fee.
– You have the freedom to distribute modified versions of the program, so that the community can benefit from your improvements.
Since “free” refers to freedom, not to price, there is no contradiction between selling copies and free software. In fact, the freedom to sell copies is crucial: collections of free software sold on CD-ROMs are important for the community, and selling them is an important way to raise funds for free software development. Therefore, a program that people are not free to include on these collections is not free software.7
Media Literacy and Remix Cultures

In her chapter “Mobilizing the Imagination in Everyday Play: The Case of Japanese Media Mixes”, Mizuko Ito investigates how and why the spread of digital media and communications in the lives of children and youth has driven an increasingly interactive and participatory media ecology, where Internet communication ties together both old and new media forms. She asserts that a growing recognition of this role of digital media in everyday life has been accompanied by debate as to the outcomes of participation in convergence culture. Although concerns about representation are persistent, particularly regarding video game violence, many of the current hopes and fears of new media relate to new forms of social networking and participation. As young people’s online activity changes the scope of their social agency and styles of media engagement, they also encounter new challenges in cultural worlds separated from traditional structures of adult oversight and guidance. Thus, issues of representation will continue to be salient in media old and new, but modes of participation and interaction are undergoing a fundamental set of shifts that are still only partially understood and recognised. Mizuko Ito uses the cases of Yu-Gi-Oh, Hamtaro and Pokemon as examples of how broader trends in children’s culture and new media are manifest in Japanese-origin media and media cultures that are becoming more and more global in their reach. She argues that part of the international spread of Japanese media mixes is tied to the growth of more participatory forms of children’s media cultures around the world. At the same time she identifies certain areas of specialisation in different national contexts: for example, the US media ecology continues to remain dominant in film- and internet-based publication and communication, whereas in Japan a particular media mix formula has emerged. Thus the focus in this chapter is on outlining the contours of these shifts. How do young people mobilise the media and imagination in everyday life? And how do new media change this dynamic?
Database Logic

In “On the Database Principle: Knowledge and Delusion” Torsten Meyer investigates how and to what degree a cultural leap in our perception of and interaction with
the digital and analogue world can be grounded in the shift from the perspective to the database as a symbolic form, as paradigmatically argued by Lev Manovich. Along with the disruption of linear perspective as an appropriate medium to make visible all the aspects that artists wanted to make visible, or that can be (re-)presented in general, the turn of the last century more generally marked out the so-called “crisis of representation”. The subjective view superseded the linear perspective interpretation of what is outside ourselves (our worldview). The logic of the database apparently holds a strong notion of the current historical a priori, which constitutes how and what we can see. Consequently, Meyer dwells on how digital formats and their representations – based on discrete variables, in contrast to continuous variables in analogue representations – make obsolete the classical ontological triplet of truth, being (existence) and creation, in favour of new discursive forms in which a digital medium equates with the post-industrial triplet of fiction, simulation and medium (techno-aesthetics). In that digital information is scalable, infinite, non-material, simultaneous and ubiquitous, the reference system is no longer “the real” but a sign system (semiosis) based on source and object code. The database Meyer is most interested in has no form and cannot be transformed into a story, a narration or a history without a concrete, detailed query. Using the example of Christopher Nolan’s montage technique in his seminal film Memento, he scrutinises – through experiments with cutting the movie into 44 pieces – whether a non-linear, multidirectional story generated through database logic will ever succeed in becoming a knowledge-generating text if there is no longer an author, a story or a theme, let alone moral or order – neither a defined series of data or thereby presented objects, nor words or sentences that would render the objects or data presentable, etc.: everything inherent to a story – as a narration or history – is missing.
Interactive Non-Linear Narration and Spectatorship

Doris Gassert conducts a profound analysis of the aesthetic appreciation of the movie Fight Club, which has regained fresh momentum in current academic discourse. She considers the hybrid moving images and the influence of digital effects on narrative in Fincher’s movie as paradigmatic for the reorganisation of image production and cinema’s established conventions of ‘seeing’. The digital image can seamlessly merge live-action with computer-generated shots to generate new hybrid moving image forms that can be entirely restructured, allowing for an infinite variety of multiple layers and composite structures. These new cinematic images present – when analysed with the knowledge of the underlying digital production process – hybrid mashups that expose the digital transformations as subliminal yet conceptual re-configurations that allow for new possibilities to frame and thus transgress traditional configurations of ‘eye, gaze and image’.
In Gassert’s view Fight Club’s narration creates a tension between analogue and digital manipulation with a subliminal trick causing corporeal effects: children are confused and start to cry, while adults show feelings of vague discomfort. In contrast to the passive ‘consumers’ who remain immobile in their seats, “the DVD crowd” asks for a new form of spectatorship, which is to a certain degree interactive. That is, the audience of the ‘digital era’ is equally invited to ‘manipulate’ the narrative (on DVD) by rewinding, fast-forwarding, progressing frame by frame and freezing them so as to get a glimpse of the ‘cues’ Fincher has spliced in – and which turn out to be clues for the overall understanding of Fight Club’s narrative (and visual) logic.
Crowdsourcing and Social Networking

When I first contacted David Gauntlett to ask him if he would be interested in contributing to this book, the initial idea was to interview him – in this case, to put forward a few questions on mashup-related topics. After emailing each other on how to proceed, I came up with the idea of conducting the interview more unconventionally, purely with visual arguments based on heavily semantically connoted image montages – albeit knowing this would probably be asking too much. Alternatively, David came up with the idea of opening up the interview, in the spirit of “Mashup Cultures”, to the social media sphere, i.e. to invite people to send him questions via Twitter and Facebook. As David notes, it is not truly a mashup, but it is at least questions coming together from different sources, and from people around the world. So it is actually another buzzword – crowdsourcing. The questions arrived from different places in Europe, the United States and Australia, and were sorted into sequences beginning with a discussion of ‘Web 2.0’ and whether it is a useful or distinctive term, then turning to ethical issues, implications for education, the ‘making is connecting’ project, and academic public engagement. In conclusion, David Gauntlett encountered questions in the area of Web 2.0 that are not merely technology-centred but rather deal with the ethics and behaviour of the human beings using them. Specifically in academic communication, a notable generation-driven shift can be observed towards open discussions, sharing and exchange on the Web, presupposing a high degree of ethos in work and social relationships.
Universities’ New Role in a Networked Society

In her chapter on “Change of Media, Change of Scholarship, Change of University: Transition from the Graphosphere to a Digital Mediosphere”, Christina Schwalbe observes a far-reaching process of cultural change in higher education that is closely connected to the rapid technological developments in the field of
digitally networked media. Along with changes in networked communications, universities are forced to adapt to new forms of collaboration, communication and connecting inside and outside the academic world. In search of possible causes, she borrows a cultural theoretical perspective from Régis Debray’s mediology, providing a methodical basis for her research on the correlations of technology, culture and society. Accordingly, the focus is not on the media per se but on the processes of mediation and transmission; i.e. the changing and rapidly growing online media landscape requires new didactical concepts and scenarios in response to changing practices of knowledge – but, as Schwalbe concludes, the university as a whole is challenged by a cultural leap, holding a big question mark as to what the structural answers of the university to the introduction of a digitally networked medium will be in the near future.
Second Order Gaming

Wey-Han Tan in his chapter “Playing (with) Educational Games – Integrated Game Design and Second Order Gaming” gives a brief overview of how playing can be interpreted as an integrated activity of toying, game creation and game play, with liberating, reconstructive, reflective and innovative aspects. The need for an Integrated Game Design is grounded in the necessity of the rule system and the narrative structure for an adequate situating of content. By asking how different gaming modes raise awareness of the player’s boundaries and contexts of games, and how internal boundaries and contexts inform the player’s mode of acquisition, Tan calls for a specific mode of design and playing, which he calls Second Order Gaming. This is presented in three approaches: metagaming, transmediality and unusability. The potential that second order gaming may have in future game-based learning approaches lies in gamers’ creative design contributions, their reinterpretation of regulative and narrative elements, and their transgression and modification of medial boundaries – properties emerging in networked communities, in which a toy approach can take advantage of metaphors and motivation to experiment with these properties in game contexts and in other digital applications.
A Model for a Media Education Curriculum

In his chapter Owen Kelly chooses a non-linear narrative, which is borrowed from biji – a genre in classical Chinese literature. In the quasi-style of this ancient book type, Kelly paraphrases the original concept – which contains anecdotes, quotations, random musings, philological speculations, literary criticism and almost everything the author deems worth recording – to describe, in the form of a mashup, the genesis of the five-year project called Marinetta Ombro. This project, developed by Kelly together with a colleague at Arcada University of Applied Sciences in 2002,
aimed to create a metaphorical platform for learning that could serve as the basis for a media education curriculum. The challenges of building the virtual island Rosario lay in bringing together fact and fiction, moulded into a culture in which the pedagogical objectives were deemed rather challenging, inasmuch as the virtual environment should comprise a variety of functionalities and educational simulations paralleling real-life scenarios. Moving the project to Second Life eased technical constraints and rendered new possibilities for visualising the complex culture of the diverse user groups that came into existence over the course of time. By engaging this wider community within an emerging pedagogical framework, the virtual island of Rosario turned into a flourishing philosophical laboratory and “ontological Petri dish” for both the university and the virtual college La Kolegio Ilana. When the project reached a certain degree of saturation among its users, the educational focus became increasingly blurred, although individual creative interventions greatly enriched existing project strategies. The next steps in the project genesis foresee the use of Second Life as an alternative to video-conferencing, which is intended as a deliberate shift away from the virtual culture back into the delusional world of consensus reality.
Wikification and Hermeneutics

Juha Varto and Tere Vadén in “Tepidity of the Majority and Participatory Creativity” ponder the next step in knowledge building after the postmodern condition. It appears that a distinction between the medium and the mediated is becoming increasingly obsolete, as floating signs almost randomly and indistinguishably hook onto different forms of media, and the effects of one medium may be mediated by the use of another medium. As a consequence of the ongoing evolution of media, which is currently peaking in never-before-seen ubiquitous and mobile user access and interaction, Varto pursues the question of whether this movement of non-hierarchic belief in the viability of anything – at least in the virtual world – would eradicate our epistemological and ontological fundament. The main epistemological impacts of the wikification of information are identified by Vadén in the commonly accepted imperfection of Wikipedia, as this encyclopedia is in constant flux and collectively developed in joyful co-creation. But then the deletionist tendency inside Wikipedia privileges only “important” versus “no-original-research” articles, taking away the more radical potential of open content. One option to resolve this restriction could be the forking of Wikipedia into politically, ideologically, socially or geographically motivated versions, which would in Vadén’s view mark out the next step towards democratisation. However, existing media culture asks for new strategies and concepts to refrain from normative standards, power and control, in support of reconfiguring existing
paradigms, by for example “corresponding to the ideology of capitalism-with-X: capitalism-with-Russian-values, capitalism-with-Chinese-communism, and so on”, as Vadén puts it.
Artistic Tools and Collaborative Practices on the Web

Brenda Castro in her chapter “The Virtual Art Garden” explores the possibilities of designing a virtual learning tool that aims at motivating distance learning communities in art, design or related fields of study to share visual learning material and work collaboratively. She puts specific emphasis on the Graphical User Interface (GUI), accessible with a computer or mobile browser, and on the modes of interaction: linking artworks with each other, organising, juxtaposing and tagging them according to user needs. In a field study accompanying this research, Castro involved the target groups in the design process, which helped to shape the learning opportunities for students of Art and Design so as to promote collaborative practices and participatory learning through visually-based interaction and through the building of shared identities. In pursuit of networked practices, Castro engages with core concepts of virtual communities, which increasingly determine the hermeneutics of society as a dynamic process of information exchange and meaning making. Innovative online tools hasten the speed of global communication and – at least on a conceptual level – accelerate the transformation of cultures into communities of practice with the aim of sharing, transforming and producing information. The result of her project is a hi-fi prototype for a desktop Web browser, with development potential for a mobile version.
Produsage

Axel Bruns in his chapter “Distributed Creativity: Filesharing and Produsage” introduces the concept of produsage, a neologism describing an ongoing, never finished process of content development by a vast community of users and producers who apply remixing practices in pursuit of new possibilities, and whose artefacts are digital objects. Examples of these practices are Open Source, Wikipedia and YouTube, which in Bruns’s view provide an excellent opportunity to trace the emergence and development of cultural memes from the initial to the various encoded and permutated forms of cultural expression. On the basis of these and other significant examples, Bruns argues that produsage is about establishing a kind of organisational structure for community-driven, collaborative content creation online, leading to significant new creative and informational resources that are challenging mass media industries through a number of key universal principles: for example, open participation and communal evaluation; fluid heterarchy and ad hoc meritocracy; unfinished artefacts and continuing
process; and common property and individual rewards. As these principles can already be observed in a wide range of produsage projects and environments, Bruns advocates, with regard to viable forms of cultural music preservation, a structure of content curation which, unlike museums or archives, engages the participants in compiling a comprehensive archive of bootleg recordings from throughout an artist’s career, so as to ensure the best fidelity achievable and to safeguard continuing circulation within a filesharing network.
Remixing

Henry Jenkins in his chapter “Remixing Moby Dick” unfolds a broad range of theoretical and practical explorations of new media literacies, to better understand the kinds of learning (formal and informal) and the skills necessary to meaningfully sample and remix the contents of our culture for new expressive purposes. In Jenkins’s terms, a participatory culture is one where there are relatively low barriers to artistic expression and civic engagement; where there is strong support for creating and sharing one’s creations with others; where there is some form of informal mentorship whereby what is known by experienced community members is passed along to novices; where each member believes their contributions matter; and where members feel some degree of social connection to each other. Operating in a globally connected world, where more and more people use powerful new tools to express themselves and to disperse their ideas on the Web, asks for greater sensitivity regarding what it means to be an author or a reader and how the two processes are bound up together. With regard to the increasing media empowerment of young people, Jenkins asks what it means to teach canonical works so that young people learn how to read in order to know how to create, and to understand that the works they consume are resources for their own expressive lives. As a consequence they seek to internalise meanings in order to transform, repurpose and recirculate them, often in surprising new contexts – and likely, Jenkins aptly concludes, “this may be one of the core insights we take from Ricardo Pitts-Wiley’s Moby-Dick: Then and Now Project – nothing motivates readers like the prospect of becoming an author or a performer of your own new text. In this context, literacy is no longer read as a set of personal skills; rather, we understand the new media literacies as a set of social skills and cultural competencies, vitally connected to our increasingly public lives online and to the social networks through which we operate.”
Regressive and Reflexive Mashups

Eduardo Navas in his chapter “Regressive and Reflexive Mashups in Sampling Culture” examines remix and mashup practices in different media, and in detail in
their usage in Web applications, so as to enhance a broader understanding of sampling as a critical practice in Remix and Critical Theory. In approaching a definition of mashups, he differentiates between reflexive (e.g. Web 2.0 applications), regressive (e.g. music remixes) and regenerative (discursive and dynamic) forms, in order to reconsider current remix in both modern and postmodern contexts. Navas further argues that mashups, whether they are regressive or reflexive, are dependent on sampling. Introducing a broad range of historical examples and comparing them with today’s Web 2.0 applications, he makes an additional differentiation between citing and/or copying from a source. As mashups on the Web dynamically update information generated by service providers and user-generated content creators, they sharply contrast with regressive mashups in music, because in Navas’s view the latter sample to present recorded information which immediately becomes meta information, meaning that the individual can then understand it as static, knowing it can be accessed in the same form over and over again. This is an important observation, as the spatiotemporal dimension in new media holds both the utopia of perdurability in archives, monuments and libraries, and the utopia of the “global village”, “telepresence” and “ubiquity” to overcome space and time. According to Kittler, the aspect of space is associated with transmission whereas the aspect of time is associated with data recording; the third function of media, that of information processing – if we consider our vast digital archives in combination with dynamically generated forms of just-in-time information – will eventually merge into new fields of cultural production.
Future Learning Spaces and Techniques

Joni Leimu and Noora Sopula in their chapter “A Classroom 2.0 Experiment” discuss ubiquitous computing and its effects on learning, drawing on their joint MA thesis work. More specifically, they have planned and built a ubiquitous learning environment and looked into the numerous possibilities this 21st century classroom has to offer. The most prevalent problems encountered in their research concern, amongst others, the issues of telepresence, interaction and interface design, ambient technologies, open source, and remotely organised real-time collaboration in different learning contexts. They have intensively studied touch-oriented technologies and solutions and their relation to a ubiquitous learning environment, and they also describe in detail the process of building a set of proof-of-concept solutions as an interactive multi-touch-screen computer system. The outcome is an intuitive interface including audio-visual communication, multi-touch and body language. In addition, wall projections are also interactive and the computer can be controlled with hand gestures; an important role is given to online participation, so that learners who are not physically present can also take part in learning sessions via the Internet.
New Communication Techniques, Practices and Strategies

In my chapter “Communication Techniques, Practices and Strategies of Generation Web n+1” I dwell on how important it is to understand that the meaning of technology is not inherent in the technology itself but arises from how technology is used. Clearly, the meaningful use of everyday technology requires contextual and organisational as much as spatial/physical skills, as well as competences in shaping action and in providing people with the means to interpret and understand action. Whether it concerns human-human or human-machine communication, the meaning of action accrues from interaction. The temporal context is also involved, insofar as novel forms of expression and strategies of encoding and decoding multimodal information (e.g. chatting, teleporting in Second Life, short messaging, etc.) gain their meaning and intelligibility from being interpreted as part of a larger pattern of activities. This suggests that the meaning of the use of technology is in permanent flux and thus prone to readjustments, in order to be able to support the communication of meaning through it within a community of practice. A new species, the social networker, has come into being. He/she is a multitasking information producer and manager, a multimedia artist and a homepage designer, an actor and a director of self-made videos, an editor and an author of his/her blog, and a moderator and an administrator of a forum, to name only a few characteristics. Social networkers select and publish their own information and put it straight from other networkers’ flows directly into their own communities. These forms of interaction require personal communication skills and competences to judge information for its relevance and added value by sharing it with others. In my attempt to describe some of the prevalent techniques and tools applied to “Web n+1” technologies, I try at the same time to identify their main characteristics and how they can be approached as discernible techniques and strategies in communication culture.
Notes

1 Brecht, B. (1967). Der Rundfunk als Kommunikationsapparat. In: Gesammelte Werke 18. Schriften zur Literatur und Kunst I, Frankfurt am Main, pp. 127-134.
2 Enzensberger, H. (1970). Baukasten zu einer Theorie der Medien. In: Kursbuch, Bd. 20, p. 116. Frankfurt: Suhrkamp.
3 Neuhaus, M., The Broadcast Works: Public Supply, http://www.max-neuhaus.info/audio-video/index_cpr.htm (Accessed 12.01.2009).
4 Nelson, T. (1965). A File Structure for the Complex, the Changing, and the Indeterminate. In: Wardrip-Fruin, N. and Montfort, N. (2003), The New Media Reader, p. 144. Cambridge, Massachusetts: The MIT Press.
5 http://www.xanadu.net/
6 Ascott, R. (1964). The Construction of Change. In: Wardrip-Fruin, N. and Montfort, N. (2003), The New Media Reader, pp. 128-132. Cambridge, Massachusetts: The MIT Press.
7 Stallman, R. (1985). The GNU Manifesto. In: Wardrip-Fruin, N. and Montfort, N. (2003), The New Media Reader. Cambridge, Massachusetts: The MIT Press.
Distributed Creativity: Filesharing and Produsage
The culture of mashups examined by the contributions collected in this volume is a symptom of a wider paradigm shift in our engagement with information – a term that should be understood here in its broadest sense, ranging from factual material to creative works. It is a shift that has been a long time coming and has had many precedents, from the collage art of the Dadaists in the 1920’s to the music mixtapes of the 70’s and 80’s, and finally to the explosion of mashup-style practices that was enabled by modern computing technologies. To claim, then, that there has been a rapid and unprecedented transformation during the past decades, from audiences as passive consumers of media to users as active content creators, is necessarily an oversimplification – and yet, the rhetoric emanating from the music and movie industries, amongst others who see their established positions threatened by the rise of user-generated content, seems to claim just that, and appears to harken back to some mythical good old days when audiences were still acting their part as “nothing more than a giant maw at the end of the mass media’s long conveyor belt, the all-absorbing Yin to mass media’s all-producing Yang”, as Clay Shirky has so memorably put it (1999). What has really happened is that the increasing availability of symmetrical media technologies – of networks like the Internet that afford their participants an equal chance to have their message heard – has simply amplified the existing cultural activities of independent fans and artists, to an extent that they now stand side by side with (and sometimes overshadow) the cultural output sanctioned by conventional publishers. Artful, clever or simply funny mashups, news about which is spread by word of mouth, may now attract as much as or even more attention than the original source material they draw from, comment on, or send up. This is a trend that is by no means limited to artistic pursuits, of course – the rise of citizen journalism has been built on its ability on occasion to provide more insightful commentary and more fruitful discussion than conventional news publications (Bruns, 2005); open source software is seen to be more stable and reliable – and much cheaper – than many commercial products; and Wikipedia has become the world’s preferred source for encyclopaedic information in less than a decade (Bruns, 2008). Each such case, and many beyond these iconic examples, has had its unheralded predecessors – pamphlet writers, amateur software authors, or independent enthusiasts for specific areas of knowledge – whom only a lack of appropriate technological support kept from achieving major recognition in their own right. Most importantly, what technological support for such independent activities has enabled is that these activities need no longer take place in isolation but can be aggregated – that groups of participants can pool their resources, coordinate
their efforts, and develop central platforms from which their outcomes can be disseminated to the wider world. The availability of such technology, then – today in the shape of what we have come to describe as Web 2.0 and social media – does not determine the success of such collaborative projects in achieving their aims, of course; for every Wikipedia there are a multitude of failed social media initiatives which for one reason or another did not manage to attract a committed and sizeable community of participants. Within the wider field of online collaboration as well as in the specific area of creative mashup upon which this chapter will focus, there is a range of other, more important factors that influence the fate of such initiatives. However, these collaborative, communal projects can substantially benefit from utilising the available online technologies effectively. Collaborative efforts to engage in creative, artistic mashups can be described as a form of distributed creativity: they are projects that harness the creativity of a large range of participants to build on and extend an existing pool of artistic materials. Such projects include ccMixter, the music sharing site operated by the Creative Commons group: here, individual musicians (more recently also including a few of the more progressive artists in the mainstream, from Nine Inch Nails to Radiohead) are able to upload their own recordings under an appropriate Creative Commons licence, which allows other members of the community to build on their work by adding further instrumental or vocal tracks, remixing the material, or using it in other ways in their own compositions (Stone, 2009). The site provides the functionality to track such re-use, making it possible for users to trace the artistic genesis of the complete song from a single violin solo to a fully-featured ensemble piece, for example – performed and produced quite possibly by musicians who have never met in person. Other projects explore similar opportunities for photos and videos, using sites such as Flickr and YouTube to coordinate the joint effort, or – like Pool.org.au, the user-generated content site operated by the Australian public broadcaster ABC (ACID, 2009) – provide their own platforms for collaborative multimedia work.
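The re-use tracking mentioned here can be pictured as a simple derivation graph. The sketch below models it minimally; the record structure and field names are illustrative assumptions, not ccMixter’s actual data model:

from dataclasses import dataclass, field

@dataclass
class Upload:
    # One track on a ccMixter-like sharing site (hypothetical model).
    title: str
    artist: str
    licence: str                                   # e.g. "CC BY 3.0"
    sources: list = field(default_factory=list)    # uploads this remix builds on

def lineage(track, depth=0):
    # Print the derivation chain of a remix back to its original sources.
    print("  " * depth + f"{track.title} ({track.artist}) [{track.licence}]")
    for src in track.sources:
        lineage(src, depth + 1)

violin = Upload("Violin solo", "alice", "CC BY 3.0")
vocals = Upload("Vocal take", "bob", "CC BY 3.0")
ensemble = Upload("Ensemble piece", "carol", "CC BY 3.0", sources=[violin, vocals])
lineage(ensemble)   # walks from the finished piece back to the single violin solo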
Collaborative Mashups as Produsage

Such community efforts at collaborative content creation form part of the wider phenomenon of audiences becoming more visibly and more thoroughly active in creating and sharing their own content than ever before, as we have described it above. But for the most part, this is not just a case of more participants becoming active content producers in any conventional sense: many participants in such joint efforts are no more producers of the outcomes of such projects than the individual assembly line worker can be said to be the producer of the car that finally rolls off the production line. The contributor fixing a formatting error here or there in Wikipedia or the programmer tracking down an obscure bug in an open source package – and even the
musician mixing together a number of separate tracks found on ccMixter – do not very well fit the conventional image of the content producer (or the artist) as an entity that exists in separation from distributor or consumer: rather, their acts of participation merely form part of an ongoing stream of content development and content improvement. They come to the collaborative space first and foremost as users, but it is also easy for them to become engaged in content creation; they occupy a hybrid position as user and producer at the same time – they are produsers (Bruns, 2008). By extension, produsage, the collaborative, communal practice of content creation in which they engage, describes not a conventional production process that is orchestrated and coordinated from a central office and proceeds in a more or less orderly fashion to its intended conclusion (the completion of the finished product), but instead constitutes an always ongoing, never finished process of content development and redevelopment, which on occasion may fork to explore a number of different potential directions for further development at one and the same time. It is a continuous process of remixing and/or writing over what has come before, in pursuit of new possibilities, whose artefacts are digital objects that resemble medieval palimpsests – multi-layered texts that still bear the imprints of the generations of scribes whose successive efforts have led us to the current point. Open source works that way, as does the Wikipedia (whose edit histories chronicle every changed comma, every fixed typo) – but so do mashups: YouTube, for example, provides an excellent opportunity to trace the emergence and development of cultural memes from the initial, notorious video clip to a host of mashups, parodies, reinterpretations using Lego or in the style of Star Wars, and further (see e.g. Burgess & Green, 2009). Through these and other leading examples, produsage is rapidly establishing itself as the standard mode of organisation for community-driven, collaborative content creation online; produsage communities are building significant new creative and informational resources and in doing so are beginning to challenge the established industries in their fields. Across their very different thematic preoccupations, these produsage efforts are predicated on a number of key universal principles:
– Open participation, communal evaluation: produsage is based on the collaborative engagement of (ideally, large) communities of participants in a shared project. The community engages in a continuous peer review of all participants’ contributions.
– Fluid heterarchy, ad hoc meritocracy: members of a community of produsage participate as is appropriate to their personal skills, interests, and knowledge; such participation further changes as current points of focus for the produsage project change.
– Unfinished artefacts, continuing process: content artefacts in produsage projects are continually under development and therefore always unfinished; their development proceeds along evolutionary, iterative paths.
– Common property, individual rewards: produsage adopts open source- or creative commons-based licence schemes which explicitly allow the unlimited use, development, and further alteration of each user’s individual contribution to the communal project.
These principles can be observed in a wide range of produsage projects and environments, and it is those environments which adhere most closely to these foundational principles that tend to be most successful in the long term (Bruns, 2008). In our reading of mashup culture as a form of produsage, then, it is the last of these principles which turns out to be most problematic. The very idea of the mashup implies the existence of prior content to be remixed and remade – and while communities such as ccMixter and ABC Pool have gone to great lengths to track the provenance of their materials and ensure that only appropriately licenced source materials are incorporated into their own efforts (in this they closely follow the example of open source software, whose very continued existence depends on averting any potential threat of legal action over copyright or patent infringements), the same cannot be said even with remotely comparable certainty about other mashup initiatives. Far from it, indeed: perhaps the majority of mashups, from political parodies that remix news footage to musical styles that are predicated on the use of sampled sounds, depend on source materials of dubious legal status. The arguments surrounding such mashups are well-rehearsed and need not be repeated in any detail here: those defending mashups in the debate cite ‘fair use’ provisions (especially for satirical uses) that continue to exist as copyright exceptions in the intellectual property rights legislation in various countries, point to the often minute nature of the seconds-long musical samples used in hip hop and other musical styles, or highlight the inherent artistic and cultural value of the new works which are created in the process; those taking the opposite view express their right to protect themselves against what they perceive as copyright violations and often seek to further limit ‘fair use’ exceptions. A major cause célèbre in this context is DJ Danger Mouse’s Grey Album, which mashed up the album The Beatles (colloquially known as The White Album) and rapper Jay-Z’s Black Album to create a new work in its own right, but without explicit permission from either of the original copyright holders (Lessig, 2004). The case, which has received substantial scholarly attention, demonstrates both the significant creative potential inherent in mashup approaches and the difficult legal questions they raise. Importantly, many copyright scholars have used it to highlight not current legal practice surrounding copyright cases, but the original intent of the law, which aimed to balance the rights of the copyright holder to profit from their work with the rights of users – that is, to build and improve upon existing material and thereby contribute to further innovation. This latter aspect of copyright legislation has been gradually backgrounded over past decades, in parallel with the rise to greater economic prominence of the copyright industries (Lessig, 2008).
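Communities like ccMixter automate part of this provenance checking. A sketch of such a check, assuming a deliberately simplified and hypothetical licence table (real compatibility rules are considerably more involved), might look as follows:

# Simplified, hypothetical licence rules: whether a licence allows remixing,
# and whether a derivative must carry the same licence (share-alike).
LICENCES = {
    "CC BY":               {"remixable": True,  "share_alike": False},
    "CC BY-SA":            {"remixable": True,  "share_alike": True},
    "CC BY-ND":            {"remixable": False, "share_alike": False},
    "All rights reserved": {"remixable": False, "share_alike": False},
}

def may_publish_remix(source_licences, target_licence):
    # Check whether a remix of the given sources may appear under target_licence.
    for lic in source_licences:
        rules = LICENCES[lic]
        if not rules["remixable"]:
            return False              # at least one source forbids derivatives
        if rules["share_alike"] and target_licence != lic:
            return False              # share-alike forces the same licence onward
    return True

print(may_publish_remix(["CC BY", "CC BY-SA"], "CC BY-SA"))   # True
print(may_publish_remix(["CC BY-ND"], "CC BY"))               # False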
Distributing Creative Works
Alongside other fundamental limitations in the existing conventional mechanisms for distributing creative works in physical or electronic form, it has been this perception of a profound and worsening imbalance between the two purposes of copyright legislation that has contributed substantially to the development of alternative distribution networks outside the control of the content industries. Such networks chiefly include the filesharing networks from Napster to Soulseek and the various BitTorrent-based systems, as well as torrent search sites such as Pirate Bay and Dimeadozen. In fact, such networks can themselves be understood from the perspective of produsage: what has been prodused here, through the collaborative efforts of many thousands of participants, are the very means of distribution for creative content themselves (for music and other audiovisual materials, but also for digital content in many other forms). Here, too, the core principles of produsage can be observed: the networks are open to participation by anyone with the necessary client software and are designed to be more efficient at distributing content the more clients are connected and help share content; the torrent search sites are home to a community of sharers that is engaged in a continuous process of peer evaluation, assessing the quality of the shared content and the availability of sharers, valuing active sharers, and ostracising mere leechers (users who download but do not share in turn). Out of this continuing process arises a strong sense of a nonetheless fluid and changeable community comprising leading filesharers, worthy contributors who may not add much new content but ensure that the shared materials continue to be available, and marginalised freeloaders. Additionally, the network structure is constantly changing and never complete, as users connect and disconnect; for the same reasons, it is a virtual, communal network, owned by nobody and superimposed onto the physical infrastructure of the Internet. To describe filesharing in this fashion is not to glorify the practice – but neither is it to vilify it by claiming (as the music and movie industries are wont to do) that all filesharing is ‘piracy’. For better or for worse, what has emerged here is a stable and sustainable distribution network for digital content, built through produsage, and it is the properties that this network and this practice have inherited from their origins in produsage that have made it so difficult for affected industries to successfully combat filesharing. In stark contrast to the first-generation filesharing system Napster, modern filesharing networks are flat and fluid, and there is no central controlling authority that may be eliminated using legal or technical means; even well-known support sites such as The Pirate Bay, subject of a protracted lawsuit in its native Sweden (Masnick, 2009a), ultimately remain ancillary to the networks and the practices of filesharing themselves – Pirate Bay’s potential disappearance may temporarily frustrate filesharers, but replacement sites will quickly reappear in another location.
Whether it disseminates legal or illegal content at any one point, in other words, the overall practice of filesharing as a means of distributing digital content has now established itself alongside the distribution networks of the mainstream copyright industries much in the same way that the practice of citizen journalism or the practice of open source software development has emerged to compete with the conventional industries in their fields. These new practices cannot be undermined by anything but prohibitively massive legal action, since they exist in the highly decentralised, non-hierarchical structure that is most suited to produsage processes, and they cannot be curtailed by hiding more and more of the commercially produced content with which they engage behind intrusive technological protection measures (such as content paywalls for journalistic material or copy protection mechanisms for CDs and DVDs) without also causing substantial annoyance to customers and thus making filesharing an even more convenient alternative source for the content. By contrast, what filesharing – and many other produsage-based sources of content – are vulnerable to is competition on the basis of quality and convenience, at an appropriate pricing level: this is the lesson from Apple’s introduction of iTunes as a moderately priced, convenient source of audiovisual content with relatively few limitations. Of course, ‘filesharing’ is an unfortunately vague term, due not least to the efforts of those who seek to undermine the practice and simply decry all filesharers as ‘pirates’ – and even suggest that “piracy is being used to fund terrorist groups” (Duff & Browne, 2009). By contrast, the reality is that the filesharing networks that exist today – the majority of which are built on the decentralised BitTorrent technology – simply constitute a collaboratively prodused and maintained technology for the safe and speedy transmission of structured data. They are no more predestined to be used for ‘piracy’ (whatever that term may mean) than cars are designed to be used for the transport of stolen goods. Indeed, it is important here also to highlight some of the perfectly legal and legitimate uses of filesharing technology: for example, BitTorrent networks are used to make available the sizeable software installation packages for the freely available open source operating system Linux in its many flavours, and even a number of commercial music labels (such as DGMLive, which we will return to later in this article) utilise BitTorrent as a delivery technology for music recordings once the music has been legitimately purchased by customers in their online stores. It is worth noting in this regard: any heavy-handed legal intervention to shut down filesharing networks altogether, by blocking the TCP/IP transmission ports they use or by introducing draconian ‘three strikes’ laws against Internet users who have been found to be running filesharing software, would also undermine such entirely legal uses, and would break the business model such legitimate services rely on. Even where copyrighted materials – chiefly, music and movies – are shared across the network without permission by their copyright holders, the designation of all such practices as ‘piracy’ should be questioned. In spite of over a decade of highly
publicised lamentations by the music and movie industries that such ‘piracy’ is diminishing their profits, or even ‘killing the business’, there appears to be precious little reliable evidence linking filesharing and a decline in revenue. While the music and movie industries regularly make claims of crisis and of revenue losses in the billions, they have consistently failed to produce independently verifiable evidence to support such claims; by contrast, independent research finds that overall music revenues have continued to grow even in spite of filesharing or external financial crises (Masnick, 2009b). Additionally, quite to the contrary of industry rhetoric, research also suggests that it is exactly those who are most engaged in filesharing who are also the music industry’s most loyal customers (Shields, 2009). At least for these lead users of music filesharing networks, then, the appropriate ‘piracy’ analogy is not with the buccaneers of old, robbing innocent traders of their goods to make a quick profit: instead, it lies rather closer to home. The piracy we encounter here is merely a modern-day equivalent of the pirate radio ships moored just outside British waters in the 1960s, set up to circumvent extant popular music broadcasting restrictions and deliver rock music to the unserved majority audience. At that time, the music industry not only turned a blind eye to such piracy (that is, to broadcasters acting outside applicable legal frameworks), but actively supported it, as the pirate broadcasts substantially boosted its sales potential. It is time for the industry to stop believing its fulsome rhetoric, then, and to come to terms with the possibility that music filesharing today may well serve a similar role of providing users with an opportunity to try new music before committing to the CD purchase or iTunes download. This is not to argue that all music downloaded from filesharing networks will be bought ‘properly’ by its listeners in the end – no more so than any song listened to on the radio or received from a friend on a mixtape was ever purchased by all of its listeners. However, here as much as in these non-digital precedents for music sharing, the music industry’s overly simplistic calculation that every song downloaded from a filesharing network equates to a loss of revenue simply cannot be upheld with any seriousness.
The Ethos of Music Filesharing
In combatting the tired and unsubstantiated stereotype of all filesharers as pirates of the pillaging and plundering variety, then, it is also necessary to distinguish between a number of different groups of music filesharers, and between the types of content they share. The distribution of new music close to or even before the official release date makes up only part of the overall activity in filesharing networks, after all – and motivations for participation here may vary from the very strongly held enthusiasm of fans who absolutely must hear their favourite band’s latest work immediately, but who remain highly likely to purchase the CD or DVD as soon as they can afford to (these are the same fans who are also catered to by studio outtakes and bonus tracks), to the much more casual interest of listeners who wish to explore highly hyped
bands without purchasing their music outright (and whose access to music via filesharing is thus no different to taping a song off the radio – neither constitutes a lost sale to the music publisher). But beyond this lies a very different network of music filesharers: a network of fans and communities who have made it their goal to ensure that their favourite musicians’ unreleased and bootleg recordings remain in circulation and are available in the best possible condition. Away from the generic BitTorrent tracker sites like Pirate Bay and Torrent Reactor, such communities have built their spaces in sites such as Traders’ Den and Dimeadozen, as well as in torrent tracker sites dedicated to specific bands or musical styles. Many such communities have established a surprisingly strict set of rules for participation; such rules commonly include provisions against mere ‘leechers’ who do not contribute to the sharing process, requirements to remove any officially available material from the bootlegs shared in the community, and prohibitions against any commercial exploitation of the shared materials. Many of these communities, in fact, have grown out of pre-Internet trading communities which operated through elaborate ‘tape trees’ – bootleg audiotape trading networks which were structured so as to minimise the inevitable loss of fidelity as tapes were copied from one participant to the next – and many also share these earlier communities’ ethos of beating, at their own game, the commercial ‘grey label’ publishers of unauthorised bootleg recordings, which often exploited enthusiasts by charging massively inflated prices for their tapes and CDs. (Indeed, some recordings shared in these communities are described as ‘liberated boots’ – liberated, that is, not from the artist who performed on them, but from the grey label which attempted to make money from them.) What emerges even from this brief description of such bootleg sharing communities, then, is evidence of a relatively complex ethical framework. Participants have no compunctions about sharing otherwise unreleased bootleg recordings, and clearly do not feel that their doing so undermines the professional prospects of the artist or their label, but take great care not to share any officially, legitimately released music (to the point of removing individual commercially released live tracks from the full concert bootlegs shared in the community); they take pride, by contrast, in undermining the business of dedicated grey labels, which are seen as parasites attempting to feed on the bootleg community. Indeed, in stark contrast to standard music industry rhetoric, too, many such sites see themselves as doing a service to the artists they follow, by showcasing the quality of their live performances, spreading the word about upcoming gigs and new releases, and saving fans from spending money on grey label bootlegs that could better be used on purchasing legitimate releases. Some bootleg sharer community sites even offer an opt-out facility for artists who do not wish to see their music shared this way (and by and large, such wishes appear to be respected). Addressing artists and the music industry directly, for example, the Dimeadozen Web page footer provides useful insight into the mindset
of the community (Dimeadozen, 2009):

If you’re an artist (or a legal representative of an artist or its estate) and you don’t want your ROIOs [recordings of indeterminate origins – a community term for ‘bootleg’] shared on DIME for free among your fans, you may opt out any time by sending email to the site admin. We will then put you in our list of not allowed artists, known as the NAB list. This will halt all sharing of your ROIOs using DIME’s trackers within minutes. BTW, the ROIOs exist, you can’t make them vanish. So, why not let your fans get them for free from one another instead of having to purchase them from commercial bootleggers on auction sites?
On the basis of the evidence available from these sites, it appears necessary to rehabilitate at least a substantial portion of those Internet users engaged in filesharing. Undoubtedly, there are a significant number of filesharers who use BitTorrent networks simply to access the latest software, movies, and music without paying – but whether they constitute the majority of users, and whether their activities have a measurable impact on the fortunes of the industries they are said to affect, still remains to be independently documented. By contrast, at any rate, the users participating in the bootleg sharing communities we have encountered here are no pirates, and certainly no terrorists: instead, they care deeply about the music they share, go to great pains only to work with materials that are not officially available, and are driven in their participation by loyalty and enthusiasm for the artists whose bootlegs they share, rather than by a desire to harm them. This enthusiasm, indeed, is also documented by the material they share. Bootlegs shared with the help of sites such as Dimeadozen range from raw recordings, uploaded only days after a recent concert, to digitised decades-old archival tapes that were only just rediscovered in someone’s attic or basement; in many cases groups of committed users will use such recordings as the source material for a major restoration or remastering project that aims to substantially improve the audio quality of the bootleg. Depending on availability, such restoration projects will combine (in other words, mash up) a number of alternative sources and/or sync audio recordings with related film or video footage; many also add ready-to-print CD or DVD cover graphics which are delivered alongside the completed bootleg. What emerges in the end necessarily depends on the quality of the original sources, but it is not unusual for these restored recordings to match or compare favourably with the official live releases of the artist. To highlight just one recent example: on 10 December 2007, the remaining members of Led Zeppelin, joined by the late John Bonham’s son Jason, reunited to perform a much-anticipated one-off tribute concert for Ahmet Ertegün, co-founder of Atlantic Records, who had died one year earlier. In spite of the enormous fan interest in the concert, to date, no official CD or DVD recording has been released.
What does exist, however, are painstakingly mixed audio and video bootlegs of the event which remain freely available through bootleg filesharing networks, compiled from individual recordings shared in the weeks following the concert through Dimeadozen and other sites. There can be little doubt that even in spite of the availability of this bootleg, an official CD or DVD release of the concert would still be a massive sales success, even with those filesharers who have already downloaded the bootleg; in accordance with the bootleg community’s own rules, the bootleg would also be removed from the network as the official release became available. What the filesharers engaged in this and similar compilation projects are doing is far from an act of piracy as the term is commonly understood – rather, they perform a service to the music and the artists they love (even if some of these artists themselves may disagree), and to their fellow fans. They do so not to profit monetarily from their work, but to show their commitment as fans. In truth, the core of our argument here should be less about the immediate industrial and commercial aspects of bootlegs such as these, and more about the broader cultural dimension: whether we are concerned with the Led Zeppelin reunion or another concert of note, from a longer-term perspective these performances deserve to be preserved for posterity, and to date it has been bootleg filesharers, not the music industry, who have ensured that they are. Global culture would be poorer without the many illegitimate bootlegs that have documented the development of blues, jazz, and rock, and it is poorer for the many other recordings that have been lost through poor storage and lack of dissemination – bootleg filesharing communities ensure that the remaining recordings, and those of the new artists and styles yet to emerge, need never suffer the same fate.
Produsage as Curation
In this collective effort to preserve culture, then, we can once again see produsage-based mashup processes at work: leaving aside the dubious provenance of the source materials, the restoration, recombination, remixing and remastering of a clutch of audio recordings by a community of music enthusiasts clearly follows the iterative and palimpsestic processes we have outlined before, and its outcomes remain unfinished artefacts which may change yet further if additional alternative source recordings become available and are added into the mix. Additionally, at a higher level, the participants involved in both restoring and sharing these recordings engage in a form of content curation which is similarly orchestrated through community-based produsage: not unlike museum and archive curators who are charged with preserving the cultural record for their field of interest, the participants in this form of bootleg filesharing collaborate on compiling a comprehensive archive of bootleg recordings from throughout an artist’s career, on ensuring that these materials are available in the best fidelity achievable, and on safeguarding their continued circulation within the filesharing network.
In the absence of any other cultural institution which was able and available to do so, in other words, these bootleg filesharers have now become the curators of ‘their’ artist’s live œuvre. Through their work, musicians, fans, and researchers who seek to track the artistic development of a particular performer through their live performances stand a better chance of doing so by accessing these bootlegs through filesharing networks than by going to any other source, and this, not the dubious and highly disputable sale price the industry may wish to attach to such recordings for the purpose of tallying its supposed losses from ‘piracy’, constitutes the real value of these fan-curated bootleg collections, and of the enthusiast-driven produsage and mashup efforts which have helped to create them. What emerges here, in fact, is a division of labour which in reality has long been in place already, but which the music industry continues to pretend does not exist: limited as it is by its need to pursue the next breakthrough artist and to publish the latest releases, the industry’s maintenance of back catalogues and live releases has long been neglected except where a substantial continuing revenue stream could be guaranteed. Many studio as well as live recordings of substantial cultural significance have fallen out of circulation as a result, and are rescued from this limbo – if at all – only through the efforts of niche labels that engage in music publishing out of an enthusiasm for the music more so than in hopes of making significant profit, or through the efforts of fans and bootleg sharers who circulate copies of these legacy recordings through other means (and today not least through filesharing networks). The mainstream industry continues to persecute these alternative means of sharing the music, and to frustrate the efforts of many niche labels to re-release out-of-print recordings, but very clearly shows no interest in keeping these recordings in circulation itself – ultimately, this deprives audiences around the world of a major portion of their recorded cultural heritage. As we have seen, filesharers have already pushed ahead in wresting control over this heritage away from the music industry, if in dubious legal circumstances – perhaps it is now time to push for a greater legal recognition of the rights of audiences to their cultural heritage, even where this happens to the detriment of the music industry. In lamenting the power of the journalism industry over what is reported in the news, the journalism scholar Herbert Gans once suggested that “the news may be too important to leave to the journalists alone” (1980: 322) – and this realisation, shared by many, was a significant contributor to the rise of citizen journalism as a produsage-based alternative to the mainstream media. By analogy, perhaps we should now assert that in light of the important role played by music in our cultural lives and heritage, music, too, is too important to be left to the music industry alone.
Towards a Pro-Am Model of Music Curation and Distribution
Although the conflict between professionals and amateurs has by no means been
solved yet in journalism, it is nonetheless possible to identify a number of models through which professional news producers and citizen journalist news produsers are beginning to find common ground and arrange themselves in cooperative Pro-Am frameworks (Leadbeater & Miller, 2004; Bruns, 2010). In such models, both sides contribute what they do best – professional journalists, for example, tend to be able to use their institutional backing to gain better access to sources and engage in longer-term investigative research, while citizen journalists provide a wider range of perspectives and a deeper communal memory that can be brought to bear to analyse, evaluate, and comment on current affairs. If the current deep misgivings between the music industry and music fans, fuelled by heated rhetoric on both sides, can be overcome, then here, too, more fruitful and mutually beneficial arrangements are possible; in some niche spaces within the industry, they are already becoming visible. It appears obvious that the mainstream music industry will for now remain better able to support musical talent, fund new studio recordings and live tours, and promote them through advertising; at the same time, it is also evident that the community of music enthusiasts is better able to keep alive the excitement around established artists and to curate the back catalogue and live archive. Additionally, the fan community has also been able to develop filesharing into a viable alternative distribution network for digital content – a network which remains open for both enthusiast and industry use. Any Pro-Am approach explored in the music industry would need to take into account these fundamental observations and build its business models upon them. Some early steps towards this are already evident; some have been in place for some time now: as early as 1991, for example, Frank Zappa officially released a hitherto illegitimately circulated, fan-curated collection of live recordings in his “Beat the Boots” CD series; more recently, and though deeply critical of the practice of bootlegging itself as a disruption of the live experience, Robert Fripp’s DGMLive label has begun not only to utilise existing fan bootlegs of King Crimson shows for its live releases, but also to use BitTorrent networks to deliver these recordings as (paid) downloads to its customers. Rather than condemning bootleg filesharer communities outright and seeking to criminalise their behaviour through the sponsoring of further legislation, then, legitimate artists and their labels could well find it more fruitful to track the collaborative efforts of their fans as they curate the communal archive of live recordings, and to pick out the best and most-shared of these recordings for official release on CD. In turn, this requires the music industry to at least partially release its hold on recordings which it has no intention to utilise in any substantial way. Again, a number of bands and artists have already declared themselves to be ‘taper-friendly’, even to the point of reserving special areas in the live venues where they play for audience members wishing to make an audio or even video recording of the concert they have come to see; the dissemination of such recordings through filesharing networks is accepted at least tacitly in such arrangements, and participating artists
tend to point to the valuable word-of-mouth advertising benefits they derive from allowing their shows to be bootlegged. On the available evidence, the creation of a comprehensive fan-curated live archive for each tour that follows from this practice does not appear to undermine the sales potential of any subsequent official tour CD or DVD – indeed, the ability for fans to preview through bootlegs the kind of performance that may be documented on the official release may boost rather than reduce sales. Beyond this, there is a real potential that fan curation could also be harnessed for the maintenance of artists’ back catalogues – and if niche labels which engage in similar practices are included here, to some extent it already is. Rather than attempting to maintain the recordings archive of long-established artists themselves, often with results that are as frustrating for the label as they are for the community of listeners, major labels could work with the leaders of the fan community to make such legacy recordings available at reduced prices through filesharing networks and could empower reliable community members to engage in remastering, remixing, even mashup activities as they see fit. At the very least, this would keep the back catalogue in circulation and fans satisfied; it may also lead to renewed interest in such legacy recordings if these activities generate notable new uses for this existing material. Such developments may appear unlikely in the present circumstances, at a time when major labels are struggling with substantial financial problems – which, it should be noted, have much more to do with inept management and endemic corruption in an industry that is “founded on exploitation, oiled by deceit, riven with theft and fuelled by greed”, as Robert Fripp has put it (1997), than with the impact of filesharing. However, the likely eventual collapse of the mainstream music industry model as we know it also holds within it a substantial opportunity for real change that may deliver a new and more sustainable arrangement between fans, artists, and labels than has existed for a very long time. It appears to be high time to mash up the music industry.
Works cited
Australasian CRC for Interaction Design (ACID). (2009) Pool User Research. http://pool.acid.net.au/wp-content/uploads/2009/08/pool-interim-report-may-2009.pdf
Bruns, Axel. (2005) Gatewatching: Collaborative Online News Production. New York: Peter Lang.
Bruns, Axel. (2008) Blogs, Wikipedia, Second Life, and Beyond: From Production to Produsage. New York: Peter Lang.
Bruns, Axel. (2010) “News Produsage in a Pro-Am Mediasphere.” In News Online: Transformations and Continuities, eds. Graham Meikle & Guy Redden. London: Palgrave.
Burgess, Jean, and Joshua Green. (2009) YouTube: Online Video and Participatory Culture. London: Polity.
Dimeadozen. (2009) http://www.dimeadozen.org/
Duff, Eamonn, and Rachel Browne. (2009) “Movie Pirates Funding Terrorists.” Sydney Morning Herald 28 June 2009. http://www.smh.com.au/national/movie-pirates-funding-terrorists-20090627-d0gm.html
Fripp, Robert. (1997) “DGM’s Founding Aims and Mission Statement.” DGMLive. http://www.dgmlive.com/about.htm
Gans, Herbert J. (1980) Deciding What’s News: A Study of CBS Evening News, NBC Nightly News, Newsweek, and Time. New York: Vintage.
Leadbeater, Charles, and Paul Miller. (2004) The Pro-Am Revolution: How Enthusiasts Are Changing Our Economy and Society. London: Demos. http://www.demos.co.uk/publications/proameconomy/
Lessig, Lawrence. (2004) “The Black and White about Grey Tuesday.” Lessig 2.0, 24 Feb. 2004. http://lessig.org/blog/2004/02/the_black_and_white_about_grey.html
Lessig, Lawrence. (2008) Remix: Making Art and Commerce Thrive in the Hybrid Economy. London: Bloomsbury.
Masnick, Mike. (2009a) “Pirate Bay Loses a Lawsuit; Entertainment Industry Loses an Opportunity.” Techdirt 17 Apr. 2009. http://www.techdirt.com/articles/20090417/0129274535.shtml
Masnick, Mike. (2009b) “Mainstream Press Waking Up to the News That Musicians Are Making More Money.” Techdirt 16 Nov. 2009. http://techdirt.com/articles/20091114/1835036932.shtml
Shields, Rachel. (2009) “Illegal Downloaders ‘Spend the Most on Music’, Says Poll.” The Independent 1 Nov. 2009. http://www.independent.co.uk/news/uk/crime/illegal-downloaders-spend-the-most-on-music-says-poll-1812776.html
Shirky, Clay. (1999) “RIP the Consumer, 1900-1999.” Clay Shirky’s Writings about the Internet: Economics & Culture, Media & Community, Open Source. 24 Feb. 2007.
Stone, Victor. (2009) ccMixter: A Memoir, or How I Learned to Stop Worrying about the RIAA and Love the Unexpected Collaborations of Distributed Creativity During the First Four Years of Running ccMixter. http://fourstones.net/ccMixter_A_Memoir.pdf
The Virtual Art Garden: A Case Study on User-Centred Design for Improving Interaction in Distant Learning Communities of Art Students
This paper describes the methodology used for designing a virtual learning tool that aims at motivating distant learning communities of art, design or related fields of study to share visual learning material and work collaboratively. The Virtual Art Garden is a concept for a Graphical User Interface (GUI) that supports group participation and the sharing of visual information (or artworks) as the basic outcome from students of art and design fields. Groups, not individuals, compose the Virtual Art Garden, where students take part just as they would in a lecture. All individuals from each group or community are administrators, having rights to add, edit, or even remove content. The lecturer can be part of the group just as students can, to share content and participate through it, as well as to follow the learning process and give expert feedback. The Virtual Art Garden concept has been designed to be accessible with a computer or mobile browser (even though the UI has been designed for implementation in pervasive media, the current demo features only desktop browsers). Each Art Garden provides users with the freedom to arrange the visual content and so create a hierarchical organisation of the shared media depending on their learning methods and needs. Users can also link artworks with each other to give emphasis, to organise, or to study styles, similarities, etc., and this can be done manually by selecting them or automatically by tagging them. This freedom of structuring an image catalogue makes it possible for the learning group to adjust the categorisation of the visual information depending on their needs. From user studies to workshop-like interviews, the presence of the user in the design process of this project has been fundamental. The project of the Virtual Art Garden is grounded in the aspect of collaboration as an aim in the user experience, but the design itself is also a collaborative process: experiences and knowledge from people in different fields, mainly those in Art Education who have been involved as a user group, have been influencing the design process through all its phases. The project of the Virtual Art Garden was born from a clear need to open up more learning opportunities for students of Art and Design, requiring diverse alternatives to reach the content and the community. The project is grounded in needs analysis and user observations from a community of virtual learning in the field of Art Education at the University of Art and Design Helsinki. Through the project we identified a set of problems that affect the learning process of these groups, and sought to solve them through the design of the GUI. The main objective
of the design of this interface is to promote collaborative practices and participatory learning through visually-based interaction and through the building of shared identities.
Introduction
Due to the fairly simple ways to share and access text-based information through common telecommunication means, a majority of the distance learning initiatives of the last decade have shown a clear focus on theoretical fields of study. Millions of students worldwide have benefited from the advantages of virtual learning programmes for both distance learning and presence-based learning, but so far, many subjects that require the development of practical skills rather than only theoretical knowledge have been left behind in this expansion of education into the pervasive world. A large variety of Internet-based services are published and available to the online community, allowing information sharing through diverse media and the creation of communities of practice, and offering better opportunities to engage different sectors of the educational community in virtual projects. This diversity in services and the popularisation of information access through the Internet offer options for experimenting with new methods for learning, helping to awaken interest in adding more subjects to the e-learning curricula of recognised institutions. Thus, the integration of Art and Design fields into virtual learning strategies is inevitable. The user group studied for this project already had an established virtual network, making it easier to discover what their current tools lacked and what they needed to fulfil their learning objectives. The design process began with different approaches, weighing the advantages and weaknesses of mobile devices, Web applications, and virtual tools against the concrete needs we had to address.
Tools in Virtual Learning Communities
Tools, in general terms, have the functionality of making tasks easier to perform. Nowadays, performing most of our social and professional tasks in urban societies is very much related to information and communication: today’s “global informational society” (Castells 2000) has the need to constantly produce knowledge and to communicate it in order to apply it in practical environments. With the objective of fostering knowledge building through an adequate visualisation of objects of information, the interface of the Virtual Art Garden works as a tool that can facilitate tasks related to this salient need. In order for a tool to be efficient, in general terms, it ought to provide visible benefits to the users through a simple and self-explicit mechanism (Shneiderman 1998; Hix & Hartson 1993). This applies to tools of all kinds of technologies, both physical (e.g. a pencil) and digital (e.g. a word processor). When a tool is efficient and becomes attached to a community, it influences social
transformations and becomes a symbol that represents certain characteristics of the community, the users and their whole social, historical, and economic environment. Groups of people create, accept, adjust, and (sometimes) depend on innovations; efficient tools are innovative artefacts that determine the productivity and growth of a community: coexistence, common understanding, and similar aims. The constant development of tools and people’s adopting and adapting those tools to their daily lives make society dynamic, always transforming through the new ways in which tasks are performed. People evolve with the impact of the tools they commonly use, and so, they also modify those tools as a process of understanding their benefits (Krippendorff 2006). Tools, therefore, become a representation of the community using them; they become real objects of identity. In this sense, the Virtual Art Garden is conceptualised as a tool that approaches the user group through identity. The evolution of identities within a virtual community is a collective process of participatory behaviour. Online communities shape themselves through internal activities, and the way in which members interact to perform those activities is very much influenced by the virtual environment: by its symbolic and its functional characteristics. The interaction design of the Virtual Art Garden thus intends to determine the feasibility of participatory behaviour to influence the creation of visual-based knowledge. Knowledge building, a term which refers to the way societies learn in a world of information reachable through pervasive media (Scardamalia 2003), is a result of a selective gathering of data and its application in real situations. The process of creating knowledge is nowadays closely related to digital media: the way communities of practice emerge with the aim of sharing, transforming, and producing information evolves into a digital data landscape that transforms the world itself. Knowledge is created from social needs, and therefore it forms one basis of social development: this is one of the significant reasons why motivating the development and improvement of tools for enhancing information sharing practices in the context of learning is fundamental. The possibilities of virtual environments construe modern knowledge as an undeniably community-based experience (Capurro 1986). Virtual communities determine the hermeneutics of society as a collaborative process of selecting information and building meaning out of it, in relation to the diversity that comprises those communities. Innovative tools are core to the transformation of cultures. Today, online tools that provide means of communication at a global scale not only transform cultures existing independently from each other; they foster a nexus between communities almost anywhere and lead to a conceptualisation of a global society, based on access to information through digital technologies. Our current informational society is transforming all the time around forthcoming technologies. The design of technological tools very much shapes this transformation, in the way that the tools provide elements which guide the way people interact with the available information.
Diversity between the members of communities that interact online – and between the communities themselves – is a rich source of positive transformations in global economies. These distant practices help participants, who are sometimes spread across different corners of the world, to understand the effects of evolving changes within a variety of environments.
Semiology of The Virtual Art Garden
Communication processes happen through symbolic systems that cannot be separated from the cultural context in which they take place. Visual communication is always coded, and these codes allow common understanding in society (Kress & van Leeuwen 1998). Understanding means de-codifying and re-codifying existing information into personal mental representations. Through a process of association, new concepts integrate with assimilated ideas to construct new interpretations. This forms a set of both abstract and concrete mental representations that can be transformed into language – spoken, written, or graphical – through the use of universal conventions (signs). This is a fundamental consideration in the design of interactive systems, where communication depends on the user’s actions over a digital interface, which has to be understood, learnt, and identified by the user. A visual interface aiming at promoting the sharing of learning content ideally works as an object of interaction that gathers and presents information through an easily accessible environment. An interface is something that acts “in between”: between people, between individuals and information; and this aspect of being in-between is at the same time path and barrier. Through its main characteristics of interactivity, dynamics, and autonomy, an interface in the context of computers and virtual environments tends to amplify the user’s mind (Krippendorff 2006). When designed for learning communities, graphical user interfaces must act as a simplified symbolic system of information-communication that enhances participation of the community and opens possibilities of achieving personal and common cognitive results. In an interface design that pursues integration with an educational model, it is important to consider how hierarchy and categorisation of information under a codified structure will be presented. Categorisation of informational objects in a learning environment is core to the understanding of themes and to the stepwise development of skills. Nevertheless, categorisation of information and learning processes tends to differ as learning methods differ from purely theoretical fields of study to artistic or more practical subjects. This concern has been tackled in the design of the Virtual Art Garden by experimenting with giving the users open freedom to influence it. This was also done as a matter of motivation in the process of building knowledge, and to create a meaningful virtual environment where information is constantly evolving, dynamically organised, easily restructured, and productive.
The Virtual Art Garden as User-Centred Design
Part of the purpose of designing the Virtual Art Garden is to experiment with how learning practices in virtual communities can be improved by user-centred interfaces that consider the importance of social identities and thus encourage the cognitive development of individuals as part of a social system. As Dan Saffer puts it, “users know best”, and this is a statement that determines the concept, the graphical interface, and the interaction design of the Virtual Art Garden. Even though one simple tool can usually fulfil a diversity of expectations across several user groups, a design that solves a set of problems is best framed, at least in the beginning, around a specific focus group. By studying the community that will be using the tool, it is expected that a design solution will be developed that can be easily accepted by the users and incorporated into their daily practices. Therefore, a set of goals that users need to achieve can be determined to lead the design of the tool (Saffer 2007). Furthermore, this allows the possibility of finding common interests or behaviours that arise when experimenting with the interfaces. “User-centered design consists in more than observing and interviewing the users” (Hix & Hartson 1993, 30); getting to know the users in a deeper way leads to creative and innovative design solutions. User studies certainly require a complex range of human resources in order to gain a clear and precise set of information from the user group. Understanding user tasks, capabilities, and preferences (Preece 2000) is key to developing the right tools that an online community demands.
Methodology
The evolution of this project encompasses several stages: from the determination of the user group and user observation and analysis, to the sketching of the different approaches to the concept, to the resolution of the final concept and its
progress towards a hi-fi prototype. The last stages of the design process consist of the interaction design for user-user and system-user experiences and the visualisation of the tool from a functional perspective (by considering tasks and features). The methodology in this process was based on designing virtual environments as experimental rather than expectant. This means that the user must have a certain freedom of interaction, the possibility to take different routes to achieve one goal. The interface should, therefore, provide a set of objects of information that members of the community can freely decide how to use. Needs and motivations are changing constantly, and designs have to be adaptable to the practical experiences of human-computer interaction. A set of metaphors was sketched and analysed during the concept design as a process of brainstorming. These metaphors were the result of the user studies. The most decisive findings from the brainstorming phase were then considered in a final metaphorical approach to the design solution; hence the title of this work, The Art Garden, is strictly related to the use of those metaphors.
Metaphors
An approach to digital technologies through an effective use of metaphors as models implies natural interaction between individuals and machines: “The use of visual metaphors (...) informs the design process as much as it enables users’ understanding” (Krippendorff 2006, 99). The role of rhetoric in designing digital media solutions is no longer that of persuading as in Aristotelian times, nor that of illustrating in order to magnify the importance or the beauty of an idea; in digital media design, rhetoric is about the use of appropriate tangible phenomena as grounds (i.e. a model) to create understanding between users and technologies. By using situations, words, and pictures to compose visual metaphors that are natural and known to most users, expectations about an interface are supported, and cognitive directness is increased (Hix & Hartson 1993). Thinking of metaphors in the field of mediated communication and interaction design thus helps to create a natural approach to human needs, by generating a spontaneous association between the new artefact that is being designed and common places that are immediately identified by the users as human beings living in a certain context. Metaphors are very much related to artefacts, in the sense that natural phenomena have inspired technologies and served as a model in the designing of tools. Richard Coyne sees this inspiration as a two-fold effect: “technologies are described biologically and biology is sometimes understood in terms of technologies” (Coyne 1995, 280). It is human nature to use signs as a means of understanding phenomena, to explain something through examples and comparisons. The use of rhetoric in this sense, in the context of interface design, relates to the purpose of finding natural and ergonomic associations between the medium and the human action.
The Virtual Art Garden Interface
The selected metaphor refers to the title of this paper, the Art Garden, and is a direct result of the user studies and of workshops engaging the users. The initial association of the project with a garden comes from the idea of sharing a place that starts as an empty field and can be grown in beauty and harmony with combined effort and commitment. This metaphor is grounded in the idea that each artwork is a very particular piece of creation, just like a plant: even if it belongs to a specific category, it will never have an exact replica. Plants need to be taken care of so they can flourish; a plant little by little changes its form, and it contributes to the flourishing and transformation of the garden as a whole. As in a field owned by several experimental gardeners, the gardeners can always grow more plants, modify the environment, or even cut plants away, but with the understanding that they share a common space which must become a complex experience beyond that of each single element. In the garden interface, paths are a connection between artworks, and they follow patterns of identity and personal interests. Different paths can be drawn between artworks to express, for example, emotional or technical associations. A garden, as a natural environment immersed in the urban space, gives us a certain feeling of relief, as a small escape from rush and stress. The Virtual Art Garden should work in a similar way, as a place for enjoyment after fulfilling the duties of studies. From a basis of intrinsic motivation, a place like this can be used to stimulate learning activities in a social, friendly way. The main characteristic of the digital garden is that it enables the collaborative building of identities through a generative visual environment. As in the architectural design of a botanic park, paths become essential. If one artwork is left alone without any path leading to it or away from it, the artwork will be less visited, and therefore, less commented upon. On the contrary, a work that has many ways in and out will be more visited and will thus receive constant feedback. The Art Garden should act as a place where users participate freely and interact around a joint creativity. Following a metaphorical approach, the aspect of identity and collaboration is sought through a graphical user interface taking advantage of web-based technologies and of the association with a common physical space.
Concept Aesthetics
A visual element (as almost anything related to human perception) is perceived differently from one person to another, depending on individual mental associations and on the cultural context in which the individual is immersed. Aesthetic perception is an individual experience but always seeks universal agreement (Kant 1987). Aesthetic experiences, which ever since Plato have been discussed in relation to beauty and perception, refer to the emotional reaction that an individual
has in response to an object or a representation of an object. It is an interactive experience between the individual and the object, or more precisely, the “interdependence among the elements of an object” (Moynihan & Mehrabian 1981, 323). From the motivational perspective, the aspect of the aesthetic plays an important role, as it is strictly related to human emotional activity, something that is subjective but at the same time aims at being shared. In the way the Virtual Art Garden works, the communal building of a visual interface represents an aspect of motivation that is propagated through aesthetics. To avoid design clichés of what represents beauty, the graphics are clean enough to let users create their own aesthetics or experience their own idea of beautiful, ugly, serious, funny, etc. To motivate the community, the interface features a common visualisation created from individual images as a result of collaborative work. Aesthetic experiences are individual but are determined socially; in this sense, participants of the Virtual Art Garden are expected to react emotionally to the visual interface that is generated within a context. Through these experiences, a visual interface influences the activity of the group members, and this is understood in terms of social learning as a factor of motivation. The way the Virtual Art Garden approaches aesthetics is through the interaction
of the users with the interface, or more accurately, through the users themselves, via the interface. This is comparable to the vision of aesthetics proposed by Krippendorff, which focuses on the aesthetic experience as an interaction of the user with the artefact that is external to the expressive purposes of the designer (Krippendorff 2006).
Expected Contributions through the Virtual Art Garden
The concept of the interface is based on two main ideas, identity and motivation, which ground the hypothesis that virtual, community-based learning practices can be improved. The elements of identity are mostly the artworks appearing in the interface as a map. The artworks alone are the objects of information; they carry feedback and other data. The placement of these artworks is also an object of information in itself, in the sense that it acts as a pattern of recognition within the community. The placement in the map is thus visual information about the community, its identity and activity. Users add images and link them with each other, intentionally or unintentionally creating a visual object of their own. This is the main aspect within identity issues: the way the interface grows and evolves is through a process of participants’ recognition in which the users become represented as a community. Motivation is the second basic purpose of the project. The interface itself should work as a motivational element by influencing the participants to get involved in the evolution of the visual space. Other elements that are directly related to the artworks, such as feedback comments, work as motivational factors as well. Paths opened from artwork to artwork are also a means of motivation.
Conclusions
Designing technologies, technological tools and applications implies transforming thinking processes and behaviour, which in turn demands new thinking and understanding of those technologies. Designing successful tools for digital media can be approached as an iterative design process that makes use of exploratory behaviour. An iterative method of practical experimentation allows, and even expects, continuous reshaping of a tool. The design of virtual tools can take advantage of experimenting with ideas that can be reshaped by the users or with direct users’ feedback. The presence of the user in the design process is now almost a requisite. Following and reinterpreting what Eskelinen and Koskimaa describe as a functional theory of media, conceptual designers do not need to consider how a medium works or what its limitations are, but rather how it is practically used and how it is integrated with people’s emotions, body, and daily activities: something that is done by the individuals themselves, through real experiences with the technology
(Eskelinen 2002). Collaborative processes are more than ever taking advantage of cultural and knowledge diversity and are integrating developers and users of tools in joint development where “the aim is to progress in a collaborative way towards a global sustainability” (Himanen 2004, 19). The project of the Virtual Art Garden is grounded in the aspect of collaboration as an aim in the user experience, but the design itself is also a collaborative process: experiences and knowledge from people in different fields, mainly those in Art Education who have been involved as a user group, have been influencing the design process through all its phases. The result of the project is a hi-fi prototype for a desktop Web browser developed together with Alfredo Rodríguez Montemayor. A following stage could include the design and prototype of a mobile version and the proposal for a beta implementation. A diversity of knowledge and experiences has been gathered in this project from its beginning until its final shaping as a prototype, in a collaborative implementation taking place through virtual means between two people located more than ten thousand kilometres apart: a designer in Helsinki and a programmer in Mexicali. This aspect can serve as an example demonstrating the possibilities of collaborative processes achievable through virtual environments, breaking the boundaries of distance and merging cultural interests.
Works Cited
Capurro, R. 1986. “La Hermenéutica y el Fenómeno de la Información”. In International Conference on Phenomenology and Technology (updated version 2002). New York: Polytechnic University. http://www.capurro.de/herminf.html. Free translation by the author. (Last reviewed March 2007.)
Castells, M. 2000. The Rise of the Network Society. Oxford: Blackwell.
Coyne, R. 1995. Designing Information Technology in the Postmodern Age: From Method to Metaphor. Cambridge: MIT Press.
Eskelinen, M. & Koskimaa, R. (eds.) 2002. “Introduction: Towards a Functional Theory of Media”. In CyberText: Yearbook 2001. Jyväskylä: RCCC University of Jyväskylä, 7-12.
Himanen, P. 2004. Challenges of the Global Information Society. Helsinki: Committee for the Future, Parliament of Finland.
Hix, D. & Hartson, H. R. 1993. Developing User Interfaces: Ensuring Usability through Product and Process. New Jersey: Prentice Hall.
Kant, I. 1987. Crítica del Juicio. Oviedo: Losada.
Kress, G. & van Leeuwen, T. 1998. Reading Images: The Grammar of Visual Design. London: Routledge.
Krippendorff, K. 2006. The Semantic Turn: A New Foundation for Design. Boca Raton: Taylor & Francis.
Moynihan, C. & Mehrabian, A. 1981. “The Psychological Aesthetics of Narrative Forms”. In H. I. Day (ed.): Advances in Intrinsic Motivation and Aesthetics. New York: Plenum, 323–340.
Preece, J. 2000. Online Communities: Designing Usability, Supporting Sociability. London: Wiley.
Saffer, D. 2007. Designing for Interaction. AIGA.
Scardamalia, M. & Bereiter, C. 2003. “Knowledge Building”. In Encyclopedia of Education (2nd ed). Cambridge: Macmillan Reference.
Shneiderman, B. 1998. Designing the User Interface: Strategies for Effective Human-Computer Interaction. London: Addison-Wesley.
“You met me at a very strange time in my life.” Fight Club and the Moving Image on the Verge of ‘Going Digital’
“You are not your job.” Tyler Durden (Brad Pitt) looks around, angrily but restrained, from side to side. “You’re not how much money you have in the bank, not the car you drive… you’re not the contents of your wallet.” His voice is steady, his jaws strained. The camera slowly moves towards Tyler, close up until his face fills the movie screen. “You’re not your fucking khakis. You are the all-singing, all-dancing crap of the world.” As he utters these words, looking straight into the camera, at us, the picture on screen starts to vibrate, more violently by the second, until the perforations on both sides of the film become visible on screen – creating “the illusion of the celluloid itself jumping off track as it move[s] through the projector”1 (cf. fig. 1). Then Tyler turns away and leaves, as the movie’s narrative cuts to another scene. Was this the end of the reel (real) world? Vacillating between black humor, hilarious irony and pungent cynicism, David Fincher’s Fight Club (USA 1999), based on Chuck Palahniuk’s novel, violently depicts the existential struggle of a nameless protagonist and first-person (off-voice) narrator in late capitalist American consumer society and his way to literally being ‘punched to his senses’. After its release, Fight Club received much scholarly attention because of its raw and offensive portrayal of brutal violence, its ambiguous rendering of a ‘crisis’ of masculinity and its homoerotic elements.2 Some academics denounced it as pedagogically irresponsible, “symptomatic of a wider symbolic and institutional culture of cynicism and senseless violence that exerts a powerful pedagogical influence on the imagination,”3 while critics even went so far as to scorn it as “the dumbest of the entries in Hollywood’s anti-consumerist new wave.”4 Nearly a decade later, Fight Club has once again become prominent in academic discourse – yet this time in terms of aesthetic appreciation: the digital effects in Fincher’s movie have now become the center of attention in contemporary discussions of the hybrid moving image and the influence of digital effects on narrative.5 The opening sequence of Fight Club, a “story-driven virtual camera shot”6 through a rhizome of (computer-generated) neurons and firing synapses inside the protagonist’s brain, and its seamless merging with live-action images epitomizes, according to Sebastian Richter, a paradigm shift in the reorganization of image production and marks the transition to a newly organized (digital) film aesthetics.7 Shilo T. McClean, in her investigation of the interplay between digital visual effects and storytelling, furthermore reinforces this notion of a ‘paradigmatic transformation’ at play in Fight Club in referring to a change of ‘seeing conventions’: while at the film’s release Fight Club’s opening scene constituted an enigmatic ‘rupture’ in conventional visual storytelling, “this once-innovative technique is used now commonly in standard television dramas.”8
Fig. 1: Fight Club (USA 1999).
These observations point to new visual conceptions and configurations of the ‘cinematic language’ that are surfacing, as the cinematic images are – in the shift from analog to digital filmmaking – all the more re-structured from the ‘inside out’ by the computer. Fight Club’s Jittercam-scene9 (see fig. 1), described at the beginning, clearly visualizes that something crucially ‘disturbing’ is happening with cinema and its image at the dawn of the new millennium: the film itself unraveling as the (indexical) photographic image basically jumps off the film reel rattling through the mechanical projector “was our way of cuing the audience that reality was getting ready to run off the rails.”10 Given the overall media-reflexivity of David Fincher’s movie, this scene can well be understood to transcend its narrative function. Reality would then not only be ‘running off the rails’ in the mind of Fight Club’s nameless protagonist, but literally straight off the film reel: as the cinematic images are detached from their indexical nature, altering film’s ontological status – undermining its reference to ‘reality’11 – the moving image on the verge of ‘going digital’ challenges the very definition of ‘film,’12 while simultaneously exposing and reinforcing cinema, the predominant medium of the 20th century, as a hybrid ‘simulation machine’ long before the computer had appropriated the terms.13
Film and computer thus seem to have been predestined from the start to engage in an intermedial relationship that revolves around a ‘media(r)evolutionary’ tension, putting cinema – for the third time in its media history (after film and video) – into a state of crisis, presumably even re-configuring and marking digital film, as Gundolf S. Freyermuth suggests, as a new ‘categorical difference.’14 It thus comes as no surprise that the symbolically charged year of 1999 – the year of Fight Club’s release – proved to be the year of cinema’s proclaimed end against the backdrop of the digital ‘revolution’: “At the supposed turn of the millennium, the one-hundred-plus reign of celluloid was over; film was dead; digital was It.”15 The changeover from analog to digital projection – ‘the end of the reel’ – had triggered special public attention: it had turned out to be an effective homonymic wordplay that captured the cultural anxieties of the imminent arrival of a new millennium whose predicted future saw the digital code of the computer as the basis of its media-cultural logic.16 Fight Club’s Jittercam-scene, I suggest, captures the changeover: it figures as a metaphor for the potential transformational effects of computerization and the phenomenon of ‘digitization’ in contemporary media culture. The end of the reel/real world thus presents the starting point for the following reading of Fight Club, which, as I argue, tests, transgresses and re-negotiates the boundaries of the cinema medium and the moving image on the verge of ‘going digital’, while its narrative simultaneously negotiates the shifting notions of time, space, memory and subjectivity against the backdrop of its computer-generated imagery. “In an astonishingly short period of time,” Peter Lunenfeld states in The Digital Dialectic, the computer, “a machine that was designed to crunch numbers has now come to crunch everything from printing to music to photography to cinema.”17 Cinema’s ambivalent ‘liaison’ with the computer is expressed in its dual appropriation of the ‘new medium’: while cinema has been screening the computer as a mythical site of projection for the most horrific visions of potential cultural effects of computerization – from its anticipation in 2001 – A Space Odyssey (USA 1968) to its culmination in The Matrix (USA 1999)18 – behind the screen, digitization has initiated an ongoing (emphatic) exploration and (economic) exploitation of the creative and dynamic potential of the computer as a tool for filmmaking. This intermedial ambivalence entered the field of vision as the narratives that featured computer technology as pro- and antagonists became themselves “partly a showcase of cinematic technologies,”19 whereby the digitally manipulated images expose and exploit their ‘dubitative’ nature20 and their ability to be truly awe-some: though not realistic, they are frightfully ‘real’ in appearance, visually enhancing the narratives in which the computer ultimately poses a threat to human perception and ontology as the borders between the real and the virtual are gradually blurred. Approached in this intermedial trajectory – for which the Jittercam-scene offers an incentive – Fight Club marks a paradigmatic shift in the encounter of film and computer at ‘a very strange time’, where the end of the real and the end of the reel intersect to expose the changeover from analog to digital as a concern that has become, as Chris Chesher argues, “philosophical before it is technical.”21 In largely removing the computer and placing it behind the screen, Fight Club has detached the medium from its technology to highlight the transformational effects of its culturally incorporated mediality, of ‘that which precedes us or comes in-between,’22 that which frames and shapes what and how we perceive and gain knowledge of ourselves and the world around us. At the end of the reel/real world, Fight Club’s changeover presents a narrative rupture in which the tension of mediation and manipulation is ‘tested’ and their boundaries are (re-)negotiated. As the story unfolds in a complex interlacing of anticipations within a montage of flashbacks, Fight Club’s nameless narrator (Edward Norton) suddenly ‘stops’ the film – the image freezes – to emerge, in another setting, from his (invisible) off-voice narration to directly address the audience. What follows is the literal exposure of the workings of the cinematic apparatus as Fight Club’s protagonists give us a glimpse into the projection booth: into what is by nature of cinema’s dispositif always outside of the frame, and thus outside of the spectator’s gaze. “A movie doesn’t come in one big reel, it comes on a few,” the narrator says while we see Tyler Durden in the background, tinkering about with a movie reel. “So someone has to change the projectors at the exact moment one reel ends and the next one begins. If you look for it you can see little dots coming in the upper right hand corner of the screen.” As if on cue, a dot appears in the corner to which Tyler Durden points his finger and says: “In the industry we call them cigarette burns” – the cue for the changeover.
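The protocol the narrator describes here amounts to a small state machine: play a reel until the first dot cues the projectionist to start the second machine, then switch at the second dot. The sketch below is a purely illustrative toy model of that protocol, with made-up cue names; it does not describe any real projection equipment.

# A schematic toy model of the changeover protocol described above.
# Cue names ("motor-cue", "changeover-cue") are invented for illustration.

def run_show(reels):
    # Play the reels in sequence; the first cue mark would tell the
    # projectionist to start the next projector rolling, the second one
    # triggers the actual switch.
    for number, reel in enumerate(reels, start=1):
        for frame in reel:
            if frame == "changeover-cue" and number < len(reels):
                break                      # switch to the next projector
            yield frame

show = [["frame"] * 4 + ["motor-cue"] + ["frame"] * 3 + ["changeover-cue"],
        ["frame"] * 5]
print(sum(1 for f in run_show(show) if f == "frame"))   # -> 12 visible frames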
Overtly media-reflective and self-conscious, this scene ruptures and simultaneously exposes cinema’s illusion of transparency by literally displaying invisible frames of (mediated) perception as ‘subliminal’ film frames spliced into the motion picture by Tyler Durden – a (trick) technique of film manipulation visibly marked in Fight Club as analog. Handed the agency of a film director, Tyler Durden re-arranges the frame(s) by an act of intrusion: he splices single frames of pornography into family films. In this act of framing, Tyler is exposed as an unscrupulous and subliminally aggressive (memory-)operator who manipulates and intrudes into the cinematic space, in turn exposing the cinematic apparatus as an ‘ideological machine’ that frames and thus constitutes collective and individual memory, perception and knowledge, while all the while simulating the ‘big picture’. Yet with all this (analog) ‘sampling’, ‘framing’ and ‘simulation’ – Tyler’s anarchistic (film-)technique is staged as old-fashioned handicraft work with a cutter and tape – which have become the key words for digitization, this scene opens up a dialog on the ‘media(r)evolutionary’ tension between film and computer by playing (on) a trick. This is a trick that highlights cinema’s essential characteristics: film’s innate quality to be a ‘simulation’ rather than merely a ‘representation’ – long before editing and montage went digital – and the inherent manipulatory quality of cinema’s underlying mechanical structure that relies on a trick on human perception to suggest motion, i.e. the “sampling of time […] twenty four times a second”, which, as Lev Manovich argues, already “prepared us for new media […]; what cinema accomplished was a much more difficult conceptual break – from the continuous to the discrete.”23 Though cinema’s end was most fiercely announced just as the movie projector and the film reel were to be replaced by new digital formats – which has, even ten years later, not yet been fully realized – there still remains skepticism as to whether the film industry’s ‘going digital’, something that remains, as John Belton insists, “relatively invisible to the average moviegoer,”24 actually does mark a revolutionary paradigm shift, or whether digitization is to be considered as the evolutionary perfection of what has always been cinema’s ‘essence’. Clearly, ‘digital film’ does not break with cinema’s over one hundred years of media-history. “Digital technologies,” Bruno Lessard argues, “do not erase a century of film-making practices and cinematic heritage overnight. Issues of representation, mise-en-scène, montage and performance do not disappear with the advent of digital media; they come back to life in forms that are mediated in a new way.”25 Though contemporary ‘digital film’ has become a hybrid mashup of “live action material + painting + image processing + compositing + 2-D computer animation + 3-D computer animation,”26 the transformational effects of digitization remain especially debated in terms of new aesthetics of realism, for even today, ‘digital film’ still appears as ‘film’ that chooses the option to appear ‘as-if’ (analog) ‘film.’27 Yet, as “digital encoding frees signs from a dependence on the medium of transmission,”28 the digital image can seamlessly merge live-action with computer-generated shots to generate hybrid moving images that can be entirely restructured from within as every pixel is potentially up for rearrangement. Allowing for an infinite variety of multiple layers and composite structures, these new cinematic images present – when analyzed with the knowledge of the underlying digital production process – hybrid mashups that expose the digital transformations as subliminal yet conceptual re-configurations that allow for new possibilities to frame and thus transgress traditional configurations of ‘eye, gaze and image’29 – eventually resulting in a ‘digital realism’ that looks ‘real’ even though it contradicts the laws of physics, gravity and optics as the computer generates ‘impossible images’ that, however, only expose themselves as such at a second, more rigorous inspection.30 It is Fight Club’s demand for this second, more rigorous inspection that marks the subliminal nature of the digital revolution31 as a paradigmatic shift that re-organizes not just the moving image, but (more visibly) the entire cinematic experience, and with it the cinematic gaze and cinema’s established conventions of ‘seeing’. For while in Fight Club Tyler splices frames into the moving picture, this technique is similarly performed by David Fincher himself, albeit with digital technology, and more importantly with the digital taken into account: a closer (DVD) viewing shows images of Tyler flashing up in single frames long before he is introduced as a protagonist to the narrative (cf. fig. 2).32
Fig. 2: Fight Club (USA 1999).
Fight Club thus creates a tension between analog and digital manipulation by playing them off against each other while testing the effect of the same trick – performed differently – on the spectator. In Fight Club’s narrative, the trick is subliminal, yet its effects are corporeal: children are confused and start to cry, while adults show feelings of vague discomfort. While these spectators (must) remain immobile and passive ‘consumers’, the same trick, digitally performed and intended “for the DVD crowd”33 asks for a new form of spectatorship which is to a certain degree (inter-)active: just as Fincher uses freeze frames and flashbacks as recurring film styles throughout Fight Club, the audience of the ‘digital era’ is equally invited to ‘manipulate’ the narrative (on DVD) by rewinding, fast-forwarding, progressing frame by frame and freezing them so as to get a glimpse of the ‘cues’ Fincher has spliced in – and which turn out to be clues for the overall understanding of Fight Club’s narrative (and visual) logic.34 Tyler Durden’s established first rule of Fight Club, “you do not talk about fight club”, thus reinforces and simultaneously challenges traditional rules and the traditional setting of cinema: while the ‘first impression’ of Fight Club must remain unbiased and ‘raw’ as its narrative relies on the final plot twist, the splicing of frames marks an inversion of anticipation and reconstruction – of the (temporal) experience of pro- and retention which is essentially simulated by cinema’s mechanical structure – subliminally shifting the boundaries of in/visibility as film gradually moves ‘out of cinema’. Fincher’s demand to watch Fight Club again and actively look for the spliced frames in order to reconstruct an anticipation that is hidden in clear view thus expands the cinematic experience onto new digital formats, whereby film, mediated in new ways, is re-configured as “part-text, part-archive, part-point of departure, part-node in a rhizomatic, expandable network of inter-tribal communication.”35 Fight Club thus stands as a paradigmatic example of film on the verge of ‘going digital’: it incorporates a new set of rules demanded by the growing digital media landscape while it is still figuring them out and testing them on its viewers.36
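Stated in digital terms, the trick the film performs (and that Fincher repeats for the DVD audience) reduces to simple surgery on a frame sequence. The sketch below is an illustrative abstraction of that idea, not any actual editing tool; the names are hypothetical.

# A minimal sketch of the splicing trick discussed above: a film is a
# sequence of frames (24 per second), and one inserted foreign frame is
# subliminal at playback speed yet findable by stepping frame by frame.

FPS = 24   # cinema's "sampling of time": twenty-four frames a second

def splice(film, frame, at_second):
    # Insert a single foreign frame; at playback speed it lasts ~42 ms.
    i = int(at_second * FPS)
    return film[:i] + [frame] + film[i:]

film = ["scene"] * (10 * FPS)                 # ten seconds of ordinary footage
film = splice(film, "tyler", at_second=4.0)   # one subliminal flash

# Stepping through frame by frame, as 'the DVD crowd' is invited to do,
# exposes the cue that playback speed conceals:
print([i for i, f in enumerate(film) if f == "tyler"])   # -> [96]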
The tests on the viewer are manifold, as Fight Club’s narrative unfolding on screen plays with and consistently breaks traditional rules of storytelling to reinforce the tension of mediation and manipulation, while it merges the narrator’s (self-)delusion with that of the spectator. The mediated tension is, from the start, Fight Club’s premise, further reinforced by the computer-generated images and the ways in which they are inserted into and comment on the narrative. In the first twenty minutes of Fight Club, the nameless narrator takes the audience on a (virtual) trip in non-linear flashbacks, to recount in voice-over narration the story of how he came to meet the charismatic aphorist Tyler Durden who, towards the movie’s close, turns out to be merely the narrator’s imaginary, ‘embodied’ alter-ego. The mediated tension in Fight Club manifests itself as a pathological dis-order: “For six months, I couldn’t sleep,” the narrator tells us, as we see him lying in his bed, eyes wide open, with a cordless telephone by his side. The sentence resounds in distancing, dissociating echoes that merge with the sound of a clock ticking at accelerated speed and a palpitating sound, like a nervous heartbeat. A constant buzzing noise, as if coming from an electronic amplifier, merges with the sound of a photocopier; here and there, unobtrusive electronic beeping sounds come from somewhere far off. While the overall cinematographic look of this scene is kept “fairly bland and realistic,”37 the desaturated colors in contrast to the media-oversaturated sounds give an accurate feel of the narrator’s monotonous and emasculated life and enhance the mediated tension in the world of all-encompassing mass consumer culture portrayed in Fight Club. In this world, the narrator remains nameless:38 even he himself is depicted as nothing more than a free-floating signifier, on the verge of losing all reference to the real. Having incorporated Western late capitalist consumer culture to the extreme, the narrator becomes a pathological ‘postmodern case study’, presenting all symptoms of what Jean-François Lyotard termed the ‘postmodern condition’ which generates, according to Fredric Jameson, a “new depthlessness […] in a whole new culture of the image or the simulacrum” in which signifiers are bereft of their significance.39 Suffering from insomnia, the narrator remains in a detached and disoriented (mediated) ‘in-between state’ in which, he tells us – while we see him lying on the couch at night, his zombie-like eyes glued to the home shopping network channel on TV – “you’re never really asleep and you’re never really awake.”
His descriptions of insomnia, where “nothing’s real. Everything is a copy, of a copy, of a copy,” resonate with Jean Baudrillard’s dystopian theories of ‘hyperreality’ in which the boundaries between the real and representation are blurred: signs no longer refer to the real, but only to representations, resulting in a disorder of consciousness that disturbs the narrator’s sense of presence as he is subjected to a totalizing media system and the all-encompassing multinational consumer culture whose “new area of commodification [is] pre-eminently representation itself.”40 As such, the narrator’s pathology must result in ‘déjà-vu’ experiences and a fallible memory, for memory is an act of “re-representation, [of] making present”, and is, as Andreas Huyssen writes, “always in danger of collapsing the constitutive tension between past and present, especially when the imagined past is sucked into the timeless present of the all-pervasive virtual space of consumer culture.”41 With its narrator stuck in precisely this mediated in-between state of perpetual presentism where the borders – between past and present, real and representation – have collapsed, Fight Club exposes his flashback – the narrative act of recollection, thus always already ‘mediated’ – and his (split) voice-over narration as a flawed, possibly misarranged construct, an unreliable, virtual account: only one possible version of a story that could have been different (and in the end turns out to be). Configuring memory as re-representation and representation as hollowed-out, free-floating signifiers in a virtual world of consumerism, Fight Club’s narrative deals with the eminent concerns about the effects of digitization as a cultural and philosophical phenomenon. Overtly resonating with critical postmodern thought and its ‘rhetoric of loss’, it stages the end of the real at the end of the reel world while exemplifying Vivian Sobchack’s assertion that our everyday encounters with media-technology “transform us as subjects.”42 As the narrator struggles through the first part of the movie, Fight Club depicts – in order to brutally transgress – the effects media have “upon the historically particular significance or ‘sense’ we have and make of those temporal and spatial coordinates that radically inform and orient our social, individual, and bodily existences.”43 Yet, Fight Club does not mark off the virtual world by means of a computer screen – as does, for instance, The Matrix, following the tradition of a long list of cyberpunk films;44 instead, it configures the mediated space that surrounds us, that precedes us and comes in-between as essentially virtual, and it does so literally: by depicting the narrator’s imagination and the world of consumer culture with the help of computer-generated, digital imagery (see fig. 3).
Fig. 3: Fight Club (USA 1999).
As the narrator walks through his apartment, ordering via telephone the newest IKEA collection, the camera, in a panning shot, follows him as the room fills with his desires: the furniture he orders, and even the catalog captions, pop up on screen in real time, to unfold into a “virtual catalog showroom.”45 In this crucial scene, the computer-generated digital imagery transcends its narrative function and becomes meta-reflective, marking digitization not just as a technical changeover, but as an all-embracing cultural paradigm shift that crucially affects the narrator’s ‘being-in-the-world’ and the ways in which he interacts with the world around him.
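At the level of the image, the ‘seamless merging’ of live action and computer-generated elements at work in such scenes rests on compositing arithmetic, classically the Porter-Duff “over” operator. The sketch below is a generic illustration of that operator, under stated assumptions; it is not the pipeline actually used for Fight Club, and all array names are invented.

# A generic sketch of the arithmetic behind hybrid images: lay a rendered
# foreground, with a per-pixel transparency mask, over a filmed background.
# Every pixel is, as noted above, up for rearrangement.

import numpy as np

def over(fg_rgb, fg_alpha, bg_rgb):
    # Porter-Duff "over": foreground weighted by its mask, background by
    # the remainder.
    a = fg_alpha[..., None]                   # broadcast alpha over RGB
    return fg_rgb * a + bg_rgb * (1.0 - a)

live_action = np.random.rand(1080, 1920, 3)   # stand-in for a filmed frame
cg_layer = np.zeros((1080, 1920, 3))          # stand-in for rendered 'furniture'
mask = np.zeros((1080, 1920))
mask[400:700, 800:1200] = 1.0                 # the CG element's footprint

hybrid_frame = over(cg_layer, mask, live_action)   # one frame of the mashup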
The uncanny nature attached to the postmodern conception of the digital image furthermore reinforces the uncanny nature of what should be the narrator’s home, which in turn is literally exposed as uncanny, un-heimlich, un-homely. But not only does the narrator’s virtual surrounding, his condo “full of condiments and no food,”46 lack any personal or ‘historical’ dimension; even the narrator, deprived of any profound human relationships, defines his very identity merely through commodities that reduce his self to the status of representation and display. As such, the narrator seems to be surrounded by a digital space that lacks “the temporal emphases of historical consciousness and personal history” and thus “becomes abstract, ungrounded, and flat – a site for play and display rather than an invested situation in which action ‘counts’ rather than computes.”47 More than Neo, ‘the chosen one’ in The Matrix who sets out to fight against an autonomous system of artificial intelligence that has enslaved humankind – or any other cyborg ‘gestalt’ – Fight Club’s everyday man represents the new ‘posthuman’ in his struggle to cope with the shifting notions of self, memory and body – of ‘being-in-the-world’ – in a culture that is becoming an increasingly mediated, globally expanding computer network, driven by the invisible laws of consumer culture. Where cyberpunk films tend to neglect material and physical concerns, projecting instead a “lethargic worldview, with the rebels’ jacked-in minds and comatose bodies fully indicating the consequences of disengagement,”48 Fight Club stages materiality in the most literal sense: the human flesh. Literally ‘punched to his senses’, the narrator is gradually relieved of his mediated apathy through a corporeal experience: the un-mediated experience of physical pain, leaving behind “bodily traces” that constitute, in the words of Aleida Assmann, “contact with reality,”49 traces that eventually lead the narrator to his self. Rather than indulging in a world that has left behind the real, Fight Club’s punches brutally resonate with Sobchack’s simple yet striking reminder that “there is nothing like a little pain to bring us (back) to our senses.”50 Otherwise, ‘going digital’ would mean to literally ‘cross the borders of madness.’51 It is not until almost the end that Fight Club’s audience realizes that the ‘borders of madness’ have been crossed from the very beginning. It is here, at the film’s end, that we must appreciate Fight Club’s opening scene as the epitome and anticipation of the (medially) disturbed postmodern state of ‘being-in-the-world’: a crisis of body, memory and representation for which Fight Club’s virtual camera ride through a digitally animated neuronal landscape stands as emblematic. As the virtual camera pulls back from within a rhizome of computer-generated neurons and firing synapses to move from the inside out, the inversion of the cinematic gaze marks a paradigmatic shift in the re-organization of in/visibility enabled by digital imaging technologies and the medical illustrations after which Fight Club’s opening scene is modeled. Fight Club’s starting point thus cinematically appropriates a point of view derived from the scientific incorporation of the interface: the digital media-technology is already inside the mind as it inverts the cinematographic gaze to move, now, from the inside out. Fight Club’s virtual realization of the (literal) ‘mind’s-eye view’ thus stands for the (metaphorical) incorporation of the medium, marking its invisible ‘in-between’ as the a priori of human perception, while exposing the digital revolution as essentially a conceptual reorganization of ‘making-meaning’ and ‘making-sense’ – once again, ‘sense’ both in a metaphorical and literal ‘sense’. The paradigmatic shift of the digital revolution is thus exposed in Fight Club, literally displayed even, as the effects of an incorporated logic of perception that has pathologically re-configured the narrator’s ‘mindset’, generating a delusional logic of inference reinforced by Fight Club’s consistent inversion of the literal and the metaphorical.52 Occasionally accompanied by ironic or cynical remarks, the narrator’s ‘mind’s-eye views’ – (computer-generated) visualizations of his imaginary thoughts – are inserted into the narrative to comment on what is happening with and around him. As the narrator imagines a pyrotechnic setup, an aerial collision or an explosion inside his apartment, the virtual gaze of his imagination (the virtual camera) travels “at the speed of thought” while trespassing any physical constraint as the gaze moves through a sidewalk or a van wall, or inside a bomb to show how it has been wired.53 In an ironic inversion of science fiction, even Fight Club’s litter figures as a metaphor for the wide-ranging effects of computerization: no longer a zoom into outer space, the computer’s vision figures as a virtual zoom pulling back through the narrator’s office wastebasket filled with (computer-generated) ‘corporate litter’ – accompanied by science fiction sound effects – as he cynically remarks that “[w]hen deep space exploration ramps up, it will be the corporations that name everything: The IBM Stellar Sphere. The Microsoft Galaxy. Planet Starbucks.” Though an ironic inversion, this scene expresses the (postmodern) fears of technological appropriation by multinational corporations and the questions of gaining, distributing and forming knowledge – of “who will have access” and “who will know”:54 questions becoming ever so prominent with computerization, most notably with the introduction of the global computer network.
Even though the narrator intends to take refuge from this all-encompassing system in his underground fight club, the project is doomed, from the start, to take on the same fatal structures of the (cultural) logics of computerization the narrator has incorporated. While the fight clubs gradually disperse, like an information network, throughout the whole nation with an unforeseen dynamic, the narrator himself is equally dispersed, dis-embodied and dis-placed in time and space during his narcoleptic moments where, as he tells us, “I nod off, I wake up in strange places, I have no idea how I got there.” Equally, Project Mayhem’s wish to resurrect the socially and economically ‘emasculated’ male results in the opposite, as it reinforces the dissolution of its members in the mass anonymity and facelessness of organized fascist totalitarianism. Anticipated by Fight Club’s physical punches, it is the phenomenology of the flesh which, in times of digital dematerialization (and ultimately dis-embodiment), promises resurrection through a corporeal experience of perception. In a final act of self-destruction, the narrator fires the gun he points at “not my head, Tyler, our head,”55 in turn killing off Tyler Durden, his imaginary alter ego, as the bullet penetrates the narrator’s flesh: the intertwining of inner/outer, in/visible, real/illusion, of touching and being touched, of perceiving and being perceived, of seeing and being seen.56 “Tyler, I want you to really listen to me,” the narrator says just before he pulls the trigger, “My eyes are open.” Yet, even though Fight Club’s annihilation of the virtual persona (presumably) marks the end of the narrator’s self-delusion, what the narrator sees, now that his eyes are open, is slightly irritating from the perspective of the reading of the film I have suggested. Fight Club ends with the narrator witnessing the annihilation of the financial district: corporate buildings collapsing, one by one, to perform a perfectly orchestrated, computer-generated “theatre of mass destruction.”57 Designed with the intention “not to make it look real, but to make it look cool”, as “really good animation” is, according to David Fincher, “better than the real thing […] – every time,”58 the final scene, reinforced by the Pixies’ song “Where Is My Mind?”, poses a final question as to how far ‘going digital’ has already progressed and altered our perception of the ‘real’, and how far-reaching the consequences of the digital revolution will be in conceptually reorganizing the ‘order of things.’59 If Fight Club does, as I argue, capture the paradigmatic shift of digitization and its effects on media culture, the film’s envisioned cinematic speculations must allegorize, in hindsight, at least to some extent, ‘real’ cultural anxieties of the new millennium. Ten years after Fight Club’s release, it is unsettling in many ways that, in retrospect, the movie’s final scene has turned out to be (culturally) predictive – both literally and metaphorically.60 In 1991, digital media artist Char Davies wrote that
[i]f we create a model of a bird to fly around in virtual space, the most this bird can ever be, even with millions of polygons and ultra-sophisticated programming, is the sum of our (very limited) knowledge about birds – it has no otherness, no mysterious being, no autonomous life. What concerns me is that one day our culture may consider the simulated bird (that obeys our command) to be enough and perhaps even superior to the real entity. In doing so we will be impoverishing ourselves, trading mystery for certainty and living beings for symbols.61
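Davies’s point can be made literal in code: a simulated bird is nothing but the rules its makers wrote down. The sketch below, loosely in the spirit of Craig Reynolds’s classic ‘boids’ rules (cohesion, alignment, separation), is an illustration of her argument, not a rendering of her own work; all parameters are arbitrary.

# Everything this "flock" will ever do is listed in the three rules below:
# steer toward the group's centre, match the neighbours' heading, keep a
# little distance. Purely illustrative, with arbitrary constants.

import numpy as np

def step(pos, vel, dt=0.1):
    centre = pos.mean(axis=0)
    for i in range(len(pos)):
        cohesion = centre - pos[i]
        alignment = vel.mean(axis=0) - vel[i]
        separation = (pos[i] - pos).sum(axis=0) * 0.01
        vel[i] += 0.05 * (cohesion + alignment + separation)
    return pos + vel * dt, vel

pos, vel = np.random.rand(20, 2), np.random.randn(20, 2)
for _ in range(100):
    pos, vel = step(pos, vel)
# The simulated bird is exactly the sum of our (very limited) knowledge:
# no otherness, no mysterious being, no autonomous life.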
Since then, almost two decades of digitization have passed, and the transformational effects of the digital revolution on the reconfigurations of ‘eye, gaze and image’ are only gradually emerging on the surface of the cinema screen: as inversions of the cinematic gaze or ‘impossible images’ that appear to be ‘real’, calling for a discussion of their altering effects on human perception – effects which resonate in Kevin Tod Haug’s, in light of Davies’s concerns, slightly perplexing remark that they “added some CG-pigeons to make it look real.”62
Notes
1. Martin, Kevin H. 2000. A world of hurt. Cinefex 80: 115–131, 117.
2. i.e. Juhasz, Alexandra. 2001. The phallus unfetished: the end of masculinity as we know it in late-1990s “feminist” cinema. In: The end of cinema as we know it. American film in the nineties, ed. Jon Lewis, 210–221. New York: New York University Press; Brookey, Robert Alan and Robert Westerfelhaus. 2002. Hiding homoeroticism in plain view: the Fight Club DVD as digital closet. Critical Studies in Media Communication 19 (1): 21–43.
3. Giroux, Henry A. and Imre Szeman. 2001. Ikea boys fight back. Fight Club, consumerism, and the political limits of nineties cinema. In: The end of cinema as we know it. American film in the nineties, ed. Jon Lewis, 95–104. New York: New York University Press, 102.
4. O’Hehir, Andrew. 1999. ‘Fight Club’. The late-90s crisis of masculinity has arrived in pop culture with a vengeance. Salon, October 15.
5. i.e. McClean, Shilo T. 2007. Digital storytelling: the narrative of visual effects in film. Cambridge (Mass.): MIT Press; latest publications in German: Richter, Sebastian. 2008. Digitaler Realismus: zwischen Computeranimation und Live-Action. Die neue Bildästhetik in Spielfilmen. Bielefeld: transcript; Flückiger, Barbara. 2008. Visual Effects: Filmbilder aus dem Computer. Marburg: Schüren.
6. McClean, 47.
7. Richter analyzes this change on the aesthetic level, entirely leaving out the film’s narrative, while McClean investigates the narrative functions of digital visual effects.
8. McClean, 47. The audience was thus not able to ‘read’ the narrative implications of this digital visual imagery: namely that it presents the first clue to the overall understanding of the film’s narrative logic.
9. cf. Martin, 117 and 126. The scene was nicknamed “Jittercam” by the director and his visual artists because a handheld camera shot of Brad Pitt, which was taken as the operator was rocking the camera from side to side, was digitally manipulated by adding motion blur.
10. David Fincher, as quoted in Martin, 126.
11. The film’s reference to reality is what French film theorist André Bazin designated as one of the medium’s essential characteristics.
12. Lev Manovich speaks of “the ‘crisis’ of cinema’s identity”: Manovich, Lev. 1999. What is digital cinema? In The digital dialectic. New essays on new media, ed. Peter Lunenfeld, 172–192. Cambridge (Mass.): MIT Press, 173; cf. also the collection of essays that trace the effects of digitization on cinema in: Kloock, Daniela (ed.). 2008. Zukunft Kino: the end of the reel world. Marburg: Schüren.
13. i.e. As discussed in the context of the ‘apparatus theory’ by Jean-Louis Baudry or Jean-Louis Comolli.
14. cf. Freyermuth, Gundolf S. 2009. Digitale Lektionen: Medien(r)evolution in Film und Kino. Film-Dienst 02: 6–9.
15. Belton, John. 2002. Digital Cinema: A False Revolution. October 100: 98–114, 103.
16. These anxieties were most obviously expressed on cinema’s screen in eXistenZ, The Thirteenth Floor, and The Matrix, which were all released in 1999, only a few months before Fight Club.
17. Lunenfeld, Peter. 1999. The real and the ideal. In The digital dialectic. New essays on new media, ed. Peter Lunenfeld, 2–5. Cambridge (Mass.): MIT Press, 3.
18. The long list of American/Hollywood films starring the computer as a pro- or antagonist already starts in the 1950s, with Gog (USA 1954), Invisible Boy (USA 1957) or Desk Set (USA 1957).
19. Bukatman, Scott. 1998. Zooming out: the end of offscreen space. In The new American cinema, ed. Jon Lewis, 248–272. Durham & London: Duke University Press, 249. This is the case in most cyberpunk films of the 1990s, i.e. the early Tron (USA 1982), The Lawnmower Man (USA 1992) or Johnny Mnemonic (USA 1995) and The Matrix (USA 1999).
20. The term was introduced by Lunenfeld, Peter. 2000. Digital photography: the dubitative image. In Snap to grid. A user’s guide to digital arts, media, and cultures, ed. Peter Lunenfeld, 55–69. Cambridge (Mass.): MIT Press.
21. Chesher, Chris. 1997. The ontology of digital domains. In Virtual politics. Identity and community in cyberspace, ed. David Holmes, 79–92. Thousand Oaks, Calif.: Sage Publications, 86.
22. Cf. Tholen, Georg Christoph. 2005. Einleitung. In SchnittStellen, ed. Sigrid Schade, Thomas Sieber & Georg Christoph Tholen, 15–25. Basel: Schwabe, 20.
23. Manovich, 50.
24. Belton, 103.
25. Lessard, Bruno. 2005. Digital technologies and the poetics of performance. In New punk cinema, ed. Nicholas Rombes, 102–112. Edinburgh: Edinburgh University Press, 102.
26. Manovich, 301.
27. cf. Ochsner, Beate. 2009 (forthcoming). Zur Frage der Grenze zwischen Intermedialität und Hybridisierung. In Intermediale Inszenierungen im Zeitalter der Digitalisierung, ed. Andreas Blättler, Doris Gassert, Susanna Parikka-Hug & Miriam Ronsdorf. Bielefeld: Transcript.
28. Chesher, 86.
29. cf. Tholen, Georg Christoph. 2003. Dazwischen. Zeit, Raum und Bild in der intermedialen Performance. In Medien und Ästhetik. Festschrift für Burkhardt Lindner, ed. Harald Hillgärtner & Thomas Küpper, 275–291. Bielefeld: Transcript.
30. cf. Richter.
31. cf. Belton.
32. There are spliced frames of Tyler Durden e.g. when the narrator is in front of the photocopier at work, when the doctor advises him to visit support groups, or after a support group meeting when he watches Marla walk away.
33. Martin, 121.
34. The changeover, which constitutes a splitting/doubling of the reels, has been similarly performed in the persona of the narrator, exposing Tyler Durden as an imaginary alter ego that has sprung from a mentally deluded mind. That Tyler Durden is personified by Brad Pitt is no coincidence: Fincher is clearly playing on Hollywood’s celebrity culture in which Brad Pitt’s appearance both projects and is molded into the stereotypical category of the ideal (vs. the real), the masculine, and the heroic; emphasizing, in turn, the role media play in forming and propagating ideal (stereo-)types and self-perceptions.
35. Elsaesser, Thomas. 2009. The mind-game film. In Puzzle films: complex storytelling in contemporary cinema, ed. Warren Buckland, 13–41. Oxford: Blackwell, 35.
36. cf. Elsaesser, 35ff., esp. 37.
37. Contrarily, the scenes with Tyler Durden are “more hyper-real in a torn-down, deconstructed sense”; cf. Probst, Christopher. 1999. Anarchy in the U.S.A. http://www.theasc.com/magazine/nov99/anarchy/index.htm/. Accessed 1 August 2009.
38. He (possibly) refers to himself as Jack by refashioning a reference from “an article written by an organ in the first person: I am Jack’s medulla oblongata”, an expression which his voice-over occasionally picks up to comment on his (emotional) experience: “I am Jack’s broken heart”, or “I am Jack’s wasted life”.
39. Jameson, Fredric. 1992. Postmodernism, or, the cultural logic of late capitalism. Durham: Duke University Press, 6.
40. Connor, Steven. 1990. Postmodernist culture: an introduction to theories of the contemporary. Oxford; Cambridge (Mass.): Blackwell, 46.
41. Huyssen, Andreas. 2003. Present pasts: urban palimpsests and the politics of memory. Stanford: Stanford University Press, 10.
42. Sobchack, Vivian. 1994. The scene of the screen: envisioning cinematic and electronic “presence”. In Theories of the new media: a historical perspective (2000), ed. John Thornton Caldwell, 137–155. London: The Athlone Press, 137.
43. Ibid.
44. cf. ‘The Matrix/Fight Club trailer mashup’ done by ‘kinomozg’, which plays with the striking similarities between the two films, http://www.youtube.com/watch?v=2ctZAysb8Ms/. Accessed 1 August 2009.
45. Martin, 121.
46. The narrator in Fight Club.
47. Sobchack, 151.
48. Short, Sue. 2005. Cyborg cinema and contemporary subjectivity. Basingstoke: Palgrave Macmillan, 174.
49. Assmann, Aleida. 1996. Texts, traces, trash: the changing media of cultural memory. Representations 56: 123–134, 132.
50. Sobchack, Vivian. 1991. In response to Jean Baudrillard. Baudrillard’s obscenity. Science Fiction Studies, http://www.depauw.edu/SFs/backissues/55/forum55.htm/. Accessed 1 August 2009.
51. cf. N. Katherine Hayles. 1991. In response to Jean Baudrillard. The borders of madness. Science Fiction Studies, http://www.depauw.edu/SFs/backissues/55/forum55.htm/. Accessed 1 August 2009.
52. cf. Trifonova, Temenuga. 2002. Time and point of view in contemporary cinema. CineAction: 11–21.
53. cf. Martin, 126ff.
54. Lyotard, Jean-François. 1984. The postmodern condition: a report on knowledge. Minneapolis: University of Minnesota Press, 6.
55. The narrator in Fight Club.
56. Maurice Merleau-Ponty, Le visible et l’invisible, 1964.
57. Tyler Durden in Fight Club.
58. Martin, 129. It took fourteen entire months to digitally create the final shot.
59. Michel Foucault, The order of things (Les mots et les choses, 1966).
60. Literally, with the collapse of the Twin Towers in 2001, and metaphorically with the collapse of the finance system, resulting in the financial crisis of 2008/2009.
61. Davies, Char. 1991. Natural artifice. In Virtual seminar on the bioapparatus, ed. Mary Ann Moser. The Banff Centre for the Arts, http://www.immersence.com/publications/char/1991-CD-Bioapparatus.html/. Accessed 1 August 2009.
62. Kevin Tod Haug, Hollywood visual effects designer, speaking about his work as an artistic director for Quantum of Solace (USA 2008) at fmx/09, 14th International Conference on Animation, Effects, Games and Digital Media, on 7 May 2009.
Creativity, Participation and Connectedness: An Interview with David Gauntlett
Stefan Sonvilla-Weiss invited me to contribute to this book, and suggested an interview. In the spirit of ‘Mashup Culture’, I invited people to send me questions via Twitter and Facebook. (So, it’s not really a mashup, but at least it’s questions coming together from different sources, and from people around the world. So it’s actually another buzzword – crowdsourcing). The questions arrived, of course, in a random order, from different places in Europe, the United States, and Australia. I have tried to sort them into a sequence of questions which makes some kind of sense. I have to apologise to the several people whose questions I haven’t used. Typically these were excellent questions, but about issues or areas where I had no knowledge or little to say, apart from some admiration for the question and perhaps some speculation. Since readers don’t really have any use for my admiring, speculative answers, I thought it was better to leave those out. We begin with a definition and discussion of ‘Web 2.0’, and whether it is a useful or distinctive term. We then turn to ethical issues, implications for education, the ‘making is connecting’ project, and academic public engagement.
The Meaning of Web 2.0
Maria Barrett, by email: Can you give us a simple, one-line definition of Web 2.0?
David Gauntlett: That’s a good place to start. Here’s my attempt at a single-sentence definition: ‘Web 2.0 is about the Web enabling everyday users to share their ideas and creativity, and collaborate, on easy-to-use online platforms which become better the more people are using them’. Now I’ll take several more sentences to explain it. For a slightly longer explanation of Web 2.0, I tend to say that the former way of doing things – which we might retrospectively label the ‘1.0’ approach – was as though each individual who contributed to the Web was making their own individual garden. Each of these gardens might be lovely, and full of good things, but they were separate, with a big fence between each garden and the next one. Visitors could look at the garden, and make comments, but that was the extent of the interaction. Web 2.0, meanwhile, is more like a shared allotment. Anyone can come along with their spade and their seeds, plant new things, change what’s there, and do what they like in the space. Because it’s a communal space, it is likely to be ‘policed’ by other contributors, who will (generally) want to keep it nice – so I am not describing a wholly anarchic picture.
Visitors who don’t want to actively participate, of course, can just look at it, or just make comments. That is a description of how Wikipedia works – Wikipedia being the archetypal example of Web 2.0 collaboration in action. It also more-or-less describes Flickr, YouTube, eBay, Facebook, and other such Web 2.0 applications, although of course the details of what you can and can’t change, in each one, will vary. I should mention, incidentally, that the ‘1.0’ model is not necessarily a terrible way of doing things. My own website, Theory.org.uk – and the other ones I’ve made – are generally like that, where I just want to ‘broadcast’ some of my own material, and get responses back. And, since my sites are entirely handmade by me, they are limited by my own technical abilities, and I don’t have the skills to create a very Web 2.0-enabled site – although, actually, these days there are some handy online Wiki tools where you can just manage a Wiki that’s already set up on someone else’s server. So, with some things, I’m a bit old-fashioned, and I want to retain control over how my stuff is presented (although, of course, people can take and change and remix it if they want). On the other hand, Theory.org.uk was originally a site with resources about particular thinkers, such as Michel Foucault, Judith Butler, Anthony Giddens, Theodor Adorno, and others. Several years ago, I realised that it was pointless to have me tending my own little Michel Foucault ‘garden,’ when there was a community of expertise doing something much better on the communal Michel Foucault ‘allotment’ in Wikipedia, so I gave up on that area and added a page advising users to go to Wikipedia instead. There are certain things, though, like the Theory.org.uk Trading Cards, and Lego models of social theorists, which would have no place on Wikipedia and are still in some quirky corner of my site. Returning to the one-sentence definition: in my formulation I deliberately highlighted the role of ideas, creativity, and collaboration; and I said they should be ‘easy-to-use online platforms’ for ‘everyday users’ because we are talking about stuff which is not necessarily new for the very technically-minded. The point about Web 2.0 as a recent phenomenon is that suddenly there are nice, simple tools which most Web users would feel comfortable with. That’s what has emerged in the past few years. It runs on the same old Web, the one invented by Tim Berners-Lee almost 20 years ago, but it took time before some clever people, with the common touch, could design some friendly interfaces. And things like the growth of broadband have also helped. YouTube on a dial-up modem is pretty pointless. Finally, my one-sentence definition says that the platforms ‘become better the more people are using them’, which is the point made by Tim O’Reilly, who coined the term ‘Web 2.0’: these are sites which embrace their network of users, and consequently become richer as more and more people contribute to them. (See O’Reilly, 2006a, for a good account of this.) There are other brief definitions of Web 2.0, of course. In a blog post entitled,
‘Web 2.0 Compact Definition: Trying Again,’ Tim O’Reilly (2006b) himself suggests this definition: ‘Web 2.0 is the business revolution in the computer industry caused by the move to the Internet as platform, and an attempt to understand the rules for success on that new platform. Chief among those rules is this: Build applications that harness network effects to get better the more people use them. (This is what I’ve elsewhere called “harnessing collective intelligence”)’.
Frankly, I prefer mine. To me, Web 2.0 is all about everyday users being able to share, create, and collaborate. Characterising it as a ‘business revolution in the computer industry’ seems, rather surprisingly, to miss the most exciting points.
Maria Barrett, by email: Is Web 2.0 simply communicating and connecting – by which I mean, haven’t we always had this?
David Gauntlett: Obviously, humankind has indeed enjoyed creating, connecting, and collaborating for several thousand years. The thing that is new is that people who didn’t previously know each other, spread around the world, who would never have met, can come together online because of a shared interest, or a friend of a friend of a friend, and discuss, create, or plan things instantaneously – things which otherwise would have been impossible, or very slow and difficult to organise. And incidentally, the people don’t have to be spread all over the world, of course. I live in Walthamstow, a town on the edge of London, and if I contribute to the Wikipedia page about Walthamstow, I am collaborating with people who probably mostly live within one or two miles of me, but I still would most likely never have had any interaction with them in my physical life.
Jason Hartford, via Facebook: What is the point – the technology, or the reaction to it?
David Gauntlett: The ‘point’ of Web 2.0, and what makes it interesting, is not the technology, but what people do with it. I wouldn’t call this a ‘reaction’ as such, as this seems to situate users as an audience for technological innovation. The point is that people take up these tools and use them in inventive ways. So it’s not technology, or a reaction to technology, but an everyday creative use of tools which, ideally, are enabling kinds of tools which mean that people can communicate, create, and collaborate in new ways. More cautious critics would point out that the creative individuals do not own the online tools themselves – instead these tend to be owned by profit-oriented companies who can choose to enable or not enable different kinds of activity, and in some cases may make claims over the material produced. It’s important to remember these cautionary points, and we’ll come onto these issues in a later question.
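Gauntlett’s ‘shared allotment’ can be restated as a data structure: a single page anyone may edit, with a revision history that lets other contributors ‘police’ it by reverting. The toy sketch below is purely illustrative; it is not Wikipedia’s (MediaWiki’s) actual data model, and all names are hypothetical.

# A toy model of the "shared allotment": one page anyone can change,
# with a history so the community can keep it nice by reverting.

class SharedPage:
    def __init__(self, text=""):
        self.history = [("(start)", text)]   # list of (editor, text) pairs

    @property
    def text(self):
        return self.history[-1][1]           # the current revision

    def edit(self, editor, new_text):        # anyone may plant something...
        self.history.append((editor, new_text))

    def revert(self, editor):                # ...and anyone may undo it
        self.history.append((editor, self.history[-2][1]))

page = SharedPage("Walthamstow is a town on the edge of London.")
page.edit("anon", "Walthamstow is the best town ever!!!")
page.revert("neighbour")                     # the communal 'policing'
print(page.text)                             # back to the earlier version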
Steven Green, by email: Is Web 2.0 – social networks, and the related discussion about creativity and collaboration – really new? Those of us in interest groups on PRESTEL and BBSs in the 1980s were doing something very similar: limited by the technology, but still creating bodies of knowledge and a communicating community. Then we did the same on CompuServe, before the broadcast nature of the early World Wide Web made us take a step backwards. Surely now we are just re-inventing?
David Gauntlett: Well, yes, some of those early networks did have some of those features, and brought people together to work on projects of shared interest, and so on. I’m not especially concerned with whether Web 2.0 is something new or not on a technical level. Indeed, from the very start, Tim Berners-Lee intended the Web to be a place where people would collaborate and share ideas and information – to be ‘writers’ as well as ‘readers’, or ‘producers’ as well as ‘audience’. So, I’m not interested in making any claims about newness; but as I said above, the important thing is accessibility and reach. Nowadays we have easy-to-use online tools which enable people to communicate and collaborate without needing much technical know-how. And it’s sufficiently popular that they can find other people who share their interests, no matter what those interests are. Fifteen years ago, that was not really the case – then, you did need technical skills. Around that time, I started making a site in HTML using Notepad, the very basic text editor that comes with Windows, a free graphics program and a free FTP program, so this was very cheap and, if you’re reasonably familiar with computers, quite easy – I’m a sociologist, not a trained programmer. But my point is that you had to go ‘behind the scenes’ of the Web browser, which would be off-putting to many users. Ten years ago, you could make your own individual ‘garden’-type site within your browser using an online tool such as GeoCities, and it was rather clunky, but ok. Five years ago, blogs had suddenly become common, and the other tools were becoming better-known and easier to use. But something that has been technically possible for twenty or more years has only really come of age, and become mainstream, in the past five years or so. It’s Web 2.0 as a social phenomenon, not as a technological achievement, which is the interesting thing.
Ethics and Exploitation
Stefan Sonvilla-Weiss, by email: Nicholas Carr has said: ‘By putting the means of production into the hands of the masses but withholding from those same masses any ownership over the product of their work, Web 2.0 provides an incredibly efficient mechanism to harvest the economic value of the free labor provided by the very, very many and concentrate it into the hands of the very, very few.’ Do you agree?
David Gauntlett: Carr’s argument is well-intentioned, and would be really powerful
if it was entirely correct, but actually it’s not really the whole story and doesn’t apply across the board by any means. Frankly, the best-known Web 2.0 projects simply don’t claim ownership in the way he asserts – although they obviously support themselves, and may seek to profit (often not very successfully, to date), by hosting interesting content made by others. Let’s look at a few examples. Wikipedia, which as I’ve said is perhaps the most exemplary case of Web 2.0 in action, doesn’t claim ‘ownership’ of people’s contributions, and does not seek to profit from the content. All of the content is available for free, under a Creative Commons ‘Attribution – Share Alike’ licence, which basically says that you can do what you like with the material, as long as you credit the original source. YouTube has come under a bit more fire, for example in the Lawrence Lessig article which Carr was responding to when he wrote the text quoted in the question (see http://www.roughtype.com/archives/2006/10/web_20ier_than.php), because it does not enable users to download the original uploaded video files in order to be able to modify or play with them (although downloading a version at the non-original quality that you see on your screen is straightforward). But even so, YouTube does not take ownership of uploaded videos: the Terms of Service explicitly state ‘You retain all of your ownership rights in your User Submissions, but you are required to grant limited licence rights to YouTube and other Website users’. You grant YouTube a worldwide, non-exclusive, royalty-free licence to show your work, and a similar licence to YouTube users, to use and remix material from the site. This seems fair enough, and is not a hidden contractual detail that would surprise or infuriate most YouTube users. On Flickr, the site does not claim ownership of the photos uploaded by users and instead gives them the option to post their material with ‘All rights reserved’, or to assign one of a range of licences, such as a Creative Commons licence. The popular blogging sites, such as WordPress and Blogger, similarly claim no ownership over the material you might put there, and indeed are unsurprisingly keen to assert that they have no ownership of or responsibility for communications presented on their platforms. Facebook follows a similar model, and attracted controversy when it tried to change its Terms of Service so that its licence to use material could potentially last forever (see the Wikipedia article, ‘Criticism of Facebook’). And so on. On the whole, these services want to profit from presenting people’s work, often alongside advertising, but do not seek to own the work or limit its use elsewhere. This is the deal that surely almost all users knowingly accept. In terms of user experiences, I think it’s reasonable to say that most people do not feel that they are creating content which is then stolen by a corporation, and as I’ve said, on a technical or legal level, it’s simply not the case that these services take ownership of the content. Presumably these services are so popular because people find them to be helpful platforms, on which they can share their ideas, images,
and feelings, using systems that are both easy and free. They may not be perfect, and there are limitations, and so on, but if you want a simple overview statement, you wouldn’t say that primarily it’s about people being exploited, and surrendering control and ownership of their material. I’d say it’s primarily about people now having a very straightforward, easy-to-use, and easy-to-understand way in which to share their work, their pictures and media, and their ideas with others, without surrendering control or ownership to anyone else. I’m not normally someone who wants to defend ‘industry,’ or to assert that the market has all the answers, but in this case the supposedly ‘critical’ perspectives just don’t really seem to match up with reality.
Stephen Harrington, by email: What are some of the major ethical issues surrounding ‘Web 2.0’ that have not yet been adequately addressed? If a key feature of this communication development is that, in the words of Tim O’Reilly, ‘Users Add Value’, then are these users being treated appropriately (or indeed rewarded appropriately) in return for playing this vital role?
David Gauntlett: This follows on, of course, from the previous question. Although there might be some unusual exploitative examples, I generally think that yes, surprisingly enough, Web 2.0 companies do not seem to really be ripping people off, if only because the well-connected, highly articulate communities online would quickly express their disgust and move to a different service. Being able to benefit from other people’s contributions, whether that is Wikipedia entries, or product reviews on Amazon, or blog posts or videos or whatever, is a ‘reward’ in itself that people are happy with, I think. Other ethical issues that we might be concerned about, in the online world, tend to be aspects of wider questions that already exist in the human world. For instance the valuable report by The Children’s Society, A Good Childhood (Layard & Dunn, 2009), raises concerns that social networking sites might lead to a ‘commoditization of friendship’ if young people are only bothered about how many people they can list as ‘friends’ – rather than having relationships of good quality – and an over-valuation of ‘relationships by exhibition,’ where it’s all about what you can show on your profile page. I would agree that we need to guard against such potential trends, but the answers, like the problems, lie within people rather than technology. Similarly, a system which enables the creation of online communities of goodness is – putting it very simply – also going to facilitate online communities of badness. Again, such problems can only be addressed by looking at rather age-old problems of human behaviour.
Stefan Sonvilla-Weiss, by email: Can we really speak of a participatory media culture, if only a small number of the one billion Internet users are creative producers, creating and sharing audio-visual information, while the rest are just paying attention?
David Gauntlett: At the moment, it is obviously empirically correct to say that most people are ‘viewers’ and ‘readers’ rather than active producers. And it may always be the case that people will spend more time as ‘readers’ than as ‘writers’, which would make sense, otherwise there would be far too much original material in the world and nobody would be consuming it, simply because they wouldn’t have time! At the moment we’re looking at the potential of the Internet. Who knows what it will look like in 10 or 20 years. But what we do know is that, in the past few years, we have already seen a great explosion in shared creative activity. (There’s an emphasis on ‘shared’ there, because previously, people were probably being just as creative in their lives, but they did not have a system for sharing it with many others; now, they do). It seems reasonable to assume that this is a growing phenomenon.
Education and Media Studies
Julian McDougall (JulianMcDougall) on Twitter: Does treating ‘prosumer’ creations as worthy of academic study necessarily lead to a ‘relativist’ approach to media studies?
David Gauntlett: On the one hand, it would obviously be wrong to believe that only industry-produced media is ‘proper’ media, and worthy of study. But if by ‘relativist’ you mean that we forget all quality judgements and just assume that all media is of equal quality, then I’d say no, because we can still make intelligent judgements – but they would be based on the quality of the artefact rather than who produced it. So, if media studies becomes more agnostic about whether ‘media’ is something produced by the BBC, or by Sarah in her bedroom, I’d say that’s a good thing, because that’s how media-making and media-sharing are going.
Alice Bell (alicebell) on Twitter: Chris Anderson recently suggested that doing media will become more of a hobby than a job. What do you think?
David Gauntlett: To be fair, he didn’t quite say this as a prediction. It’s worth looking at the original quote, where, answering a question about the future of journalism, Chris Anderson said: “In the past, the media was a full-time job. But maybe the media is going to be a part-time job. Maybe media won’t be a job at all, but will instead be a hobby. There is no law that says that industries have to remain at any given size. Once there were blacksmiths and there were steel workers, but things change. The question is not should journalists have jobs. The question is can people get the information they want, the way they want it? The marketplace will sort this out. If we continue to add value to
the Internet we’ll find a way to make money. But not everything we do has to make money.” (Anderson, 2009)
I think really it will be a mix of things, won’t it? There are some professionally-created media experiences which are very distinctive and which people are clearly still very happy to pay for. Think of going to see an amazing film at the cinema, or a brilliant BBC drama. I don’t think there’s any sign that we want to actually swap these things for a funny six-minute YouTube video. But it’s not a matter of one or the other. There’s no reason at all why we wouldn’t be big fans of both kinds of experience. Some things, such as professional and investigative news-gathering, documentary making, or feature films, take a lot of time and work, by large teams of people, and these may be joined by homemade versions, but it’s not necessary to assume that free homemade things will replace the glossy, professional media. Anderson’s view that everything could be free, meanwhile, has the obvious problem that someone’s always paying somewhere, and often in his examples it is advertisers. But the idea that there’s enough advertising money to go around, to support all this stuff, seems highly unlikely. I’m not an economist, but I’m sure it doesn’t add up.
Mark Squire (markcsquire) on Twitter [sent in three parts]: Is there not a danger of eLearning producing a generation of surface-skimming dilettantes? This contrasts with the sustained engagement demanded by traditional text-based learning. The appearance, texture, heft & smell of a book provide ‘handles’ through which the student latches onto the contents.
David Gauntlett: Well, I like books too, although I’m not sure that it’s logical to say that because some of us love the physicality of books, then students are necessarily drawn to them too. To answer the question, there is certainly a positive potential in the fact that students have access to a great range of sources on any subject. It compares very favourably with my experience as a student, where you got at best a handful of books from the library, whatever you could get your hands on, and you couldn’t really verify their content using other sources, and had to patch together an essay. The downside of today’s situation, of course, is that students are often not very good at finding or assessing good-quality sources, and also, yes, perhaps they don’t engage so much with single texts in depth. So this, then, is a challenge for educators: we need to help students to get better at these things. At my university, we stress a combination of reading proper theoretical texts in depth, alongside gathering relevant and intriguing material online. Getting students to read books, or longer texts in any format, is certainly the more challenging task. But to ‘blame’ the Internet for the fact that some people don’t use it with an academic level of care would not be justified, of course.
Making is Connecting
Catherine Vise, by email: Your ‘Making is Connecting’ work seems to be about a number of interesting things, like ‘everyday creativity’, Web 2.0, and social capital. It also seems to suggest a manifesto for making the world a better place. Can you give a simple summary of how this all fits together?
David Gauntlett: Making is Connecting is a book I’m writing (during 2009–2010), accompanied by a website that’s already open at www.makingisconnecting.org. The title came into being because, like other people, when discussing Web 2.0 and social media I was talking a lot about making, and about connecting – ‘making and connecting’ – as well as other words like sharing and collaboration and so on – but then it struck me that an ‘is’ in the middle summed up pretty well what I wanted to say. And that I wanted to make this discussion not just about digital media but about creativity in general. So ‘making is connecting’ because it is through the process of making that we (1) make new connections between our materials, creating new expressive things; (2) make connections with each other, by sharing what we’ve made and contributing to our relationships by sharing the meanings we’ve created, individually or in collaboration; and (3) through making things, and sharing them with others, feel a greater connection with the world, and become more engaged and active in our environment rather than sitting back and watching. So, it concerns some of the themes of Web 2.0, but it’s broader than that. In a sense it wonders whether the Wikipedia model of collaboration online, which people do not do for reward but because they think it’s a good project, can be taken as a metaphor for people doing nice collaborative stuff in everyday life. The experience of Web 2.0 – especially Wikipedia and the non-profit ‘social innovation’ projects – can shift people’s perceptions of how to go about things, I think. The people I know who are enthusiastic about Web 2.0 are also enthusiastic about real-world community projects, and it’s not likely that that’s a coincidence. So that connects with the literature on social capital – which is about the ways in which people feel connected with their communities, and whether they are motivated to make a positive difference – and indeed with the literature on happiness, and on loneliness. (See for example the Richard Layard book, Happiness: Lessons from a New Science, 2005, and the book by John T. Cacioppo and William Patrick, Loneliness: Human Nature and the Need for Social Connection, 2008.) This research shows that happiness comes from creative engagement, community, and social relationships. A sense of well-being comes from feeling that you are making a difference. In the disciplines of sociology or social policy, ‘happiness’ sounds like a rather fluffy measure, but actually, of course, people’s satisfaction with their own lives is crucially important. And so hopefully you can see how ‘making is connecting’ fits in there. Richard Layard says, ‘Prod any happy person and you will find a project’ – and he’s an economist who says this on the basis of data; it’s not a
new-agey sentiment. So as I argue in Making is Connecting, through making things, online or offline, we make connections with others and increase our engagement with the world. And this creativity can be fostered to tackle social problems and global issues. It’s kind of ambitious and optimistic, obviously.
Julie Borkin, via Facebook: How can we assess that social network connections really enhance engagement? Put differently, is this essentially a Putnam-esque argument that connections are potentially productive and therefore ‘real’ engagement?
David Gauntlett: Clearly having an online ‘connection’ in itself – such as adding a ‘friend’ on Facebook – doesn’t mean much per se. Or even finding a new person to discuss work or opinions with, via email or an online network, is not what people would usually recognise as, say, ‘civic engagement,’ which typically means something like a helpful activity in the local community, or holding a real-life political debate. So it depends what you mean by ‘engagement’. In any case, it’s obviously the case that if people are talking about a particular kind of engagement, such as participation in charity work, or with business, or political issues, or whatever, then they need to look at the impact on that specifically, and not confuse it with more superficial online links. Having said that, although social connections should not be equated with or counted as civic participation – or anything else that they are not – we should not dismiss them either. A 20-year longitudinal study recently demonstrated that having just one additional ‘happy friend’ can increase an individual’s personal happiness by nine per cent (Fowler & Christakis, 2008). If you want to process that information in government or social policy terms, happiness is highly correlated with both physical and mental health – therefore people with friends cost less to the state, in terms of health and social services.
Stefan Sonvilla-Weiss, by email: In ‘The Make and Connect Agenda’ [http://www.theory.org.uk/david/makeandconnect.htm] you suggest, amongst other things, ‘Tools for Thinking’ which strongly emphasise hands-on experiences in the creative and meaning-making process. How and why did you become attracted to Lego pieces, which appear frequently in your work in this area? Is there something unique about making things physically, which means we cannot translate this to the digital realm?
David Gauntlett: Well, it’s not all about Lego! Although I have found Lego to be an especially accessible tool. People who are just hearing about my research using Lego, who haven’t taken part in a workshop, sometimes say to me, ‘Well, I wouldn’t be able to do this,’ or ‘I wouldn’t like it,’ but my experience with many groups – women and men, all ages from teenagers to retired people, and from all backgrounds including unemployed people who left school with no qualifications
as well as rather reserved middle-class people – is that they all take to it quite happily after a couple of exercises. The point of that work, I should explain, was to get people building metaphors of their identities, in Lego. That project is covered in the book Creative Explorations (2007) and in various online presentations which you can see at www.artlab.org.uk/lego. More recently I’ve used it as a more general way of getting people to communicate ideas, often around the theme of a better society, the results of which you can see in some videos at www.theory.org.uk/video. ‘Tools for thinking,’ which you mention, could take a number of forms, of which a process using Lego would be just one. But you ask specifically about whether this is a ‘hands-on’ process that is necessarily physical, in the real world, rather than digital. That’s an interesting question for me, because on the one hand, as you know, I am very interested in digital media and especially the do-it-yourself opportunities for people to make and share things online. That kind of activity is basically people sitting at screens, clicking and typing. And at the same time I have been working with processes of self-expression, collaboration and communication which are very rich, and which have nothing to do with electronic media, and instead are based on the physical engagement with materials that you put together with your hands. These are very much related, but different. I know from my own experience that doing things digitally does not distance you from the creative process – for instance I have made and designed all my own websites, and have used ‘making a website’ as a way of thinking about an issue or subject, which pretty much exactly matches the experience in other contexts where we make something as part of the process of thinking about it. So these types of experiences can be parallel, but clearly different. I don’t have a clear answer on this yet, but in the work I’ve done with people from Lego and Massachusetts Institute of Technology (MIT) around the question of how ‘hands-on’ creativity and learning translates into the digital realm, the best answers tend to be hybrid experiences where you combine some screen-based activity with some other going-out, finding-out, experimenting kind of activity away from the screen.
Govinda Dickman, via Facebook: If making is connecting, does that mean that breaking is disconnecting? Is connexion always positive/creative; is disconnexion always negative/destructive? Culture, for instance, is a cybernetic system that “connects” agents within its network, but in doing so it also inevitably: (a) reduces the possibilities of those connections to the language of the network itself: we do not dream our own dreams, we dream the dreams of our cultures; and (b) arguably disconnects both the agents and the network that links them from their true context. Culture, which is connection, which is making, arguably alienates us from the reality of our reality, both inner (psychological) and outer (ontological).
David Gauntlett: That’s an interesting question – if connecting is seen as basically ‘good’, does that mean that disconnecting is ‘bad’? My immediate answer is no: although social connections are largely good for people, it doesn’t follow that disconnecting is a negative thing, on an individual level. In his book Solitude, for instance, psychiatrist Anthony Storr (1989) offers a powerful hymn to the creative benefits of being on your own, thinking your own thoughts. He highlights the fact that many of our most noted philosophers and writers have been fundamentally solitary beings. At a broader, more social level, however – and perhaps aside from the ‘creative geniuses’ that Storr’s account leans towards – mass disconnection would not be a good thing. The evidence shows that society benefits, very considerably, from having people who feel connected with others as individuals, and with the notion of their ‘communities’ more generally. When creativity is part of that connectedness and participation, I think that makes for an even more positive proposition, leading to greater general life satisfaction (happiness) and consequently less depression, less crime, and better physical and mental health. Govinda also asks if ‘breaking is disconnecting,’ and of course, taking things apart can be part of a very creative process, so that one is easy – breaking is fine. But we should also consider the argument in the second part of his question – the idea that connectedness means that we are removed from our ‘true context,’ and that ‘our own dreams’ are replaced by ‘the dreams of our cultures’. This seems like a significant concern, but I think it rests on a kind of notion of individual specialness which you can take too far. We exist in a social and cultural context, and this shapes everything we think and do, to some extent. This is our ‘true context’. Social and cultural context is inescapable, and so the idea of a ‘pure’ vision, untainted by culture, doesn’t really work. I’m sure we all want people to ‘think freely’ and to be imaginative rather than just trying to fit within social norms. But I don’t believe that being part of a social conversation means that one’s own creativity, or ideas, are necessarily limited. Individuals who want to dream their own dream on top of a mountain are very welcome to do so, of course, but if we’re thinking about the vitality of a community, then obviously, it relies on people having connections and inspiring each other.
Academic Public Engagement
Anthony Sternberg, by email: In an article in Times Higher Education recently you argued that academics should be communicating their research more directly with the public. What would that mean, and don’t we need the traditional machinery of academic journals and peer review to sort out which work is of good quality?
David Gauntlett: That article was responding to an Arts and Humanities Research Council report, and argued that arts and humanities researchers often need to
express more clearly why they do what they do, and should become more innovative and engaged with social and environmental issues, rather than, say, just writing rather derivative reflections on some creative cultural artefact which had been previously produced by someone else. And it made the point that these people put vast amounts of time and effort into their work, but then seem unconcerned about getting it out into the world, and are happy for it to be stuck in an academic journal where it will typically be read by a very small number of people. The Web in general, and easy-to-use Web 2.0 tools in particular, make it pretty easy for academics to disseminate their work and ideas in an accessible way to anyone who might be interested, and I think they’ve got a responsibility to do so. This is something I’m rather passionate about. The questioner asks whether we need academic journals and peer review to ensure quality. Well, what I’m suggesting is that researchers should still publish books and articles, and we can expect that they would continue to be judged on the quality of those traditional outputs, but also that they should do things such as YouTube videos, podcasts, and imaginative websites with interesting ways of presenting information. These are also likely to be judged by their peers, and other interested parties, and be rated and linked to online – which is also a form of ‘peer review’. It’s more informal, but may also be more open and responsible than the official system, where selected academics get to boost their mates, or shoot down ideas they don’t like, from behind the curtain of anonymous reviewing. In the past, there was a distribution problem for many academics who wanted to get their work out in the world, but since the Web has emphatically fixed that one, I can’t really see or understand why many academics aren’t using the full range of tools at their disposal. Some say, ‘I’m too busy writing my journal articles, I haven’t got time to do that as well,’ but that would seem to embody a reckless disregard for communicating with people.
In Conclusion
We began with an outline of Web 2.0, and then considered a number of different things which were all affected by it in some way. Creativity came up a lot, alongside collaboration, community, happiness, and also loneliness. I am very happy to be knitting together these things, alongside others who are all working on different but related strands. In the section on ethical issues, I noted that in the area of Web 2.0, the potential problems were not necessarily around the technologies themselves, or even the way in which companies were implementing them, but rather were to do with the ethics and behaviour of the human beings using them. This theme returned in the last section, on academic communication, which I think is itself an ethical issue: do researchers think it is reasonable to keep their material buried in academic journals, or are they willing to spend some of their time engaging with interested people,
and trying to communicate and discuss their work? The Web has changed the way in which we do so many things – this is just one instance. It’s always leading to new questions, as well as opportunities, so it’s a very stimulating time to be thinking about all these interconnected, interdisciplinary issues. You can find links to other work by David Gauntlett, from books to YouTube videos, at: www.theory.org.uk/david
Works Cited
Anderson, Chris (2009), quoted in Spiegel Online, ‘Chris Anderson on the Economics of “Free”: “Maybe Media Will Be a Hobby Rather than a Job”’, 28 July 2009, http://www.spiegel.de/international/zeitgeist/0,1518,638172,00.html
Cacioppo, John T., & Patrick, William (2008), Loneliness: Human Nature and the Need for Social Connection, New York: W. W. Norton.
Fowler, James H., & Christakis, Nicholas A. (2008), ‘Dynamic spread of happiness in a large social network: longitudinal analysis over 20 years in the Framingham Heart Study’, BMJ 2008;337:a2338, http://www.bmj.com/cgi/content/full/337/dec04_2/a2338
Gauntlett, David (2007), Creative Explorations: New approaches to identities and audiences, London: Routledge.
Layard, Richard (2005), Happiness: Lessons from a New Science, London: Penguin.
Layard, Richard, & Dunn, Judy (2009), A Good Childhood: Searching for Values in a Competitive Age, London: Penguin.
O’Reilly, Tim (2006a), ‘Levels of the Game: The Hierarchy of Web 2.0 Applications’, 17 July 2006, http://radar.oreilly.com/2006/07/levels-of-the-game-the-hierarc.html
O’Reilly, Tim (2006b), ‘Web 2.0 Compact Definition: Trying Again’, 10 December 2006, http://radar.oreilly.com/archives/2006/12/web-20-compact.html
Storr, Anthony (1989), Solitude, London: Flamingo.
Mobilizing the Imagination in Everyday Play: The Case of Japanese Media Mixes
The spread of digital media and communications in the lives of children and youth has raised new questions about the role of media in learning, development and cultural participation. In post-industrial societies, young people are growing up in what Henry Jenkins (2006) has dubbed “convergence culture”—an increasingly interactive and participatory media ecology where Internet communication ties together both old and new media forms. A growing recognition of this role of digital media in everyday life has been accompanied by debate as to the outcomes of participation in convergence culture. Many parents and educators worry about immersion in video gaming worlds or their children’s social lives unfolding on the Internet and through mobile communication. More optimistic voices suggest that new media enable young people to more actively participate in interpreting, personalizing, reshaping, and creating media content. Although concerns about representation are persistent, particularly concerning video game violence, many of the current hopes and fears of new media relate to new forms of social networking and participation. As young people’s online activity changes the scope of their social agency and styles of media engagement, they also encounter new challenges in cultural worlds separated from traditional structures of adult oversight and guidance. Issues of representation will continue to be salient in media old and new, but issues of participation are undergoing a fundamental set of shifts that are still only partially understood and recognized. My focus in this chapter is on outlining the contours of these shifts. How do young people mobilize the media and the imagination in everyday life? And how do new media change this dynamic? A growing body of literature at the intersection of media studies and technology studies examines the ways in which new media provide a reconfigured social and interactive toolkit for young people to mobilize media and a collective imagination. After reviewing this body of work and the debates about new media and the childhood imagination, I will outline a conceptual framework for understanding new genres of children’s media and media engagement that are emerging from convergence culture. The body of the paper applies this framework to ethnographic material on two Japanese media mixes, Yugioh and Hamtaro. Both of these cases are examples of post-Pokemon media mixes, convergence culture keyed to the specificities of children’s media. I suggest that these contemporary media mixes in children’s content exemplify three key characteristics that distinguish them from prior media ecologies: convergence of old and new media forms, authoring through personalization and remix, and hypersociality as a genre of social participation. My central argument is that these tendencies define a new media ecology keyed to a
more activist mobilization of the imagination in the everyday life of young people.
The Imagination in Everyday Life
Current issues in new media and childhood are contextualized by longstanding debates over the role of media, particularly visual media, in the imaginative lives of children. At least since television came to dominate children’s popular cultures, parents, educators, and scholars have debated the role of commercial media in children’s creativity, agency, and imagination. One thread of these debates has been concerned with the content of the imagination, examining issues such as representations of gender or violence. Another strand of the debate, which I will examine here, focuses on the form, structure, and practice of the imagination. What is the nature of childhood imagination when it takes as source material the narratives and characters of commercial culture? What are the modes of social and cultural participation that are enabled or attenuated with the rise of popular children’s media? Does engagement with particular media types relate to differences in childhood agency or creativity? Behind these questions is the theoretical problematic of how to understand the relation between the text produced by the media producer and the local contexts of uptake by young people. Framed differently, this is the question of how the imagination as produced by commercial media articulates with the imagination, agency, and creativity of diverse children going about their everyday lives. In this section, I review how this question has been taken up and suggest that theories of participation and collective imagination can resolve some of the conceptual problematics in a way amenable to an analysis of new interactive and networked media. Our contemporary understandings of media and the childhood imagination are framed by a set of cultural distinctions between an active/creative or a passive/derivative mode of engaging with imagination and fantasy. Generally, practices that involve local “production”—creative writing, drawing, and performance—are considered more creative, agentive, and imaginative than practices that involve “consumption” of professionally or mass-produced media—watching television, playing video games, or reading a book. In addition, we also tend to make a distinction between “active” and “passive” media forms. One familiar argument is that visual media, in contrast to oral and print media, stifle creativity, because they do not require imaginative and intellectual work. Until recently, young people almost exclusively “consumed” dynamic visual media (i.e. television and film), unlike in the case of textual or aural media where they are expected to also produce work. This means that visual media, particularly television, has been doubly marked as a consumptive and passive media form. These arguments for the superiority of “original” authorship and textual media track along familiar lines that demarcate high and low culture, learning and amusement. For example, Ellen Seiter (1999) analyzes the differences between a more working class and an upper middle class preschool,
and sees the distinctions between “good” and “bad” forms of media engagement as strongly inflected by class identity. The middle class setting works to shut out television-based media and media references, and values working on a computer, reading and writing text, and play that does not mobilize content derived from popular commercial culture. By contrast, the working class setting embraces a more active and informed attitude towards children’s media cultures. Scholars in media studies have challenged the cultural distinctions between active and passive media, arguing that television and popular media do provide opportunities for creative uptake and agency in local contexts of reception. Writing in the early years of digital media for children, Marsha Kinder (1991) suggested that video games and postmodern television genres provide opportunities for kids to “play with power” by piecing together narrative elements and genres rather than absorbing narratives holistically. Arguing against the view that commercial media stimulates imitation but not originality in children’s imaginings, Kinder points out the historical specificity of contemporary notions of creativity and originality. She suggests that children take up popular media in ways that were recognized as creative in other historical eras. “A child’s reworking of material from mass media can be seen as a form of parody (in the eighteenth-century sense), or as a postmodernist form of pastiche, or as a form of Bakhtinian reenvoicement mediating between imitation and creativity” (1991, 60). In a similar vein, Anne Haas Dyson (1997) examines how elementary school children mobilize mass media characters within creative writing exercises. Like Seiter, Dyson argues that commercial media provide the “common story material” for contemporary childhood, and that educators should acknowledge the mobilization of these materials as a form of literacy. “To fail to do so is to risk reinforcing societal divisions of gender and of socioeconomic class” (1997, 7). These critiques of culturally dominant views of the “passivity” of children’s visual culture are increasingly well established, at least in the cultural studies literature (for reviews, see Buckingham 2000; Jenkins 1998; Kinder 1999). Here I build on these critiques and propose frameworks for understanding the relation between media, the imagination, and everyday activity. Engagement with new media formats such as what we now find on the Internet, with post-Pokemon media mixes, and with video games suggests alternative ways of understanding the relation between children and media that do not rely on a dichotomy between media production and consumption or between active and passive media forms. These binarisms were already being corroded by reception studies in the TV-centric era, and they are increasingly on shaky ground in the contemporary period. As digital and networked media have entered the mix, the more active and participatory dimensions of media engagement have been foregrounded to the point that longstanding distinctions about children’s relations to media are being fundamentally undermined. In their analysis of Pokemon, David Buckingham and Julian Sefton-Green (2004) suggest that Pokemon has continuities with earlier media forms and trends in
children’s popular culture. But they also suggest some important new dimensions. Their analysis is worth reproducing as it prefigures my arguments in the remainder of this essay: “We take it for granted that audiences are ‘active’ (although we would agree that there is room for much more rigorous discussion about what that actually means). The key point for us is that the texts of Pokemon—or rather, the Pokemon ‘phenomenon’—positively require ‘activity.’ Activity of various kinds is not just essential for the production of meaning and pleasure; it is also the primary mechanism through which the phenomenon is sustained, and through which commercial profit is generated. It is in this sense that the notion of ‘audience’ seems quite inadequate.” In other words, new convergent media such as Pokemon require a reconfigured conceptual apparatus that takes productive and creative activity at the “consumer” level as a given rather than as an addendum or an exception. One way of reconfiguring this conceptual terrain is through theories of participation that I derive primarily from two sources. The first is situated learning theory as put forth by Jean Lave and Etienne Wenger (1991). They suggest that learning be considered an act of participation in culture and social life rather than as a process of reception or internalization. My second source of theoretical capital is Jenkins’ idea of “participatory media cultures,” which he originally used to describe fan communities in the seventies and eighties, and has recently revisited in relation to current trends in convergence culture (1992, 2006). Jenkins traces how fan practices established in the TV-dominated era have become increasingly mainstream due to the convergence between traditional and digital media. Fans not only consume professionally produced media, but also produce their own meanings and media products, continuing to disrupt the culturally dominant distinctions between production and consumption. More recently, Natalie Jeremijenko (2002) and Joe Karaganis (Forthcoming) have proposed a concept of “structures of participation” to analyze different modes of relating to digital and interactive technologies. As a nod to cultural context and normative structures of practice, I have suggested a complementary notion of “genres of participation” to describe different modes or conventions for engaging with technology and the imagination. A notion of participation, as an alternative to “consumption,” has the advantage of not assuming that the child is passive or a mere “audience” to media content. It is agnostic as to the mode of engagement, and does not invoke one end of a binary between structure and agency, text and audience. It forces attention to the more ethnographic and practice-based dimensions of media engagement (genres of participation), as well as the broader social and cultural contexts in which these activities are conducted (structures of participation). Jenkins writes, “Rather than talking about media producers and consumers occupying separate roles, we might now see them as both participants who interact with each other according to a new set of rules that none of us fully understands” (2006, 4). Putting participation at the core of the conceptual apparatus asserts that all media engagement is fundamentally
social and active, though the specificities of activity and structure are highly variable. A critically informed notion of participation can also keep in view issues of power and stratification that are central to the classical distinctions between production and consumption. The structure of participation can be one that includes the relation between a large corporation and child, as well as the relation between different children as they mobilize media content within their peer networks. Notice that in this framing, the site of interest is not only the relation between child and text—the production/consumption and encoding/decoding relations (Hall 1993) that have guided much work in reception studies—but also the social relations between different people who are connected laterally, hierarchically, and in other ways. The research question has been recast from the more individualized, “How does a child interpret or localize a text?” to the collective question of “How do people organize around and with media texts?” Let me now return to creativity and the imagination. A notion of participation leads to a conceptualization of the imagination as collectively rather than individually experienced and produced. Following Arjun Appadurai, I treat the imagination as a “collective social fact,” built on the spread of certain media technologies at particular historical junctures (Appadurai 1996, 5). Writing of an earlier era, Benedict Anderson (1991) argues that the printing press and standardized vernaculars were instrumental to the “imagined community” of the nation state. With the circulation of mass electronic media, Appadurai suggests that people have an even broader range of access to different shared imageries and narratives, whether in the form of popular music, television dramas, or cinema. Media images are now pervasive in our everyday lives, and form much of the material through which we imagine our world, relate to others, and engage in collective action, often in ways that depart from the relations and identities produced locally. More specifically, in children’s toys, Gary Cross (1997) has traced a shift in the past century from toys that mimicked real-world adult activities such as cooking, childcare, and construction, to the current dominance of toys that are based in fantasy environments such as outer space, magical lands, and cities visited by the supernatural. The current move towards convergent and digital media is one step along a much longer trajectory in the development of technologies and media that support a collective imaginative apparatus. At the same time, Appadurai posits that people are increasingly engaging with these imaginings in more agentive, mobilized, and selective ways as part of the creation of “communities of sentiment” (1996, 6-8). The rise of global communication and media networks is tied to an imagination that is more commercially driven, fantasy-based, widely shared, and central to our everyday lives at the same time as it is now becoming more amenable to local refashioning and mobilization in highly differentiated ways. Taking this longer view enables us to see much of the current debate on children and media as defined by historically specific structures of participation in media culture. Until recently these structures of participation were clearly polarized
between commercial production and everyday consumption. Yochai Benkler (2006) argues that computers and the Internet are enabling a change in modes of cultural production and distribution that disrupts the dynamics of commercial media production. He lays out a wide range of cases such as Wikipedia, open source software development, and citizen science to argue that cultural production is becoming more widely distributed and coordinated in Internet-enabled societies. While people have always produced local folk and amateur cultures, with the advent of low-cost PCs and peer-to-peer global distribution over the Internet, high-end tools for producing and sharing knowledge and culture are more widely accessible. My argument about children’s culture parallels Benkler’s. “Reception” is not only active and negotiated but is a productive act of creating a shared imagination and participating in a social world. The important question is not whether the everyday practices of children in media culture are “original” or “derivative,” “active” or “passive,” but rather the structure of the social world, the patterns of participation, and the content of the imagination that is produced through the active involvement of kids, media producers, and other social actors. This is a conceptual and attentional shift motivated by the emergent change in modes of cultural production.
Understanding New Media
Drawing from theoretical frameworks of participation and collective imagination, I would like to outline in more detail my conceptual toolkit for understanding emergent changes in children’s media ecologies, and introduce the Japanese media mixes that are my topic of study. Digital or new media have entered the conversation about childhood culture holding out the enlightened promise of transforming “passive media consumption” into “active media engagement” and learning. Ever since the early eighties, when educators began experimenting with multimedia software for children, digital media have held out the promise of more engaged and child-centered forms of learning (Ito 2003). Although multimedia did not deliver on its promise to shake the foundations of educational practice, it is hard to ignore the steady spread of interactive media forms into children’s recreational lives. Electronic gaming has taken its seat as one of the dominant entertainment forms of the 21st century, and even television and film have become more user-driven in the era of cable, DVDs, digital download, and TiVo. In addition to interactive media formats where users control characters and narrative, the Internet now adds a layer of social communication to the digital media ecology. Young people can reshape and customize commercial media, as well as exchange and discuss media in peer-to-peer networks through blogs, filesharing, social networking systems, and various messaging services. While there is generally shared recognition that new media of various kinds are resulting in a substantially altered media ecology, there is little consensus as to the broader social ramifications for the everyday lives of young people. In addressing these issues it is crucial to avoid the pitfalls of both hype and
mistrust, or as Valentine and Holloway (2001) have described it, between the “boosters” and the “debunkers.” New technologies tend to be accompanied by a set of heightened expectations, followed by a precipitous fall from grace after failing to deliver on an unrealistic billing. In the case of technologies identified with youth subcultures, the fall is often accompanied by what Stanley Cohen (1972) has famously called a “moral panic,” the script of fear and crackdown that accompanies youth experimentation with new cultural forms and technologies. While the boosters, debunkers, and the panicked may seem to be operating under completely different frames of reference, what they share is the tendency to fetishize technology as a force with its own internal logic standing outside of history, society and culture. The problem with all of these stances is that they fail to recognize that technologies are in fact embodiments, stabilizations, and concretizations of existing social structure and cultural meanings, growing out of an unfolding history as part of a necessarily altered and contested future. The promises and the pitfalls of certain technological forms are realized only through active and ongoing struggle over their creation, uptake, and revision. I consider this recognition one of the core theoretical axioms of contemporary technology studies, and it is foundational to the theoretical approach taken in this chapter. In this I draw from social studies of technology that see technology as growing out of existing social contexts as much as it is productive of new ones (e.g., Edwards 1995; Hine 2000; Lessig 1999; Miller and Slater 2000). New media produced for and engaged with by young people are a site of contestation and construction of our technological futures and imaginaries. The cases described in this chapter are examples of practices that grow out of existing media cultures and practices of play, but represent a trend toward digital, portable, and networked media forms becoming more accessible and pervasive in young people’s lives. I propose three conceptual constructs that define trends in new media form, production, and genres of participation: convergence of old and new media forms, authoring through personalization and remix, and hypersociality as a genre of participation. These constructs are efforts to locate the ethnographic present of my cases within a set of unfolding historical trajectories of sociotechnical change. These characteristics have been historically present in engagement with earlier media forms, but now the synergy between new media and the energies of young people has made these dimensions a more salient and pervasive dimension of the everyday lives of a rising generation. Let me sketch the outlines of these three constructs in turn before fleshing them out in my ethnographic cases. Contrary to what is suggested by the moniker of “new media,” contemporary media needs to be understood not as an entirely new set of media forms but rather as a convergence between more traditional media such as television, books, and film, and digital and networked media and communications. Convergent media involve the ability for consumers to select and engage with content in more mobilized ways, as well as to create lateral networks of communication and exchange at
the consumer level. Jenkins writes that convergence culture is “where old and new media intersect, where grassroots and corporate media collide, where the power of the media producer and the power of the media consumer interact in unpredictable ways” (Jenkins 2006, 2). In a related vein, I have used the term in popular currency in Japan, “media mix,” to describe how Japanese children’s media relies on a synergistic relationship between multiple media formats, particularly animation, comics, video games, and trading card games. The Japanese media mix in children’s culture highlights particular elements of convergence culture. Unlike US-origin media, which tends to be dominated by home-based media such as the home entertainment center and the PC Internet, Japanese media mixes tend to have a stronger presence in portable media formats such as Game Boys, mobile phones, trading cards and character merchandise that make the imagination manifest in diverse contexts and locations outside of the home. Although the emphases are different, both Euro-American and Japanese children’s media are exhibiting the trend towards synergy between different media types and formats. Digital and networked media provide a mechanism not to wholly supplant the structures of traditional narrative media, but rather to offer alternative ways of engaging with these produced imaginaries. In children’s media cultures, the Japanese media mix has been central to a shift towards stronger connections between new interactive and traditional narrative forms. Children engaging with a media format like Pokemon can look to the television anime for character and backstory, create their own trajectories through the content via video games and trading card play, and go to the Internet to exchange information in what Sefton-Green has described as a “knowledge industry” (2004, 151). Convergent media also have a transnational dimension, as media can circulate between like-minded groups that cross national borders. Japanese animation and media mixes are a particularly intriguing case in this respect, though the transnational dimension is not something that I will have space to address in this essay. These changing media forms are tied to the growing trend toward personalization and remix as genres of media engagement and production. Gaming, interactive media, digital authoring, Internet distribution, and networked communications enable a more customized relationship to collective imaginings as kids mobilize and remix media content to fit their local contexts of meaning. These kinds of activities certainly predate the digital age, as kids pretend to be superheroes with their friends or doodle pictures of their favorite characters on school notebooks. The difference is not the emergence of a new category of practice but rather the augmentation of these existing practices by media formats more explicitly designed to allow for user-level reshuffling and reenactment. User-level personalization and remix is a precondition, rather than a side-effect, of engaging with gaming formats and media mixes like Pokemon and Yugioh. When gaming formats are tied into the imaginary of narrative media such as television and comics, they become vehicles for manifesting these characters and narratives with greater fidelity and effect in everyday life.
While the role of the collective imagination in children’s culture probably remains as strongly rooted in commercial culture as ever, the ability to personalize, remix, and mobilize this imaginative material is substantially augmented by the inclusion of digital media into the mix. At the level of everyday practice and social exchange, the tendency towards remix and personalization of media is also tied to the growth of deep and esoteric knowledge communities around media content. I have described the kind of social exchange that accompanies the traffic in information about new media mixes like Pokemon and Yugioh as hypersocial: social exchange augmented by the social mobilization of elements of the collective imagination. Hypersociality is about peer-to-peer ecologies of cultural production and exchange (of information, objects, and money) pursued among geographically local peer groups, among dispersed populations mediated by the Internet, and through organized gatherings such as conventions and tournaments. Popular cultural referents become a shared language for young people’s conversations, activity, and social capital. This is a genre of participation in media culture that has historically strong roots in cultures of fandom, or in Japan, the media geekdoms of “otaku” (Greenfeld 1993; Kinsella 1998; Okada 1996; Tobin 2004). While otaku cultures are still considered subcultural among youth and adults, children have been at the forefront of the mainstreaming of these genres of participation. It is unremarkable for children to be deeply immersed in intense otaku-like communities of interest surrounding media such as Pokemon, Digimon, or Teenage Mutant Ninja Turtles, though there is still a social stigma attached to adult fans of science fiction or anime.
Japan’s Media Mix
Like otaku culture, the Japanese media mix is both culturally distinctive and increasingly global in its reach. A certain amount of convergence between different media types such as television, books, games, and film has been a relatively longstanding dimension of modern children’s media cultures in Japan as elsewhere. Japan-origin manga (comics), anime (animation), and game content are heterogeneous, spanning multiple media types and genres, yet still recognized as a cluster of linked cultural forms. Manga are generally (but not always) the primary texts of these media forms. They were the first component of the contemporary mix to emerge in the postwar period, in the sixties and seventies, eventually providing the characters and narratives that go on to populate games, anime, and merchandise. While electronic gaming was in a somewhat separate domain through the eighties, by the nineties it was well integrated into the overall media mix of manga and anime characters, aided by the popularity of game-origin characters such as Mario and Pikachu. These media mixes are not limited to children’s media, and include a wide range of adult-oriented material, but children’s media does dominate. Pokemon pushed the media mix equation in new directions. Rather than
being pursued serially, as in the case of manga being converted into anime, the media mix of Pokemon involved a more integrated and synergistic strategy where the same set of characters and narratives was manifest concurrently in multiple media types. Pokemon also set the precedent of locating the portable media formats of trading cards and handheld networked game play at the center rather than at the periphery of the media ecology. This had the effect of channeling media engagement into collective social settings both within and outside the home, as children looked for opportunities to link up their game devices and play with and trade their Pokemon cards. Trading cards, Game Boys, and character merchandise create what Anne Allison has called “pocket fantasies,” “digitized icons … that children carry with them wherever they go,” and “that straddle the border between phantasm and everyday life” (Allison 2004, 42). This formula was groundbreaking and a global success; Pokemon became a cultural ambassador for Japanese popular culture and related genres of participation in media culture. Many other media mixes followed in the wake of Pokemon, reproducing and refining the formulas that Nintendo had established. My research was conducted in the wake of the Pokemon phenomenon. From 1998 to 2002, I conducted fieldwork in the greater Tokyo area among children, parents, and media industrialists, at the height of Yugioh’s popularity. My research focused on Yugioh as a case study, as it was the most popular series in currency at the time. My description is drawn from interviews with these various parties implicated in Yugioh, my own engagements with the various media forms, and participant observation at sites of player activity, including weekly tournaments at card shops, trade shows, homes, and an afterschool center for elementary-aged children. Among girls, Hamtaro was the most popular children’s series at the time, so it became a secondary focus for my research. I also conducted research that was not content-specific, interviewing parents, conducting participant observation of a wide range of activities at the afterschool center, and reviewing diverse children’s media. I turn now to descriptions of Yugioh and Hamtaro at the levels of media form, authorship, and genres of participation to illustrate how these media mixes were mobilized in the everyday lives of children in Japan.
Yugioh

Like other media mixes, Yugioh relies on cross-referencing between serialized manga, a TV anime series, a card game, video games, occasional movie releases, and a plethora of character merchandise. The manga ran for 343 installments between 1996 and 2004 in the weekly magazine Shonen Jump, and the franchise is still continuing as an animated series. In 2001 the anime and card game were released in the US, and soon after in the UK and other parts of the world. The series centers on a boy, Mutoh Yugi, who is a game master and gets involved in various adventures with a small cohort of friends and rivals. The narrative focuses on long sequences of card game
duels, stitched together by an adventure narrative. Yugi and his friends engage in a card game derivative of the US-origin game Magic: The Gathering, and the series is devoted to fantastic duels that function to explicate the detailed esoterica of the game, such as strategies and rules of game play, properties of the cards, and the fine points of card collecting and trading. The height of Yugioh’s popularity in Japan was between 1999 and 2001. A 2000 survey of three hundred students in a Kyoto elementary school indicated that by the third grade, every student owned some Yugioh cards (Asahi Shinbun 2001). Compared to Pokemon, where games are only loosely tied to the narrative media by character identification, with Yugioh the gaming comprises the central content of the narrative itself. In media mixes such as Pokemon and Digimon, the trading cards are a surrogate for “actual” monsters in the fantasy world: Pokemon trainers collect monsters, not cards. In Yugioh, Yugi and his friends collect and traffic in trading cards, just like the kids in “our world.” The activities of children in our world thus closely mimic the activities and materialities of children in Yugi’s world. They collect and trade the same cards and engage in play with the same strategies, rules, and material objects. Scenes in the anime depict Yugi frequenting card shops and buying card packs, enjoying the thrill of getting a rare card, dramatizing everyday moments of media consumption in addition to the highly stylized and fantastic dramas of the duels themselves. This is similar to Beyblade, a series that followed Yugioh and involves kids collecting and battling with customized battle tops. The objects collected by the fantasy characters are the same as those collected by kids in real life. When I was conducting fieldwork, Yugioh cards were a pervasive fact of life, a fantasy world made manifest in the pockets and backpacks of millions of boys across the country. Personal authorship through collection and remix is at the center of participation with Yugioh. While many children, and most girls, orient primarily to the manga or anime series, game play and collection are the focus of both the narrative and the higher-status forms of Yugioh engagement. Players can buy a “starter pack” or “structure deck” of cards that is ready to play, but none of the children I met in my fieldwork dueled with a preconfigured deck. Players instead purchase “booster packs,” which are released in different series that roughly correspond to different points in the narrative trajectory of Yugioh. The booster packs cost ¥150 (a little over US$1) for five randomly packaged cards from the series, making them a relatively lightweight purchase that is integrated into the everyday routines of kids stopping at local mom-and-pop shops on their way home from school, or accompanying their parents to a convenience store or a bookstore—all locations which stock the most popular trading cards. The purchase of booster packs supports a collection and trading economy between players, because they quickly accumulate duplicate cards or cards that they do not want to keep or use. In duel spaces, players buy, sell, and trade cards with one another in order to build their collections and design their own playing decks of forty or more cards. Since there are several thousand
different cards on the market now, the combinations are endless. Players I spoke to had a wide range of strategies that guided their collection and deck combinations. Some players orient toward the narrative content, creating decks and collections that mimic particular manga characters or that are based on themes such as dragon cards, insect cards, or occult cards. Serious players focus on building the most competitive deck, reading up on the deck combinations that won in national and international tournaments and pitting their decks against local peers. Others with more of a collector or entrepreneurial bent prioritize cards with a high degree of rarity. All cards have a rarity index that is closely tracked by Internet sites and hobby shops that buy and sell post-market single cards. While most children I played with or spoke to did not have easy access to the Internet sites which are the clearinghouses for most esoteric collection knowledge—card lists, price lists, and rarity indexes—they were able to acquire this knowledge by visiting hobby shops or through a network of peers that might include older children or young adults. Even young children would proudly show me their collections and discuss their favorite cards, which reflected their personal taste and style. When I walked into the afterschool center with a stack of cards, I was quickly surrounded by groups of boys who riffled through my deck, asking questions about which cards were my own favorites and engaging in ongoing commentary about the coolness and desirability of particular cards. While there is a great deal of reenactment and
mimicking of existing narrative content in the practices of card collection and play, the subject positions enabled by the game are highly differentiated and variable. The series sports thousands of cards and dozens of duelist characters that Yugi has encountered in his many years on the air. The relation between the subjectivities of players and the commercially produced narrative apparatus of Yugioh is indicative of the mode of authorship of remix and personalization that I have been working to describe in this essay. Players draw from a massive, collectively shared imagination as source material for producing local identities and performances. The practices of card collection and deck construction are closely tied to the modes of participation and sociability of Yugioh play. The structure of the media mix is built on the premise that play and exchange will happen in a group social setting rather than as an isolated instance of a child reading, watching, or playing with a game machine. It is nearly impossible to learn the card game’s rules and strategy without the coaching of more experienced players. My research assistants and I spent several weeks with the Yugioh starter pack, poring over the rule book and the instructional videotape and trying to figure out how to play. It was only after several game sessions with a group of fourth graders, followed by some coaching by some of the more patient adults at the card shops, that we slowly began to understand the basic game play as well as some of the fine points of collection: how cards are acquired, valued, and traded. Among children, this learning process is part of their everyday peer relations, as they congregate after school in homes and parks, showing off their cards, hooking up their Game Boys to play against one another, trading cards and information. We found that kids generally develop certain conventions of play among their local peer group, negotiating rules locally, often on a duel-by-duel basis. They collectively monitor the weekly manga release in Shonen Jump magazine, often sharing copies between friends. In addition to the weekly manga, the magazine also features information about upcoming card releases, tournaments, and tournament results. The issues featuring the winning decks of tournament duelists are often the most avidly studied. When kids get together with their collections of Yugioh cards, there is a constant buzz of information exchange and deal-cutting, as kids debate the merits of different cards and seek to augment both their play decks and their broader card collections. This buzz of hypersocial exchange is the lifeblood of the Yugioh status economy, and it is what fuels the social jockeying for knowledge, position, and standing within the local peer network of play.
Hamtaro

In contrast to boys, whose status economy often revolves around skill in competitive play, girls treat competition as less central to their social lives. They tend to engage in a wide range of media and play that differs depending on their particular playmate. The girls I spoke to preferred the more subtly competitive exchange of stickers
to develop their connoisseurship and cement their friendship circles, and they did not participate as avidly in the hypersocial buzz of card trading. When Yugioh tournaments were held at the afterschool center where I observed, a handful of girls might participate, but they tended to watch from the sidelines even though they likely had their own stash of cards. None of this is news to people who have looked at the gendered dimensions of play. Although Pokemon crosses gender lines because of its cute characters, the same is not true for most Japanese media mixes. Overall, boys’ content is culturally dominant. It sets the trends in media mixing that girls’ content follows. But girls’ content is following. The trend is slower, but since the late nineties most popular girls’ content has found its way to the Game Boy, though not to platforms like Nintendo consoles or the PlayStation. Otaku-like forms of character development and multi-year, multiply threaded narrative arcs are also becoming more common in series oriented towards girls. There is yet to be a popular trading card game based on girls’ content, but there are many collectible cards with content oriented to girls. The gender dynamics of the media mix is a complex topic that deserves more careful treatment than I can provide here. To give one example of how the dynamics of new media mixes are making their way into girls’ content, I will describe the case of Tottoko Hamutarou (or Hamtaro, as it is known in English), the series that was most popular among girls during the period of my fieldwork. Hamtaro is an intrepid hamster owned by a little girl. The story originated in picture book form in the late nineties and became an animated series in 2000. This year, the anime series will pass the 300-episode mark. After being released as a television anime, Hamtaro attracted a wide following, quickly becoming the most popular licensed character for girls. It was released in the US, UK, and other parts of the world in 2002. Hamtaro is an interesting case because it is clearly coded as girls’ content, and the human protagonist is a girl. But the central character, Hamtaro, is a boy. The series has attracted a fairly wide following among boys as well as girls, though it was dwarfed by Yugioh in the boys’ market during the time that I was conducting my fieldwork. The story makes use of a formula developed by Pokemon: a proliferating set of characters that creates esoteric knowledge and domains of expertise. While not nearly as extensive as the Pokemon pantheon or the Yugioh cards, Hamtaro is part of a group of about twenty hamster friends, each of which has a distinct personality and life situation. To date the series has introduced over 50 different quirky hamster characters, along with complex narratives of different relationships, compatibilities, antagonisms, and rivalries. The formula is quite different from the classic one for girls’ manga or anime, which has tended to have shorter runs and a tight focus on a small band of characters including the heroine, friend, love interest, and rival. Instead, Hamtaro is a curious blend of multi-year soap opera and media mix esoterica, blending the girly focus on friendship and romance with an otaku-like attention to detail and a character-based knowledge industry. In addition to the narrative and character development that follows some of the formulas established by Pokemon, the series also exhibits the convergent
characteristics of the contemporary media mix. Hamtaro’s commercial success hinges on an incredibly wide array of licensed products that make him an intimate presence in girls’ lives even when he is not on the screen. These products include board games, clothing, curry packages, and corn soup, in addition to the usual battery of pencils, stationery, stickers, toys, and stuffed animals. Another important element of the Hamtaro media mix is Game Boy games. Five have been released so far. The first (never released overseas), Tomodachi Daisakusen Dechu (The Great Friendship Plan), was heavily promoted on television. Unlike most game commercials, which focus on the content of the game, the spot featured two girls sitting on their bed with their Game Boys, discussing the game. The content of the game blends the traditionally girly material of relationships and fortune telling with certain formulas around collection and exchange developed in the boys’ media mix. Girls collect data on their friends and input their birthdays. The game then generates a match with a particular hamster character and predicts certain personality traits from it. The game also allows players to predict whether different people will get along as friends or as couples. Girls can also exchange data between Game Boy cartridges. The game builds on a model of collection and exchange that has been established in the industry since Pokemon, but applies it to a less overtly competitive, girl-oriented exchange system. In Japan, Hamtaro even has a trading card game associated with it, though it pales in scope and complexity compared to those of Yugioh and Pokemon. When I spoke to girls about Hamtaro they delighted in telling me about the different characters, which was the cutest or sweetest, and which was their favorite. At the afterschool center, I often asked girls to draw pictures of media characters for me, one of many activities that the girls favored. Hamtaro characters were by far the most popular, followed by Pokemon. In each case, girls developed special drawing expertise and would proudly tell me how they were particularly good at
drawing a particular hamster or Pokemon. The authorship involved in these creations does not involve the same investments as those of card players and collectors, yet there are still dimensions of personalization and remix. The large stable of characters and the complex relational dynamics of the series encourage girls to form personalized identifications with particular hamsters, manifest in a sense of taste and connoisseurship of which drawing is just one expression. Girls develop investments in certain characters and relational combinations. If they mature into a more otaku-like form of media engagement, these same girls will bring this sensibility to bear on series that feature human characters and more adult narratives of romance, betrayal, and friendship. The doujinshi (amateur comic) scene in Japan was popularized by young women depicting alternative relational scenarios and backstories to popular manga. Elsewhere I have discussed in more detail the role of doujinshi in popular youth cultures. Here I simply note that Hamtaro engagements include an echo of the more hypersocial practices of remix, personalization, and connoisseurship that are more clearly manifest in boys’ popular cultures and practices.
Conclusions

The cases of Yugioh and Hamtaro are examples of how broader trends in children’s engagement with new media are manifest in Japan-origin media and media cultures that are becoming more and more global in their reach. Part of the international spread of Japanese media mixes is tied to the growth of more participatory forms of children’s media cultures around the world. At the same time, different national contexts have certain areas of specialization. For example, where Japan has led in the particular media mix formula I have described, the US media ecology remains dominant in film and Internet-based publication and communication. The conceptual categories—convergence in media form, personalization and remix in authorship, and hypersociality as a genre of engagement—were developed based on my ethnographic work with Japanese media mixes, but I believe they also apply to the media and practices of young people in other parts of the post-industrial world. For example, Sonia Livingstone (2002, 108-116) describes trends in the UK towards “individualized lifestyles” tied to a diversification of media and forms of lifestyle expression among young people. In the US, Jenkins (2006) describes the highly activist cultures of fandoms expanding on the Internet. While a comparative look at these forms of participation is beyond the scope of this chapter, there are certainly indications of growing transnational linkages and resonances. If, as I have suggested, young people’s media cultures are moving towards more mobilized and differentiated modes of participation with an increasingly global collective imagination, then we need to revisit our frameworks for understanding the role of the imagination in everyday life. Assessed by more well-established standards of creativity, the forms of authorship and performance I have described would be deemed derivative and appropriative rather than truly original. It is also crucial
that we keep in view the political-economic implications of having young people’s personal identities and social lives so attuned to and dependent on a commercial apparatus of imaginative production. At the same time, we need to take seriously the fact that cultural forms like Yugioh and Hamtaro have become the coin of the realm for the childhood imagination, and recognize them as important sources of knowledge, connoisseurship, and cultural capital. Even as we look for ways of guiding these activities towards more broadly generative forms of authorship, we need to acknowledge Yugioh play as a source of creativity, joy, and self-actualization that often crosses traditional divides of status and class. Further, we need to reevaluate what authorship means in an era increasingly characterized by remix and recombination as modes of cultural production. Elsewhere I have written about the activities of older fans who compete in official tournaments and engage in the craftwork of producing amateur comics, costumes, and fiction based on anime narratives. While I do not have space to discuss these activities here, it is important to note that remix and alternative production extend into higher-end production practices well beyond card collection, doodling, and everyday game play. Now, as ever, individuals produce new cultural material with shared cultural referents. The difference is the centrality of commercially produced source material and, more recently, the ability to easily recombine and exchange these materials locally and through peer-to-peer networks. For better and for worse, popular media mixes have become an integral part of our common culture, and visual media referents are a central part of the language with which young people communicate and express themselves. It may seem ironic to suggest that these practices in convergence culture have resulted in a higher overall “production value” in what young people can say and produce on their own. Our usual lenses would insist that engagements with Yugioh or Hamtaro not only rely on cheap and debased cultural forms, but that they are highly derivative and unoriginal. What I have suggested in this essay, however, is that we broaden the lens through which we view these activities to one that keeps in view the social and collective outcomes of participation. While I am not suggesting that content is irrelevant to how we assess these emergent practices, I do believe that it is just one of many rubrics through which we can evaluate the role of new media in children’s lives. Acknowledging these participatory media cultures as creative and imaginative does not mean foreclosing critical intervention in these practices or abdicating our role as adult guides and mentors. The dominance of commercial interests in this space means that it is crucial for adults with other kinds of agendas to actively engage rather than write off these practices as trivial and purely consumptive. Many efforts in media literacy and youth media are exemplary in this respect, but I believe there is much more work to be done to make these recognitions take hold more broadly. Unless parents and educators share a basic understanding of the energies and motivations that young people are bringing to their recreational and social lives, these new media forms will produce an unfortunate generational gap.
Resisting the temptation to fall into moral panic, technological determinism, and easy distinctions between good and bad media is one step. Gaining an understanding of practice and participation from the point of view of young people is another. From this foundation of respectful understanding we might be able to produce a collective imagination that ties young people’s practices into intergenerational structures and genres of participation in convergence culture.
Works Cited

Allison, Anne. 2004. “Cuteness and Japan’s Millennial Product.” Pp. 34-52 in Pikachu’s Global Adventure: The Rise and Fall of Pokémon, edited by J. Tobin. Durham: Duke University Press.
Anderson, Benedict. 1991. Imagined Communities. New York: Verso.
Appadurai, Arjun. 1996. Modernity at Large: Cultural Dimensions of Globalization. Minneapolis: University of Minnesota Press.
Asahi Shinbun. 2001. “Otousan datte Hamaru.” P. 24 in Asahi Shinbun. Tokyo.
Benkler, Yochai. 2006. The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven: Yale University Press.
Buckingham, David. 2000. After the Death of Childhood: Growing up in the Age of Electronic Media. Cambridge: Polity.
Buckingham, David and Julian Sefton-Green. 2004. “Structure, Agency, and Pedagogy in Children’s Media Culture.” Pp. 12-33 in Pikachu’s Global Adventure: The Rise and Fall of Pokémon, edited by J. Tobin. Durham: Duke University Press.
Cohen, Stanley. 1972. Folk Devils and Moral Panics. London: MacGibbon and Kee.
Cross, Gary. 1997. Kids’ Stuff: Toys and the Changing World of American Childhood. Cambridge: Harvard University Press.
Dyson, Anne Haas. 1997. Writing Superheroes: Contemporary Childhood, Popular Culture, and Classroom Literacy. New York: Teachers College Press.
Edwards, Paul. 1995. “From ‘Impact’ to Social Process: Computers in Society and Culture.” Pp. 257-285 in Handbook of Science and Technology Studies, edited by S. Jasanoff, G. E. Markle, J. C. Petersen, and T. Pinch. Thousand Oaks: Sage.
Greenfeld, Karl Taro. 1993. “The Incredibly Strange Mutant Creatures who Rule the Universe of Alienated Japanese Zombie Computer Nerds.” Wired.
Hall, Stuart. 1993. “Encoding, Decoding.” Pp. 90-103 in The Cultural Studies Reader, edited by S. During. New York: Routledge.
Hine, Christine. 2000. Virtual Ethnography. London: Sage.
Holloway, Sarah and Gill Valentine. 2001. Cyberkids: Children in the Information Age. New York: Routledge.
Ito, Mizuko. 2003. “Engineering Play: Children’s Software and the Productions of Everyday Life.” PhD dissertation, Department of Anthropology, Stanford University, Stanford.
—. Forthcoming. “Japanese Media Mixes and Amateur Cultural Exchange.” In Digital Generations, edited by D. Buckingham and R. Willett. Lawrence Erlbaum.
Jenkins, Henry. 1992. Textual Poachers: Television Fans and Participatory Culture. New York: Routledge.
—. 1998. “Introduction: Childhood Innocence and Other Modern Myths.” Pp. 1-37 in The Children’s Culture Reader, edited by H. Jenkins. New York: NYU Press.
—. 2006. Convergence Culture: Where Old and New Media Collide. New York: New York University Press.
Jeremijenko, Natalie. 2002. “What’s New in New Media.” In Meta Mute: Culture and Politics After the Net.
Karaganis, Joe. Forthcoming. “Introduction.” In Structures of Participation in Digital Culture, edited by J. Karaganis.
Kinder, Marsha. 1991. Playing with Power in Movies, Television, and Video Games. Berkeley: University of California Press.
—. 1999. “Kids’ Media Culture: An Introduction.” Pp. 1-12 in Kids’ Media Culture, edited by M. Kinder. Durham: Duke University Press.
Kinsella, Sharon. 1998. “Japanese Subculture in the 1980s: Otaku and the Amateur Manga Movement.” Journal of Japanese Studies 24:289-316.
Lave, Jean and Etienne Wenger. 1991. Situated Learning: Legitimate Peripheral Participation. New York: Cambridge University Press.
Lessig, Lawrence. 1999. Code and Other Laws of Cyberspace. New York: Basic Books.
Livingstone, Sonia. 2002. Young People and New Media. London: Sage Publications.
Miller, Daniel and Don Slater. 2000. The Internet: An Ethnographic Approach. New York: Berg.
Okada, Toshio. 1996. Otakugaku Nyuumon (Introduction to Otakuology). Tokyo: Ota Shuppan.
Sefton-Green, Julian. 2004. “Initiation Rites: A Small Boy in a Poké-World.” Pp. 141-164 in Pikachu’s Global Adventure: The Rise and Fall of Pokémon, edited by J. Tobin. Durham: Duke University Press.
Seiter, Ellen. 1999. “Power Rangers at Preschool: Negotiating Media in Child Care Settings.” Pp. 239-262 in Kids’ Media Culture, edited by M. Kinder. Durham: Duke University Press.
Tobin, Samuel. 2004. “Masculinity, Maturity, and the End of Pokémon.” Pp. 241-256 in Pikachu’s Global Adventure: The Rise and Fall of Pokémon, edited by J. Tobin. Durham: Duke University Press.
Multiculturalism, Appropriation, and the New Media Literacies: Remixing Moby Dick
Project New Media Literacies emerged as part of an initiative of the MacArthur Foundation to better understand the kinds of learning (formal and informal) that take place as young people engage in an ever-expanding array of cultural practices that have emerged from the affordances of digital media. In a white paper I wrote for the foundation, we identified a range of skills which we saw as fundamental for meaningful participation in this new participatory culture. A key skill for us was appropriation, which referred to the ability to meaningfully sample and remix the contents of our culture for new expressive purposes. Project New Media Literacies sought to translate the core findings of this white paper into the prototyping and deployment of new curricular materials. What follows are excerpts from a Teacher’s Strategy Guide we wrote about “Reading in a Participatory Culture,” which took appropriation and remixing as one of its central themes. In this project, we sought ways to rethink how canonical literature is taught in schools, using as our core text Herman Melville’s Moby-Dick, and a contemporary reworking of that narrative, Ricardo Pitts-Wiley’s “Moby-Dick: Then and Now.” Our choice was inspired by Pitts-Wiley’s remarkable story of how he developed this play through collaboration with incarcerated youth who were encouraged to reread and rethink Melville’s whaling story in the contemporary context of the drug trade. This African-American artist’s passion for Melville, his pedagogical approach, and his ethical commitments informed our project at every level. Through our collaboration with him, we were able to develop new materials—classroom activities, online videos, and critical essays—designed to empower classroom teachers to allow contemporary “remix” culture to inform how they teach literary classics. In this article, I have gathered together some segments from those materials which speak especially to issues around appropriation and transformation, including transcripts from our interviews with Pitts-Wiley as well as passages from our “expert voices” sections, which are intended as background reading for the instructor. Our hope is that these excerpts will give a stronger sense of how we were able to develop a pedagogy around the principle of learning through remixing. The “expert voices” section was intended to model four different ways of reading the novel: as a creative artist (as modeled by Pitts-Wiley), as a performer (as modeled by young cast member Rudy), as a literary scholar (as modeled by Wyn Kelley), and as a media scholar (as modeled by Henry Jenkins). The “expert voices” document was intended to accompany documentary films produced by Deb Lui and curriculum developed by Jenna McWilliams. This first passage comes from the general introduction to the guide written by Henry
Jenkins and Wyn Kelley and is intended to convey our perception of what literacy means in a participatory culture.
We take as fundamental the value of broadening and deepening our ideas about literacy by investigating its history, its varying meanings and functions, its different media, forms, and signs. An awareness of the many and surprising paths to literacy can be liberating for students, freeing them to find creative ways to learn and to express what they know. This guide is designed to help students and teachers reflect on what it means to read in a participatory culture. A participatory culture is one where there are relatively low barriers to artistic expression and civic engagement, where there is strong support for creating and sharing one’s creations with others, where there is some form of informal mentorship whereby what is known by experienced community members is passed along to novices, where each member believes their contributions matter, and where they feel some degree of social connection to each other. This description captures what is striking about a world where more and more people have the capacity to take media into their own hands and use powerful new tools to express themselves and to circulate their ideas to a public beyond their immediate friends and families. In such a world, the borders between reader and writer, consumer and producer, are starting to blur. Young people are at the heart of these changes. Fan fiction writers use existing media texts—including novels like the Harry Potter books—as springboards for creative explorations, writing short stories or full-length novels which extend beyond the narrative or refocalize the story around secondary characters. Bloggers absorb and respond to ideas in circulation around them, claiming for themselves the right to participate actively in the central conversations of their culture. Young people on online forums are engaging in close reading practices directed towards popular music or cult television shows, sometimes engaging in prolonged and impassioned debate about what such works mean and how they convey their meanings. Young people are recording their impressions, including their reflections on what they read, through LiveJournals and social network profiles, again turning the act of reading into the first step in a process of cultural participation. So, what does it mean to teach the canonical works at a time when so many young people feel empowered to become authors and to “broadcast” themselves to the world, as YouTube urges its contributors? One implication is certainly that we should focus greater attention on what it means to be an author, what it means to be a reader, how the two processes are bound up together, and how authors exist in dialogue with both those who come before and those who follow them. In this context, young people learn how to read in order to know how to create; the works they consume are resources for their own expressive lives. They seek to internalize meanings in order to transform, repurpose, and recirculate them, often in surprising new contexts. This may be one of the core insights we take from Ricardo Pitts-Wiley’s
Moby-Dick: Then and Now project—nothing motivates readers like the prospect of becoming the author or performer of one’s own new text. In this context, literacy is no longer read as a set of personal skills; rather, we understand the new media literacies as a set of social skills and cultural competencies, vitally connected to our increasingly public lives online and to the social networks through which we operate. The production of Moby-Dick: Then and Now was, as we will see, a deeply collaborative process fed by many different forms of expertise and skills, one where many creative minds worked together to achieve a shared goal. Just as authors are increasingly seen as sampling and remixing earlier works in the same tradition, creative expression, critical engagement, and intellectual argument are understood as part of an exchange that involves multiple minds, and as such, developing literacy is about learning how to read, think, critique, and create together. This guide proceeds with a deep respect for traditional forms of literacy: without the ability to do close reading or to express one’s ideas through written language, none of the other forms of participation we are describing here would be possible. But we also proceed with the understanding that a new cultural context shifts our understanding of the nature of literacy, as it has so many times in the past, and forces us to acquire new skills that were not necessarily part of the curriculum a few decades ago. In this section, transcribed and edited from our interviews with Pitts-Wiley, the playwright, director, and educator describes the process by which he produced his script for Moby-Dick: Then and Now.
Q: Moby-Dick: Then and Now emerged from the work you were doing with incarcerated youth through the Rhode Island Training Facility. How did that come about?

Mr. Pitts-Wiley: I had already made a decision that I was going to do some type of adaptation of Moby-Dick. At that time I was in my mental process of trying to figure out how I was going to approach the novel. I knew going in that I thought a modern metaphor for Moby-Dick was the international cocaine cartel. From Melville’s novel, I had the color white, for sure. So whatever my parallel thing was going to be, white had to be a big part of it. But, also, Moby-Dick, a white whale, a white sperm whale, is a natural anomaly. It’s not natural. It’s an oddity. Even the idea of cocaine is just odd. Before it gets to be white powder, it’s brown dust. There’s a process, an artificial process that makes it white. It is a product of nature, nonetheless. I had certain things that I wanted to be constant. In Melville’s novel, when he talks about the societies of whales and the leaders, the herd leaders, and the harems and the families and the structure of the whaling community, the same had to be true for the community of the antagonists of my story, also. The community of the drug industry is tight-knit; very often it is a very well-organized community. It involves every aspect of the ocean of business and life. Everything is
involved. So whatever I was going to deal with had to be that complete in its scope. The whale never defines itself. Ahab defines the whale. Ahab tells the reader, “The whale is malice inscrutable.” Malice inscrutable, my God. Cocaine was really my first idea, but I set out to make it fall apart, to try to say, if it’s not going to work I’m going to find out early…. Every step of the way, my original idea was supported in some way. If I had found a roadblock where it wasn’t supported, I would have changed it. But it was all there. So I had a whale of my own, in a sense. I can’t say the kids’ names. Not only is it illegal, it’s not something I would do anyway. But they were all boys, all young men, in for a variety of offenses, many of them drug-related, some of them violence, some violence against women, which is just—that’s a hard one for me…. In those situations you never ask the kids why they’re in there because they’re there and it doesn’t really matter. They will tell you, but I never ask them. They tell you in subtle ways, sometimes, and sometimes very specifically. The stronger your relationship with them, the more they want to tell you their story…. I had one kid who had read Moby-Dick before. He was so enthusiastic, he helped me bring the rest of the kids in. When you’re incarcerated like that, you have time on your hands and you want something that you can get excited about. It took a little while, but eventually they did get excited about Moby-Dick and they found a way that they could tell their own stories via Moby-Dick. That was important to them. As their imaginations exploded around Moby-Dick, I think mine did, too. It forced me to look at the novel very differently and appreciate it and love it as much as I loved them. …My message to them was don’t follow a bad leader and don’t go back. When you get out of here, never come back. There’s an economics to their lives that Moby-Dick talks about a bit, too, because it’s all about the money. When Ahab wants to get the crew truly motivated, when he’s rallied them and he doesn’t want to leave anything to doubt, he nails the gold doubloon to the mast. It’s about the money…. The crew of the Pequod was essentially made up of disposable people; and the drug industry depends on a certain hierarchy and lower-archy, but the foundation of the pyramid of the drug industry is a massive disposable population, people who don’t matter…. Working with the young people where drugs and violence have been a part of all of their lives, we were able to approach the material not from the outside looking in, but from the inside out. They could honestly say, “I’ve been involved,” or, “I lost friends.” But, also, I had an opportunity—and this was probably the best part of the experience for me—as a teacher to release their imaginations. Boy oh boy, no matter how much I write I’ll never be able to fully capture the degree to which their imaginations were released, and they released me, too, to say you don’t have to play by the ABC game. You don’t have to go by the numbers. You can rethink these characters and it’s okay, and you can honor them and rethink them at the same
time. When we started the writing process, I started by saying, “Pick a character and write a story about the character.” They all chose their favorite character in the novel and wrote a story about just their character. One of the young men chose Ahab—it was a great story, too! Ahab was at home. He had just come back from a very successful voyage of drug dealing for WhiteThing, his boss. It was so successful that he worried that he was now a threat to the great omnipotent WhiteThing. He was making the decision that it was time for him to either challenge the boss for control or get out of the business. He’s home, he’s got this young wife, she’s pregnant, and the drug lord sends agents looking for him. In looking for him, they kill his wife and unborn child. They don’t get him. His revenge is based on what they did to him. Another one chose Elijah, the prophet, and the awful dilemma of being able to see the future and no one believing or understanding what you’re trying to tell them. “I’m going to warn you about this, but if you don’t heed my warning this is what’s going to happen,” and the awful dilemma that you face. His story was about 9/11. “I’m trying to tell you this is going to happen,” and then nobody listened, and how awful he felt that he knew and couldn’t stop it. Another one chose Stubb, who is kind of cantankerous. He started his story, “I’m Stubb, linebacker, middle linebacker.” That just was so right. I mean, you take a character and you sum it up just like that. He’s playing a football game. His girlfriend, a cheerleader, gunned down on the sideline, drive-by. Another one chose Queequeg and he made him a pimp. Wow, why a pimp? He says, “Well, when we meet Queequeg he’s selling human heads, shrunken heads,” so he’s a peddler in human flesh. He’s exotic. He’s tall. He’s good-looking, and fiercely loyal and dangerous. That’s a pimp. Another kid chose Ishmael. He started off by saying, “Ishmael was a Navy SEAL who was so high strung they kicked him out of the Navy.” If you know anything about Navy SEALs, I don’t know how it’s possible to be too high strung, but he was. Then you go back and you see he read that first chapter where Ishmael is saying, “I feel like I’m following behind funeral processions. I feel like I need to get into a fight with somebody. I better get out of here and go handle my own anxiety before I either commit suicide or lay a whole community of people to waste because I’m mad. Time to get out. Time to go to sea. I’ll get away.” It’s a brilliant description: he was a Navy SEAL who was too high strung so they kicked him out. That’s exactly what Ishmael is. If you go back to Ishmael in the Bible, the discarded son, the one who got nothing, it makes a lot of sense. Those are just examples. They were extreme, but at the same time the more extreme they got, the closer they got back to the root of the characters. And they met at the Spouter’s Inn. Ultimately all these characters met at the Spouter’s Inn and they rallied around Ahab who had been wronged and they knew it. In his story Pip was a soul singer, an entertainer, and they all came. He was there, but everybody thought Pip was crazy, but they took him on the voyage because they needed
levity and entertainment even though they recognized that there was a message in his music, so to speak. These young men liberated my thinking. Through their eyes I was able to see Ahab and his crew as ultra-human beings who were aware of every moment of their lives. I was able to connect with the world that many of my students came from. Theirs was a world that was full of life, color and excitement. That world was also violent, remorseless and devoid of discipline. The kids kept giving me a sense that ultimately Moby-Dick’s about revenge and why we seek revenge. With that kind of insight from the young men I was able to go in and fashion a story. In my actual writing of the play, I didn’t use any of the stories because they’re really theirs. In some ways even when we cobbled their stories together to make our final presentation, it was a very different kind of story. Their story was far more contemporary, far more violent, far less redemptive in many ways, and undoubtedly more true than Moby-Dick: Then and Now. But if you kept looking inside their stories, they were all saying, “Don’t ignore me. I can’t be forgotten. Listen to me. I do think. I do care about some things,” or in some cases, “I have been so stripped of my own humanity. I care about nothing.” There was murder in every single one of the stories. There was violence of the highest order. But they were always able to, in some way, justify it. Ultimately, a gift that they gave me was knowing that my young crew, when I dealt with a contemporary story, were not blameless. I felt no obligation to make my young crew some heroes of young culture that we want to cheer for. Yes, we want to cheer for them on some level because they’re young. But Ahab was in the whaling business and they were in the drug business, and there was an acceptance, one that Ahab should have had, that if you’re in the whaling business there are occupational risks. The kids very clearly said, “We’re in the drug business. There are occupational risks involved.” … They understood Melville’s text on a visceral level probably as well as or better than any group that I encountered, including the Melville scholars. There were certain times, when they were dealing with the literary metaphor, that they would say, “No, the real deal is this. We don’t have to coat this or surround this with academia and learned insight. Here’s the bottom line.” That was another way they liberated me. Sometimes I would have to write, “This is the way it is, period. We don’t have to sugarcoat it.” The more you cull down Melville—in order to get to the action of the play in the writing process you had to cull and strip away—you strip away everything that’s not really important. When you get down to what’s really important it’s pretty simple: we want to exercise our humanity in some capacity and our humanness. Our humanity and our humanness are very different things. Sometimes Melville wrote about the physical nature of it. You have to use your eyes, your ears, your nose, your hands, every sinew of your body. That was as important to these men as making money. In fact, it was probably more important because they didn’t make that much money. But they got to use every aspect of their humanness.
In this next section, media scholar Henry Jenkins offers some insights into his motives for participating in this project and how a cultural studies perspective might inform the teaching of canonical literature.
A cartoon captures some of the contradictions surrounding the relations between Media Studies and the ways that Literature is most often taught in American high schools: A glowering teacher confronts a student, who is holding a paper which displays his poor grade on an assignment: “The tip-off was your reference to Gregory Peck’s obsession with the Great White Whale.” I could have been that student standing there. I have to confess that when I was assigned Moby-Dick in my high school English class, I never got past “The Whiteness of the Whale.” It wasn’t because I wasn’t interested in the story. By that point, I had read a children’s illustrated edition, had devoured the Classics Illustrated comics version, and had seen the Gregory Peck film several times. I loved Moby-Dick, though my teacher, Mrs. Hopkins, did little to capitalize on our existing familiarity with the story through other media. Like the teacher in the cartoon, Mrs. Hopkins viewed those other media versions with suspicion—consuming them was cheating, trying to get away without reading the book, and little else. Needless to say, we didn’t see eye to eye. Somehow reading Moby-Dick was very different from simply experiencing the story. Melville kept interrupting the adventure story elements that I knew from the other versions with ponderous meditations, encyclopedic discussions of whales and whaling, sermons, bits of theatrical scripts, and detailed character descriptions. I had no clue what to do with this other material and my teacher didn’t provide much help. She went on and on about what a great novel Moby-Dick is, but she never really told us why we were reading it or how we were supposed to make sense of all the numerous digressions. We were assigned the task of writing a paper on Biblical allusions in the novel, but I hadn’t really been taught what an allusion was or what one might say about them. I never finished reading the novel. I wrote what was probably the worst essay I ever wrote in high school; I got a bad grade on it; and I decided I never wanted to read that stupid book again! All of that changed when I met Ricardo Pitts-Wiley and listened to him talk about his plans to stage a contemporary version of Moby-Dick, involving youth actors, set against the drug trade. For one thing, Ricardo is a hard guy to say no to. He speaks with a deep booming voice which carries an enormous amount of passion and conviction. For another, his vision of getting young people to read and rewrite Moby-Dick was very much in line with my own strongly held belief that in order to reach the current generation, for whom mashup videos and resampled music have become defining aesthetic experiences, we need to help them learn through remixing. So many MIT students through the years have told me that they learned about technology by taking things apart, tinkering with them, putting them back together, and trying to figure out what makes them work. And I’ve found myself
wondering how we can carry some of those same experiences into the humanities classroom. As a media scholar, I’ve been studying how readers read and what fans do with their favorite programs for more than twenty years. For much of that time, I have been researching the phenomenon of fan fiction. Fans dig deep into their favorite television programs, films, and novels, draw out interesting elements, and elaborate upon them through original stories and novels. I had watched a growing number of young people get into the fan writing world by the time they reached middle or high school. I’ve spoken with 14- and 15-year-olds who have, for example, written full-length Harry Potter novels which they post to the Web and which get hundreds of comments from readers around the world. I’ve argued that writing fan fiction represents a particularly valuable form of criticism, one which breaks all of the rules I was taught in school—getting inside the heads of the characters rather than the author, speculating beyond the ending rather than taking the text as given, asserting one’s own fantasies and interests rather than working to recover hidden meanings. Yet, it was also a form that led to new insights and discoveries, that got young writers doing close readings and debating interpretations, mobilizing passages from the text in the process. It was a form of criticism that saw the original work as the starting point for a conversation, one which, as Mrs. Hopkins might have put it, saw the original story as a “living” element in our contemporary culture. If young writers could do this with Harry Potter or Naruto, with Lord of the Rings or X-Men, then why can’t they do it with Melville, Hawthorne, Shakespeare, Morrison or Austen? And that’s precisely what Pitts-Wiley is doing, working with students who are not only “at risk” but already incarcerated, the kids that most people have already given up on. If he could get those young convicts to read Moby-Dick, then what excuse do I have as an adult academic for not reading it? So, I dug out the same battered old Bantam paperback edition that defeated me in high school, and I read through it, chapter by chapter, the yellowed pages peeling off in my hands. I discovered a very different book than I remembered. Moby-Dick made much more sense understood not as a classically constructed work but rather as a mashup. Melville absorbed as much of 19th-century whaling lore as he could, mixed it with elements borrowed from the Bible, Milton, Homer, Shakespeare, and countless other writers, and produced something that shifted between genres, adopted a range of different voices and perspectives, and refused to deliver what we anticipate from one page to the next. Understanding Moby-Dick as a mashup helped me to appreciate elements of the novel that had made no sense to me when I had read it years before, anticipating a simple, straightforward saga of men on ships hunting whales. Some of this has to do with the particular qualities of Moby-Dick as a novel and Herman Melville as a writer. Yet we shouldn’t stop there. The Russian literary critic Mikhail Bakhtin tells us that writers don’t take their words from the dictionary;
they extract them from other people’s mouths, and they come to us still dripping with saliva. As my mother used to say, “Put that down. You don’t know where it’s been.” But in this case, we do know where it’s been. It has a history. It’s already part of our culture. Writers don’t create in a vacuum. For all of our celebration of originality, authors draw heavily on stories, images, ideas that are circulating all around them. They take inspiration from other books, just as fan fiction writers take inspiration from J.K. Rowling. Indeed, that’s what my teacher was trying to get me to understand when she asked us to write about Biblical allusions in Moby-Dick. She wanted us to see how Melville was retelling stories from the Bible, giving them new meanings, and inserting them into new contexts. Melville was a great writer and a gifted storyteller, but that didn’t mean he made everything up out of his head. Melville and the other writers we study in high school literature classes borrowed from everything they had ever read, yet in the process of remixing those elements, retooling that language, and retelling those stories, they created something that felt fresh and original to their readers. Bakhtin tells us that writers have to battle with their materials, forcing them to mean what they want them to mean, trying to shed some associations and accent others. The borrowed material is never fully theirs; it leaves traces of its previous use, traces we can follow like so many breadcrumbs back to their sources, and in the process we can see Melville and these other authors speak to and about what came before…. Beginning writers need to draw on models and take inspiration from other stories they have read, but the dirty little secret is that so do gifted writers. They aren’t involved in some alchemical process spinning straw into gold, creating something from nothing. They are taking materials from their culture and deploying them as raw materials to manufacture something new. Thinking about authorship in those terms, as a cultural process, allows us to revitalize some of the things literature scholars have always done—talking about sources, exploring allusions, comparing different works within the same genre, watching an author’s vision take shape over multiple works. All of these approaches help us to see that writers are also readers and that understanding their acts of reading can help us to better understand their writing. If we can follow this process backwards in time, tracing how Melville read and reworked material from the Bible to create Moby-Dick, we can also trace it forward in time, looking at how other creators, working in a range of different media, took elements from Moby-Dick and used them as inspiration for their own creative acts. That’s what Ricardo Pitts-Wiley did when he wrote and staged Moby-Dick: Then and Now. It’s also what the incarcerated youth did when they participated in his workshop and learned how to read and rewrite Moby-Dick. Talking with Pitts-Wiley, it is clear that he didn’t see remixing as a matter of turning kids loose with the text to do what they want with it; he insisted that remixing must begin with a close reading and deep understanding of the original work. That’s why we think
that we can reconcile the goals of appropriation and transformation, which are part of the new media literacies, with a respect for the traditions of close reading that have always informed the teaching of literature. In this next section, Jenkins refines our definition of appropriation and its relationship to creative practice.
Is It Appropriate to Appropriate?

The process of digitization—that is, the converting of sounds, texts, and images (both still and moving) into digital bytes of information—has paved the way for more and more of us to create new works by manipulating, appropriating, transforming, and recirculating existing media content. Such processes are becoming accessible to more and more people, including many teenagers, as tools which support music sampling or video editing become widely available. A new aesthetic based on remixing and repurposing media content has flowed across the culture—from work done by media professionals in Hollywood or artists working in top museums to teenagers mashing up their favorite anime series on their home computers or hip hop DJs mixing and matching musical elements across different genres. Journalists have frequently used the term “Napster generation” to describe the young people who have come of age in this era of participatory culture, but this label reduces their complex and diverse forms of appropriation to the simple, arguably illegal action of ripping and burning someone else’s music for the purpose of file sharing. Owen Gallagher, a twenty-something who runs Totalrecut.com, suggests that his generation’s embrace of remix practices may go back to the nursery floor:

My brother and I were the proud owners of many Star Wars figures and vehicles, Transformers, Thundercats, MASK, He-Man, G.I. Joe, Action Man and a whole host of other toys from various movies and TV shows. Our games always consisted of us combining these different realities and storylines, mixing them up and making up our own new narratives. It was not unusual to have Optimus Prime fighting side by side with Luke Skywalker against Mumm-Ra and Skeletor. So, from a very early age it seemed completely normal for me to combine the things I loved in new ways that seemed entertaining to me. I think that my generation and those younger than me have grown up expecting this sort of interaction with their media, on their own terms. (http://henryjenkins.org/2008/06/)
Such early play experiences taught contemporary youth how to transform elements from popular media into resources for their own fantasy, play, and creative expression. As they have embraced new digital tools, they have been able to manipulate this source material with equal ease.
Most forms of human creative expression have historically built on borrowed materials, tapping a larger cultural “reservoir” or “commons” understood to be shared by all. Our contemporary focus on “originality” as a measurement of creativity is relatively new (largely a product of the Romantic era) and relatively local (much more the case in the West than in other parts of the world). This ideal of “originality” didn’t exist in the era of ancient bards, out of which sprang the works of Homer; historians who work on oral culture tell us that bards composed by drawing heavily on stories and characters already familiar to their listeners and often built up their oral epics from fragments of language shared by many storytellers. The ideal of “originality” only partially explains the works of someone like Shakespeare, who drew on the material of other playwrights and fiction writers for plots, characters, themes, and turns of phrase. Elizabeth Eisenstein, an important historian of print culture, has called our attention to a medieval text which offered four different conceptions of the author, none of which presumed totally original creation:
A man might write the works of others, adding and changing nothing, in which case he is simply called a ‘scribe’ (scriptor). Another writes the work of others with additions which are not his own; and he is called a ‘compiler’ (compilator). Another writes both others’ work and his own, but with others’ work in principal place, adding his own for purposes of explanation; and he is called a ‘commentator’ (commentator).... Another writes both his own work and others’, but with his own work in principal place, adding others for purposes of confirmation; and such a man should be called an ‘author’ (auctor)….
Our focus on autonomous creative expression falsifies the actual process by which meaning gets generated and new works get produced. Many core works of the Western canon emerged through a process of retelling and elaboration: the figure of King Arthur grows from an obscure footnote in an early chronicle into the full-blown text of Morte D’Arthur in a few centuries as the original story gets built upon by many generations of storytellers. None of these authors saw what they wrote as the starting point in a creative process; each acknowledged inspirations and influences from the past. And none of them saw what they wrote as the end point of a creative process, recognizing that their characters, stories, words, and images would be taken up by subsequent generations of creators. In fact, there were more than two hundred alternative versions of Alice in Wonderland published commercially in the twenty years following the book’s original release, including important first or early works by a number of significant children’s book writers and including versions which used Wonderland’s denizens to express everything from support for women’s suffrage to opposition to socialism. Carolyn Sigler has argued that this quick and widespread appropriation helped to cement the book’s place as one of the most oft-quoted works in the English language.
So, we are making two seemingly contradictory claims here: first, that the digital era has refocused our attention on the expressive potential of borrowing and remixing, expanding who gets to be an author and what counts as authorship, but second, that this new model of authorship is not that radical when read against a larger backdrop of human history, though it flies in the face of some of the most persistent myths about creative genius and intellectual property that have held sway since the Romantic era. Both ideas are important to communicate to students. We need to help them to understand the growing centrality of remix practices to our contemporary conception of creative expression, and we need to help them to understand how modern remix relates to much older models of authorship.
Neil Gaiman is a storyteller famous for reworking classic myths, folktales, and fairy tales, whether in the form of comics (The Sandman series), novels (American Gods), films (Beowulf) or television series (Neverwhere). During an interview, Gaiman asserted, “We have the right, and the obligation, to tell old stories in our own ways, because they are our stories”. This statement offers an interesting starting point for talking with your students about appropriation. In what sense does a culture have a “right” to retell stories that are part of its traditions? In what sense are they “our stories” rather than the legal property of the people who first created them? After all, much contemporary discussion of copyright starts from an assumption that authors have rights while readers do not. Gaiman’s statement pushes us further, though, since he asserts not simply a “right” but also an “obligation”. In other words, retelling these old stories for contemporary audiences is a way of keeping their influence alive within the culture. It is something we owe the past—to carry their ideas forward into the next generation. As we retell these stories, we necessarily change them, adding or extracting elements in order to emphasize those themes that matter most to our listeners, much as an ancient bard would expand or compress a particular telling of a story depending on listener response….
Literature teachers have already been trained to think about remix practices: they often teach about a writer’s sources of inspiration or allusions to other works. The new emphasis on remix culture among contemporary youths may give you an opportunity to revitalize some concepts central to your discipline and to talk with students about cultural practices that are part of their own everyday experience. Seeing remix as another way into thinking about allusion suggests an answer to a question we often receive from teachers: How can you tell if a remix is good? How can you tell if an allusion is good? An allusion is good when it is generative, when it extends the original work’s potential meaningfulness, when it taps the power of the original source to add new depth to your emotional experience of the current work. The same claims would hold true for other kinds of remix practices: as a general rule, a remix is valuable if it is generative and meaningful rather than arbitrary and superficial….
Our interview with Pitts-Wiley yielded some important insights about his ethical concerns around the term “appropriation” and his own creative practices in building productions around canonical texts.
When I came in contact with the new media literacies, many of the concepts were new to me, like the fascinating concept of remixing and appropriation. That’s an incredible choice of words to use in this new field: appropriation. I have spent much of my creative life trying not to appropriate things. I write a lot about African-ness—African culture and black people and this country’s relationship to Africa. I’ve never been to Africa, but I have a sense of its culture and its people from things I’ve read and seen.
I believe in spiritual villages, villages of connection. If you write a poem it’s a key to the village of poets. It’ll let you in. Once you’re in, all the poets are there. It doesn’t mean that you are going to be heralded and recognized as great or anything like that. All it means is you have a key to the village. I’ve always felt I’ve had a key to the village of African culture. But I was very determined to never, for instance, write a play in which I said, “I am a product of the Mandinka people,” or, “the Zulu people,” or I’m going to use their language as if I truly understand it. No, I don’t. But I had a sense of the humanity and the cultural connection, and I had to go to the village of the elders and say, “I have this word and I think it means this. What do you think?” Sometimes in that spiritual place the elders would say, “It’s a good word, you may use it.” Sometimes they would say, “It’s not a good word, it has no value.”
So when I came across the word “appropriation” in the new media literacies I thought to myself, I’m a product of a black culture where so much of what we’ve created has been appropriated and not necessarily for our benefit. The great jazz artists were not necessarily making money off of jazz. The record companies were making money. Our dance forms, our music, our lingo, all of those things have been appropriated many, many times and not necessarily in a way in which we profited. So when I saw the term used I had a lot of concern about it. I still have a lot of concern about it, because does that mean that everything is fair game whether or not you understand its value? Can you just use whatever you want because it’s out there? Before you take something and use it, understand it. What does it mean to the people? Where was it born? It doesn’t mean that it’s not there to be used. It’s like music in the air: it’s there for everyone to hear it. But don’t just assume, because you have a computer and can download a Polynesian rhythm and an African rhythm and a Norwegian rhythm, that you don’t have a responsibility to understand from whence they came; if I’m going to use gospel music I have a responsibility to understand that it’s born of a people and a condition that must be acknowledged.
Of course, in writing my adaptation of Moby-Dick it became very important that I didn’t appropriate anything that wasn’t in the novel from the beginning. People ask me, “Why Moby-Dick?” Because everybody was there, so I didn’t have to invent any people. It would have been different if I had to invent a whole race of people, where I would make a decision that I’m going to set it in South Africa in 1700. I don’t necessarily understand South African culture so I wouldn’t have done that. On the other hand, I had a real concern about appropriating hip-hop culture and putting it into what we were doing because I’m not a product of the hip-hop generation. I’m very much an admirer of it. There I really had to go to the source and ask the young people, “This is what I’m thinking. Is it appropriate? Is it real? Is it based in any kind of truth, in any kind of reality? What are your thoughts on this?”
If I could make any contribution to the new media literacies, it would have been to say to the appropriators, “Find the truth. Find the people. Go ask. Go talk to somebody. Do not count on a non-human experience in order to make a complete creation of anything.” So in remixing I was concerned also with who had access to appropriate things. If you’re media savvy, if you’re on whichever side, left or right, of the digital divide, you have access to unlimited knowledge. But does that mean that you know how to use that knowledge and you are respectful of its source?
Advice on Remixing Literary Texts
The first step in remixing novels is to stay honest to the original text. Put a value on that, understand it, appreciate it, and then start the remixing process. Edit down to the big questions. Why? What? Why is it important now? And then take the reins off, take the leash off, take the bit out of the mouth and let imaginations run wild, and be careful not to censor too harshly. I think censorship for respect: not necessarily respect for the original text, but respect for the reader, so you don’t write in a vacuum. You write for things to be read, and I read things and think, “Well, you didn’t care about anybody but yourself.” That’s not the purpose. This novel that we are working from was written to be read by others. Somehow you have to create, not for yourself, but for others, and allow the students to find their own honesty.
Encourage them to always go back to the original text, keep going back to the original text. That’s where the message is, that’s where there’s a certain amount of the truth. Otherwise all you’ve done is written your own story. You haven’t studied; you haven’t learned necessarily; you’ve just written your own text, and there’s a place for that, too. That’s important, to keep going back to the original text. There’s great stuff in the original text. In Frankenstein, Moby-Dick, Invisible Man, you keep going back and you’ll find that those people really had an idea about what they wanted to write about. Don’t copy them.
When I write music, I’ll hear a song and I’ll go, “Wow, wish I had written that song. I like that song that much. I wish I had written it.” And then when I sit down to write, I came early on to a realization that it wasn’t the melody or the lyric that I wanted to replicate, it was the feeling that the song gave me. So Moby-Dick gave me a feeling, and I was able to invest those feelings in my young company much more than I was in the original text. But Melville gave me such great feelings to work with: this is what I’m feeling, write that. Write that….
When we follow Pitts-Wiley’s advice, or, for that matter, the example provided by fan fiction, rewriting and remixing become extensions of the process of close reading and textual analysis, skills that are central to the teaching of literature. In this section, Jenkins explores what it would mean to read—and rewrite—Moby-Dick as a fan.
Schools have historically taught students how to read with the goal of producing a critical response. In a participatory culture, however, any given work represents a provocation for further creative responses. When we read a blog or a post on a forum, when we watch a video on YouTube, the possibility exists for us to respond—either critically or creatively. We can write a fierce rebuttal of an argument with which we disagree or we can create a new work which better reflects our point of view.... Yochai Benkler argues that we look at the world differently in a participatory culture; we look at it through the eyes of someone who can participate. Just as …we read for different things depending on our goals, we also watch for different things depending on whether we want to use the experience of reading as the starting point for writing criticism or as a springboard for creative expression. At its worst, reading critically teaches us to write off texts with which we disagree. At its best, reading creatively empowers us to rewrite texts that don’t fully satisfy our interests. Keep in mind that we may rewrite a text out of fascination or out of frustration, though many writers are motivated by a complex merger of the two….
In her book The Democratic Genre, poet Sheenagh Pugh discusses what motivates large numbers of women to write fan fiction. She suggests that some fans want “more from” the original source material, because they felt something was missing, and some write because they want “more of ” the original source material, because the story raises expectations that are not fulfilled. Pugh discusses stories as addressing two related questions—“what if ” and “what else.” Pugh’s discussion moves between fans writing about science fiction or cop shows and fans writing about literary classics (for example, Jane Austen’s novels). She focuses mostly on the work of amateur writers yet she also acknowledges that a growing number of professional writers are turning their lenses on canonical literature and extending it in new directions. She opens her book, for example, with a discussion of John Reed’s Snowball’s Chance (2001), which rewrites George Orwell’s Animal Farm. Other examples might include Isabel Allende’s Zorro (based on a pulp magazine character), Gregory Maguire’s Wicked (The Wizard of Oz), Jean Rhys’s Wide Sargasso Sea (Jane Eyre), Tom Stoppard’s Rosencrantz and Guildenstern Are Dead (Hamlet), J.M. Coetzee’s Foe (Robinson Crusoe), Linda Berdoll’s Mr. Darcy Takes a Wife (Pride and Prejudice), Nicholas Meyer’s The Seven-Per-Cent Solution (Sherlock Holmes), Alice Randall’s The Wind Done Gone (Gone With the Wind), and Sena Jeter Naslund’s Ahab’s Wife (Moby-Dick)….
Fans are searching for unrealized potentials in the story that might provide a springboard for their own creative activities.
We might identify at least five basic elements in a text that can inspire fan interventions. Learning to read as a fan often involves learning to find such openings for speculation and creative extension.
Kernels—pieces of information introduced into a narrative to hint at a larger world but not fully developed within the story itself. Kernels typically pull us away from the core plot line and introduce other possible stories to explore. For example, consider the meeting between the captains of the Pequod and the Rachel which occurs near the end of Melville’s novel (Chapter cxxviii). Captain Gardiner of the Rachel is searching for a missing boat, lost the night before, which has his own son aboard. He solicits Ahab’s help in the search. In doing so, he tells Ahab, “For you too have a boy, Captain Ahab—though but a child, and nestling safely at home now—a child of your old age too.” The detail is added here to show how much Ahab is turning his back on all that is human in himself. Yet this one phrase contains the seeds of an entire story of how and why Ahab had a son at such a late age, what kind of father Ahab might have been, and so forth. We may also wonder how Gardiner knows about Ahab’s son, since the book describes him as a “stranger.” The John Huston film version goes so far as to suggest that Gardiner was also from New Bedford, which opens up the possibility that the two men knew each other in the past. What might their previous relationship have looked like? Were they boyhood friends or bitter rivals? Were their wives sisters or friends? Did the two sons know each other? Might Ahab’s wife have baby-sat for Gardiner’s son? Soon, we have the seeds of a new story about the relationship between these two men.
Holes—plot elements which readers perceive as missing from the narrative but central to their understanding of its characters. Holes typically impact the primary plot. In some cases, “holes” simply reflect the different priorities of writers and readers, who may have different motives and interests. For example, consider the story of how Ahab lost his leg. In many ways, this story is central to the trajectory of the novel, but we receive only fragmentary bits of information about what actually happened and why this event has had such a transformative impact on Ahab, while other seamen we meet have adjusted more fully to the losses of life and limb that are to be expected in pursuing such a dangerous profession. What assumptions do you make as a reader about who Ahab was—already a captain, a young crewmember on board someone else’s ship—or where he was when this incident occurred? In fandom, one could imagine a large number of different stories emerging to explain what happened, and each version might reflect a different interpretation of Ahab’s character and motives.
Contradictions—two or more elements in the narrative which, intentionally or unintentionally, suggest alternative possibilities for the characters. Are the characters in Moby-Dick doomed from the start, as might be suggested by the prophecies of Elijah and Gabriel? Does this suggest some model of fate or divine retribution, as might be implied by Father Mapple’s sermon about Jonah? Or might we see the characters as exerting a greater control over what happens to them, having the chance to make a choice which might alter the course of events, as is implied by some of the exchanges between Ahab and Starbuck?
Different writers could construct different stories from the plot of Moby-Dick depending on how they responded to this core philosophical question about the nature of free will. And we can imagine several stories emerging around the mysterious figure of Elijah. Is Elijah someone gifted with extraordinary visions? Is he a mad man? Does he have a history with Ahab that might allow him insights into the Captain’s character and thus allow Elijah to anticipate what choices Ahab is likely to make?
Silences—elements that were systematically excluded from the narrative, with ideological consequences. Many writers have complained about the absence of female characters in Moby-Dick, suggesting that we cannot fully understand the world of men without also understanding the experience of women. Some works—such as the John Huston version—call attention to the place of women in whaling culture, if only incidentally. Melville hints at this culture only through a few scattered references to the families that Ahab and Starbuck left behind. These references can provide the starting point for a different story, as occurs in Sena Jeter Naslund’s novel, Ahab’s Wife; we might imagine another version of the story where Ahab was female, as occurs in Moby-Dick: Then and Now; or we might use the plot of Moby-Dick as the starting point for creating a totally different story set in another kind of world where women can play the same kind of roles as the men play in Melville’s novel, as occurs in the Battlestar Galactica episode, “Scar.”
Potentials—projections about what might have happened to the characters that extend beyond the borders of the narrative. Many readers finish a novel and find themselves wanting to speculate about “what happens next.” As Pugh writes, “Whenever a canon closes, someone somewhere will mourn it enough to reopen it.... Even though we may feel that the canonical ending is ‘right’ artistically, if we liked the story we may still not be ready for it to end, for the characters and milieu that have become real to us to be folded up and put back in the puppeteer’s box.” For example, we might well wonder what kind of person Ishmael becomes after being rescued. Melville offers us some hints—even if only because Ishmael chooses to tell this story in the first place. Yet, in our world, someone like Ishmael might be wracked with “survivor guilt,” feeling responsibility for the deaths of his friends, or wondering why he alone made it through alive. How might Ishmael have dealt with these powerful emotions? How might these events have changed him from the character we see at the start of the novel? Might we imagine some future romance helping to “comfort” and “nurse” him through his “hurts”?
The examples above suggest several additional aspects of reading a narrative as a fan. First, fans generally focus on characters and their relationships as their point of entry. Clearly, Melville’s novel, with its digressions and fragmentation, raises many more character issues than it resolves—for example, the richly drawn but only occasionally explored friendship between Ishmael and Queequeg, or for that matter the comradeship between Queequeg, Daggoo, and Tashtego, or the relationship between Ahab and Fedallah or....
Second, fans look for worlds that are richer, that have greater potential than can be used up within a single story. They are particularly interested in back story—the untold narratives that explain how the characters became the people we encounter within a particular story. Many contemporary television series reward this fan interest by parceling out bits and fragments of back story over time. Here, again, part of the pleasure of reading Moby-Dick is absorbing all of the incidental details about the ship, its crew, the other ships, and life in New Bedford, and through chapters such as “The Town-Ho’s Story,” Melville tells us again and again that this world is full of stories beyond the ones the novel tells.
For the most part, fan reading practices are directed at popular television series or films, but there’s no reason why they can’t be applied to works from the literary canon. Teachers might find that students respond well to being asked to look at Moby-Dick and other literary texts through this lens. Here’s a process you might follow:
• Encourage students to find examples of Kernels, Holes, Contradictions, Silences, and Potentials.
• Ask them to consider what purposes these elements play within the original novel.
• Invite them to speculate on how these elements might provide the basis for additional stories.
• Tell them to find other passages that shed insight into the core character relationships here.
• Discuss what elements would need to be in place for a new story to feel like it belongs in this fictional world.
• Have students write stories reflecting their insights.
• Share stories between students, especially those working with the same elements, so that they have a sense of the very different ways writers might build upon these same starting points.
Ricardo Pitts-Wiley took a very similar approach with the students in the Rhode Island correctional program, asking them to select a character and explore the novel from that character’s point of view. Students were encouraged to develop a character sketch which described what kind of person the character would be if he or she were alive today. These character sketches were then combined to construct a plot in which these characters met at the Spouter Inn and set out on a quest together.
Fan stories are not simply “extensions” or “continuations” of the original series; their writers are constructing arguments through new stories rather than critical essays. Just as a literary essay uses text to respond to text, fan fiction uses fiction to respond to fiction. You will find all kinds of argumentation about interpretation woven through most fan-produced stories. A good fan story references key events or bits of dialogue as evidence to support its particular interpretation of the characters’ motives and actions.
Secondary details are deployed to suggest the story might have plausibly occurred in the fictional world depicted in the original. There are certainly bad stories that don’t dig deeply into the characters or which fall back on fairly banal interpretations, but good fan fiction emerges from a deep respect for the original work and reflects a desire to explore some aspect of it that has sparked the fan writer’s imagination or curiosity. Fan fiction is speculative, but it is also interpretative. And more than this, it is creative. The fan writer wants to create a new story that is entertaining in its own right and offer it to perhaps the most demanding audience you could imagine—other readers who are deeply invested experts on the original work. The new story may operate within any number of genres that have emerged from the realm of fan fiction and which represent shared ways of reading and rewriting favorite works.
Using Herman Melville as her starting point, literary scholar Wyn Kelley offered her own perspective on the ethics of remixing and the relationship between creative appropriation and plagiarism. Her contribution helps to place our contemporary cultural practices in a much larger historical context.
Students may rightly ask, and often do, why such borrowing is seen as “creative,” “imaginative,” and “inspired” when Herman Melville does it but is called “plagiarism” when they remix the materials they find online. We need to take this question seriously. For years I would start my classes by dutifully reading out loud a stern definition of plagiarism and the dire results that would ensue if students tried it. My department had adopted a policy and language to be used by all, with the idea that on this issue, at least, there could be no ambiguity. If students were properly instructed and warned, then instructors could in good conscience impose the requisite penalties when the implicit contract broke down.
One year I looked up from my reading to see the students looking suitably bored, if not affronted. “Okay,” I said. “What does this paper say?” “Plagiarism is bad,” they said, almost in unison. “Don’t do it.”
Clearly students have gotten the message, or at least one message about plagiarism. When I pressed them, they seemed to understand the two basic points of any typical discussion of the topic. First, we have copyright laws to protect artists whose creative work might be stolen, writers whose intellectual property could be violated, scholars whose discoveries risk being purloined and exploited. Second, as writers we use citations to invite readers into the conversation we implicitly hold with people whose ideas we have read and reflected on in making a creative, intellectual, or scholarly work. Footnotes advertise the depth of our research (hence the common abuse of padding bibliographies), but they also acknowledge that thought does not take place in a vacuum, that it grows out of and depends on the ideas and findings of others. When we credit those sources or inspirations or mentors, we make their materials available for other readers to interpret and comment on too.
In this model, creativity and scholarship resemble the ideals of democratic society, to create free access and opportunity for all…. For many of us working with new media as well as traditional texts, the arguments against plagiarism can sound rigid and narrow-minded. How can we save a set of practices that are already fast disappearing in hundreds of thousands of publications in different media online? How can we inspire a sense of responsibility for individual intellectual or creative property in a culture that celebrates ideas and expressions shared worldwide? Herman Melville offers a couple of ways to think about this problem, a problem that in our current media environment remains open rather than completely resolved. The first is historical, the second philosophical.
The historical context for thinking about Melville’s borrowing, as well as that of any author writing before the codification of copyright law in the late nineteenth century, suggests that the modern concept of protecting intellectual property is a relatively recent invention…. When Melville began writing in the 1840s, most American authors did not automatically receive copyright protection by printing their works in the U.S. They had to publish them first in England and then sell them to American printers; and even then they might see cheap pirated versions of their works being sold by other printers. In this environment, authors could not always rely on protection of their own intellectual or creative property, nor did they always observe the boundaries of other people’s. Melville refers to this problem throughout Moby-Dick, often in oblique and humorous ways. For example, for his chapter on whales, “Cetology,” Melville mixes materials from known whaling authorities like Thomas Beale, William Scoresby, and J. Ross Browne, along with a host of historians, scientists, and philosophers, without in general identifying his sources. Then he creates fictional authors, throws in what a few of his seafaring friends had to say, and slyly mocks the whole matter of scholarly authority itself by arranging his whales according to size, as if they were books on a shelf (folios, octavos, etc.) rather than species and genera. Such blithe disregard for the principles of scientific discourse and taxonomy, while showing more respect for the opinions of “Simeon Macey and Charley Coffin, of Nantucket” than for Linnaeus, shows how fluidly writers might borrow from sources in Melville’s period. We see similar kinds of pastiche and collage in the works of Frederick Douglass, Harriet Beecher Stowe, William Wells Brown, Mark Twain, and many others throughout the nineteenth century.
A second, more personal and philosophical framework for thinking about how Melville viewed borrowing appears in a set of letters he exchanged with Nathaniel Hawthorne (and which I have discussed at length in my book Herman Melville: An Introduction [Blackwell, 2008]). Melville took a trip to Nantucket, where he heard the story of Agatha Hatch, a woman who rescued a shipwrecked sailor, nursed him back to health, married him, and then, when he abandoned her before the birth of a daughter, waited patiently seventeen years for his return.
Melville thought Hawthorne should write the story, and he sent him the materials he would need for doing so, along with a detailed account of how Hawthorne ought to write it. Eventually Hawthorne declined, and Melville wrote the story himself, although it does not survive. At one point, in insisting that Hawthorne take up the story, Melville claimed that it was never his own; it was always Hawthorne’s. “I do not therefore, My Dear Hawthorne, at all imagine that you will think that I am so silly as to flatter myself that I am giving you anything of my own. I am but restoring to you your own property—which you would quickly enough have identified for yourself—had you but been on the spot as I happened to be.” Melville implies that the story belongs to Hawthorne because Hawthorne, given his writing style and interests, is the person to produce it. He also implies that literary property can travel fluidly between one author and another, between writer and reader.
Melville’s “borrowing” of Agatha’s story, first from the person who told it to him and then, while he pondered its details, from Hawthorne, to whom he considered that it rightfully belonged, greatly stretches our notion of literary borrowing as happening when someone simply takes details from one source and puts them, or remixes them, into another work. Melville’s idea of borrowing involves a creative dialogue between different writers, a collaboration between writers and readers, in an endless process in which the finished product seems secondary to the fascinating relationships that evolve along the way. This concept suggests a far less goal- and object-oriented notion of literary property than our modern notions would emphasize.
I would want my students, then, to recognize that plagiarism is a somewhat narrow legal concept within a much broader and older tradition of literary, intellectual, creative, and scholarly borrowing and appropriation. It may seem hard for students to maintain two such conflicting views of borrowing at the same time. I would remind them, though, that we live with such a double awareness all the time. My father learned to drive in rural South Carolina during a time when ten-year-olds commonly drove cars. There was no legal driving age. Our traffic laws, which mandate that we observe speed limits, protect the bodies and properties of other drivers, and strive to maintain safe highways, have followed fairly recently from a period in which popular culture enshrined the automobile as an icon of speed and danger. We still live with those conflicting messages. So it should not surprise us to learn that plagiarism is bad—don’t do it—and at the same time that literary borrowing is a sign of creativity, and that the best writers can be the worst offenders.
As these samples from the Teacher’s Strategy Guide suggest, our understanding of appropriation works on multiple levels: Melville is understood as a master “remixer” who absorbed and transformed influences from his own time; Pitts-Wiley is understood as an artist who has appropriated and remixed Melville in the process of creating Moby-Dick: Then and Now; and Pitts-Wiley and Project NML have been developing pedagogical approaches which encourage students to critically engage with literary texts as part of the process of creatively and meaningfully reworking them.
The guide engages with remixing on other levels: Wyn Kelley includes discussions of the representations of reading and appropriation within the novel itself, seeing the characters as mixing and matching resources in constructing their own identities and finding their way through whaling culture. At the same time, we are encouraging educators to take a “remixing” perspective on our materials. While our focus is on Moby-Dick, we intended the guide to offer a framework which teachers can apply to a much broader range of literary and cultural texts, and this process has already started, even as we are field testing the prototype. At every stage, then, the principle of “learning through remixing” has governed this curricular intervention. If you have found these materials useful, we would encourage you to visit our website, where you can access the full study guide, including both the Expert Voices section sampled here and the actual classroom activities and lesson plans we developed.
Sexton Blake & the Virtual Culture of Rosario: A Biji
The discovery of the alphabet will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves… You give your disciples not truth but only the semblance of truth. They will appear to be omniscient and will generally know nothing.
- from Plato’s Phaedrus
The Empire never ended
I have worked as a community artist and published a book about it, called Community, Art & The State. I have worked as a digital artist and consultant, and published a book called Digital Creativity. I currently teach online media at Arcada, a university of applied sciences in Helsinki, Finland, while pursuing doctoral research in ePedagogy at the University of Art & Design Helsinki. ePedagogy represents an attempt to move beyond what have become the traditional focuses on “learning styles” to develop a pedagogy that engages fully with the ubiquitous networked learning that is becoming wired into our environment. The merging and reflowing of the media landscape requires us to devise strategies that move beyond a theoretical analysis of learning styles and their applications in the classroom, to the study of practical programmes for terraforming learning worlds.
In theory, practice and theory are the same. In practice, they’re not.
- Yogi Berra
Increasingly the one thing that remains true is that things do not remain the same. We may have the tools that Illich, Goodman and Holt lacked when they proposed learning networks in the 1960s, but we are no longer looking at the same landscape that they were looking at. We are now very well equipped to fight yesterday’s battle. However, the secret world that was hidden within yesterday’s world has now unfolded and become the public space in which we live. Before we congratulate ourselves on our new digital tools, we need to ask: what implicate order lies occluded within our world? How, in McLuhan’s terms, can we use our tools to probe for it? In order to understand the possible futures that we face, we need to understand our possible present, and our possible pasts. In other words, to know where we are heading we need to know where we are; and to know where we are we need to know where we have come from. Even then we will fail, but maybe in an interesting way.
The future has always caught us out.
- from Driving on Full Beam by Dave “Dave Cutlass” Cutlass
This is the story of a five-year project that attempted to address some of these issues by evolving a virtual culture that could serve as the basis of a media education curriculum. Camie Lindeberg and I began this project at Arcada in the spring of 2002. Our aim was to develop a mythos that could be used both as a metaphorical platform for learning and as an environment within which we could find virtual clients. Online media students at Arcada work, as far as possible, in real-life situations – and the people of Rosario, and its capital Marinetta, proved willing, time and time again, to conjure up work for our students. Some of it ended up on billboards in Rosario. Some of it ended up on the Web. According to the initial project documentation:
The Marinetta Ombro project began in spring 2002. The Media Department at Arcada in Helsinki, Finland, began to develop a long-term project to consolidate the online media course. This project was intended to act as a laboratory within which students could test their ideas; improve their planning, design and programming skills; and then watch as their experiments had real and lasting effects. It was agreed that this project should be innovative, and capable of attracting international users. It was also decided that it should exhibit commercial potential.
After lengthy discussion between staff and students it was determined that the project should take the form of a detailed and realistic synthetic world. This world would live on the World Wide Web, be designed to grow and develop in unpredictable ways, and be designed from the outset as an international partnership. Following an approach from La Kolegio Ilana, in the Mediterranean island state of Rosario, it was agreed that the synthetic world would be based upon Marinetta, the capital city of Rosario. It would have the joint aims of providing an educational resource and a pedagogical platform for Arcada, and a philosophical laboratory and an “ontological petri dish” for La Kolegio Ilana.
The official website stated that the initial form of the project was designed to “construct La Mentala Rosario, an online representation of the history, culture and commerce of the Mediterranean island of Rosario. Its purpose was to explore the theoretical and practical pedagogical possibilities such a simulation might provide.”
The process of building Rosario began as a set of mental exercises intended to locate the island and then situate it within its environment. Our goal was to imagine a culture and then find realistic ways of bringing it to life. The overarching pedagogical aims were to weave various threads of the online media course together; to provide a hub project to which students could contribute; to use this to provide a public face to the students’ work; and finally to use this to simulate more closely a realistic working environment.
The initial starting point for imagining was literary: specifically an anonymous novella called Sexton Blake and the Time-Killer, published in Union Jack, issue 1,071, on 19 April 1924, and subsequently republished in Shadows of Sherlock Holmes (Wordsworth Editions Ltd, 23 April 1998). SextonBlake.co.uk describes this as a story in which “Sexton Blake encounters a ghostly hound on London’s underground, is commissioned to find stolen microbes, searches for a lost Lord and a Trade Union Leader, and ends up on a very mysterious island.” The island is described in enough detail to provide a clear framework for further imagining, but briefly enough to leave a lot of room for serious playfulness.
Using this as a basis, staff and students began asking geographical and historical questions.
Questions included: what animals and plants would live on the island, and what industry would we find there? What happened to Rosario in the First World War? Is Rosario primarily European or African? What god(s) do the Rosarians worship and why? Students were encouraged to raise questions about the specificity of place and time. What makes Paris Parisian? Where do we find the nineteen-fifties-ness of the 1950s?
These questions led us to draw further inspiration from the loose trilogy formed by the final three works of Philip K. Dick (Valis, The Divine Invasion and The Transmigration of Timothy Archer), as well as the published fragments of his Exegesis; and two privately printed pamphlets from the late 1950s by Dave “Dave Cutlass” Cutlass (Driving on Full Beam and Honk If You’re Honking), among much other work.
From the outset, for a number of reasons (partly to do with the mythology of Sexton Blake, partly to do with the nature of science on Rosario, and partly to do with technical limitations in the software we decided to use), fog played an important role in the mythology of Rosario. The process of shrouding the familiar in fog only to have it revealed again later became a key to our understanding of Rosarian history and Rosarian culture. La Nebulo, the eruption of the volcano and the smothering of the island in dense fog in 1348, provided Rosarians with their own creation myth, which in turn formed the core of their national identity and the emotional landscape upon which Rosarian culture was built. In many ways, in Rosario, the world of Victorian London, the London of the British Empire, with its secret codes and in-built mysteries, never ended.
I had always heard that to be out in a London fog was the most wonderful experience, and I was curious to investigate one for myself. My friend went with me to his front door, and laid down a course for me to follow. I was first to walk straight across the street to the brick wall of the Knightsbridge Barracks. I was then to feel my way along the wall until I came to a row of houses set back from the sidewalk. They would bring me to a cross street. On the other side of this street was a row of shops that I was to follow until they joined the iron railings of Hyde Park.
- from In The Fog by Richard Harding Davis (RH Russell, New York, 1901)
We created digital versions of many of the key artefacts of Rosarian culture:
1. Severo Gonzales (1452-1501), Madonna e Nebulo (Madonna and Fog) - the iconic image of Ilana, and one of only three images known to depict her;
2. Piro Arduini (1463-1532), Voyaji pri la Magi (Journeys concerning the Magi) - a painting in the Rosarian gnostic religious tradition;
3. Lara Poloni (1718-1792), La peskanta viri retrovenas kun vakua saki (The fishermen return with empty bags);
4. Tina Malko (1911-1994), Mondo ek Rosario - one of her later, almost pop art, works.
We also translated Dek Manto’s La Marcharo Vers Hemo. This book was originally published in Rosarian in 1958 and is often said to be the Rosarian novel of existentialism. Its title can be translated into English as The Walk Home, and it has produced several popular quotations. The book was written in the style of a traditional Chinese zalu or zazu: a notebook not unlike the European commonplace book in form, in that it contains many disparate kinds of content. Unlike the commonplace book, though, the content combines by stealth into a narrative or worldview. Douglas Coupland has written a modern zalu (sometimes known as a biji) under the title Survivor. This was published in summer 2009 in issue 31 of Timothy McSweeney’s Quarterly Concern.
According to the Marinetta Ombro documentation:
In January 2002, Courtney Mojo announced that the committee established by Pluvego XI in 1993 had completed its work. She then produced a mighty fiction that astounded most people who heard it, through its exuberant reworking of the traditional combination of wild audacity and faint plausibility. She suggested that the only way to finally end the threat of the malperigeno gas was to transplant Marinetta, brick by brick, onto the Internet. Moldovak had argued vehemently that the gas could not be removed from the island but perhaps, Mojo suggested, the island could be removed from the gas.
A team from the Institute of Media at Arcada subsequently applied for the position of Design Strategists. Having been appointed in the early autumn, they began work on Marinetta Ombro – the first great scientific adventure of the twenty-first century. The project was intended to be in two phases. During the first phase the Institute of Media at Arcada would work with the Rosarian Project Team to construct a Shadow Marinetta on the Web. The two teams would then present a report to Pluvego XII in December 2004. If the first phase was deemed successful, and the recommendations that resulted from it were accepted, then work would begin on the second phase: making the shadow solid.
Worst may be better than best; better may be worse than good. Fantasy may become reality. Change the angle of view and wait for figure and ground to shift in relation to each other.
- from Driving on Full Beam by Dave “Dave Cutlass” Cutlass
Meanwhile, at approximately the same time in North Carolina, the College of Education at Appalachian State University, after researching the cost and performance of Activeworlds, “acquired a one world Activeworlds server that we called AppEdTech.” This had 1 million square metres of space and allowed 50 simultaneous users. We spent a great deal of time that spring and summer learning about it and building the first course, called Hypermedia in Instruction. We also developed a central plaza, a library building (empty), an alumni center (early on had the mindset that we would not exclude students after they graduated), and a Tele Port. We had no idea what kinds of space course areas would need so the courses were built as far out in the world as we could move them; thus we needed an “airport” to get students out to the course areas.
Dick Riedl and John Tashner at Appalachian State University had had an apparently similar vision concerning the possibility of tailoring a synthetic world for use as an educational environment. They were to become close friends, and an important source of ideas and comfort. They were soon joined by Steve Bronack and Rob Sanders, and together we founded an annual colloquium called The League of Worlds that became a vital forum for exchanging ideas and information. That this happened was interesting. That it happened as it did was almost miraculous. As our projects developed it became clear that we were working towards very different goals, but for some reason this only served to strengthen our collaboration. We were both fascinated with each other’s aims, while remaining clear about why they differed. We became a valuable counterpoint for each other. It was not that either world was “better” than the other: rather, both were vivid and both highlighted otherwise hidden aspects of the other. Both offered tantalising rumours about work taking place in another part of the world.
RUMOUR - Believe all you hear. Your world might not be a better one than the one the blocks live in but it’ll be a sight more vivid.
- from The Hipcrime Vocab by Chad C. Mulligan
By 2004, Arcada had a working virtual world, built using French software called SCOL. It was available to anybody prepared to download the player application, which was practically nobody. We also had two working websites. One of these detailed the ongoing programme of the Marinetta Ombro project, as it had become known officially. The other extolled the virtues of the actual Mediterranean island, as both a holiday destination and a source of sardine-based cuisine. Marinetta Ombro means “the shadow of Marinetta” in Rosarian, a language that, for historical reasons, bears an almost uncanny resemblance to the artificial language Ido.
At this point Camie and I attended a one-day seminar at the University of Helsinki during which we made a presentation about the Marinetta Ombro project, its aims and development to date, and showed the websites. The question period rapidly turned into a prolonged rant by a philosopher and an art historian who together accused us of “brazen immorality”. Our crime lay in what they claimed was a deceitful attempt to dupe innocent people into mistaking fiction for fact. They argued that the island of Rosario did not “really” exist, and that the websites might therefore lead “an unwary American tourist” (their example) into attempting to book a holiday there – an attempt that would, at the very least, cause them embarrassment and potential humiliation, and might somehow end up costing them money.
Our problem in addressing this issue did not lie in the fact that we had not considered it. We had considered the razor edge between “fact” and “fiction” at great length during the early planning process. Our problem lay in trying to remember why we had finally dismissed the issue as ridiculous.
In 1719 Daniel Defoe published Robinson Crusoe. It was not published as fiction, but as the autobiography of a long-suffering traveller. In 1729 Jonathan Swift published A Modest Proposal: For Preventing the Children of Poor People in Ireland from Being a Burden to Their Parents or Country, and for Making Them Beneficial to the Publick anonymously. It was not published as fiction, nor satire, but as a political pamphlet suggesting cannibalism as a prudent solution to Irish poverty. In 1967 a group of hippies tried to exorcise the Pentagon. “The brainchild of Abbie Hoffman, the plan was for people to sing and chant until it levitated and turned orange, driving out the evil spirits and ending the war in Viet Nam. The Pentagon didn’t move.” This was not promoted as a prank or as satire, but as a serious political action.
The earliest recorded historical event on Rosario occurred in approximately 1452 BC, when Emperor Tutmosis III, husband of Queen Hatshepsut, ascended to the throne after her death in 1480 BC, and began the great territorial expansion of the Egyptian New Kingdom. Phoenicia and Palestine were conquered, and the island of Rosario was subsumed into the Empire, serving as a convenient port and supply base.
After the First Ecumenical Council of the Catholic Church, held at Nicaea in 325, on the occasion of the heresy of Arius, the island was briefly occupied by fleeing Gnostics who found a safe haven among the curious, intelligent but oddly vague Rosarians. The Rosarians embraced the very qualities that Irenaeus had complained about in his classic refutation of Gnosticism, when he claimed that “every one of them generates something new, day by day, according to his ability; for no one is deemed perfect [or, mature], who does not develop... some mighty fiction”.
The Gnostics may be said to have completed the cultural temperament of the Rosarian people. From the time of their arrival Rosarians did their best to generate what they termed mighty fictions, and they developed this, and the spreading of rumour, into the island’s indigenous performing art forms.
In 787 the Second Council of Nicaea met for the first time on 1 August, at the command of Empress Irene, then acting as regent for her son Emperor Constantine VI, who was still a minor. She had been petitioned both by Patriarch Paul IV of Constantinople before his abdication from the see in 784 and by his successor as patriarch, Tarasius. The aim of the Council was to unite the church and to condemn the decrees passed by the council of 338 bishops held at Hiereia and St. Mary of Blachernae in 754.
At the same time a public assembly took place in Stelistoturo on the island of Rosario, on behalf of the Tripartite Church, should it exist. The Church was said to hold syncretic Gnostic beliefs. People arrived from North Africa, the Eastern Mediterranean, Southern Europe and the Middle East to confirm the primacy of the teachings known collectively as the Rosarian Gospels. These were contained in four books known as the Tripartite Testimony; The Second Treatise of the Great Seth; the Gospel of the Egyptians; and The Book of Zostrianos. They had been brought to the island for safekeeping during, or shortly after, the first Council of Nicaea in 325. The church declared itself to have no leaders, to have no members, and to recognise none. It decreed that “the Father is a unity like a number, for he is the first and that which he alone is. The soul of the first man though is from the Logos while the creator thinks that it is his”. They kept the term KING FELIX as “an empty vessel”; in modern terms, a hermetically suspended signifier with no signified. Or they believed that they did.
Richard Harding Davis... was an American war correspondent who, in the Victorian age, covered the Greco-Turkish, Spanish-American and Boer wars and, later on, the Russo-Japanese war and the Great War, until his death in 1916. It is a reminder of how many wars were going on during the supposedly peaceful reign of Queen Victoria, and writers of detective stories often seemed to get mixed up in them.
- from Victorian Villainies selected by Graham Greene and Hugh Greene (Penguin, 1984)
The problem of verisimilitude had concerned us from the outset of the project. We had rejected any suggestions that we should build a “virtual Arcada” on the grounds that the imitation would always be inferior to the original and would be pointless even if it were perfect. In fact it would be especially pointless if it were a perfect simulacrum, because it would then be the equivalent of a 1:1 scale map.
We had also rejected suggestions that we should build a “planet in a galaxy far, far away, where alien tribes with shifting loyalties engage in perpetual war”, on the grounds that little meaningful discussion or judgement could take place in a world in which anything goes. If the rules are infinitely elastic then you cannot bend them; everything is beautiful in its own way; and student-client relationships are whatever anybody manages to claim they are. We wanted a virtual world that had its own logical centre, one that related to, but was not the same as, the culture within which we “really” lived and studied. We wanted to describe a culture that was either real or virtual, or both, according to your angle of view.
We could cite a lot of prior art for the position we were adopting. Marcel Duchamp’s readymades claimed pre-existing industrial objects as art without claiming that they were art. John Cage claimed the ambient noise in an apparent silence as music that could be enjoyed as if it had been performed by the pianist not performing it. Artists from Robert Smithson to Richard Long have changed landscapes in larger or smaller ways, and left them for people to notice or ignore as though they were “real” landscapes. Garrison Keillor’s Prairie Home Companion was neither a parody nor an evocation. It was the thing itself, had the thing itself existed. In 2000 the writer Daniel O’Mara wrote a series of eloquent letters to leaders of Fortune 500 companies in the guise of an irritated dog, which he wasn’t; these were later collected and published in issue 5 of Timothy McSweeney’s Quarterly Concern.
Our problem in discussing our approach at a seminar, though, was that “art” was not apparently a philosophically or scientifically respectable defence when discussing the development of a pedagogy deliberately built upon a fictional edifice – even when addressing art historians. According to Dick Riedl and John Tashner,
The conceptual framework for the College of Education at Appalachian State University speaks to the social construction of knowledge and the need to develop a community of practice. Thus, any effort to develop distance education in the College of Education absolutely must consider the ways in which the participants become part of the community of practice and are able to construct knowledge in a social context. In Building Learning Communities in Cyberspace, Rena Palloff and Keith Pratt (1999) argue that the development of community in online settings is critical to the success of distance education. And they argue that the online community must pay careful attention to human needs that extend outside the specific course content. Gibson (2003) also notes the growing interest in the development of learning communities in online settings and introduces several forms that a community may take. But the question that remains for planners of online courses is what their community looks like – what kinds of interactions are necessary to develop a successful learning
community? The primary tools for community interactions are email (including listserves), chats, blogs, wikis, discussion boards and other Web 2.0 applications. They present opportunities for participants to interact in an online social context. But, if we look carefully at the social context of on-campus programs, we will see that it and any resulting community of practice extend well outside the bounds of the actual classroom meeting time and include far more than the content of the class. Students, faculty, and other members of the community of practice interact on many levels and in many ways, more often in unstructured settings than not and often in situations that arise by chance. It may not be enough to assume that tools lodged in the context of a class provide ample opportunity to develop a community of practice. It is with these thoughts in mind that our program decided to see if an online community could include more than just the instructional elements of the class. We wanted to know if it could include opportunities to do other, non-class activities and to have chance encounters with other students (who may or may not be in the same classes), with faculty, and with anybody else who might be part of the broader community that is found on a college campus. It seemed that the first challenge toward attempting to create this vision of an online community was to develop a means for participants to be substantially aware of the presence of others.
In the early part of the twentieth century the college in Lampo (now known as Marinetta) began to attract a number of scientists, particularly those who were on the fringes of scientific respectability. They pursued research that the outside world rejected or ignored: research into the cosmic ether, ectoplasm, and difference engines. In 1923 a Welshman, Professor Rufus Llewellyn, came to the island, which he claimed was “the secret science capital of the world”. He had invented what he termed “z tubes for overcoming the atmospherics in wireless”. These had been greeted with such scepticism and hostility that he had thought it wise to disappear from public view for some time. He immediately became friends with another immigrant named TI Moldovak, a former colleague of Nikola Tesla who had also become an unwelcome presence in the halls of science. Together with Llewellyn, Moldovak began a series of experiments inspired by the work of Tesla. Later they switched to biological research. They planted rare bacilli in the water of Rosario, which appeared to make time slow down. Many Rosarians returning from abroad now claim that they can “no longer be bothered to die”, and that Rosario is the island where time stands still. Tourists often assume that this indicates a naïve belief in the lingering power of the bacilli. Those in the know understand that it merely implies an unspoken allegiance to The Tripartite Church, should it exist, and an expectation that the Gift of Return is one unfolding of the long-awaited anamnesia.
!" Nothing is going on and nobody knows what it is. Nobody is concealing anything except the fact that he does not understand anything anymore and wishes he could go home. The universe is information and we are stationary in it, not three-dimensional and not in space or time. The information fed us we hypostatize into the phenomenal world. Real time ceased in 70 C.E. with the fall of the temple at Jerusalem. It began again in 1974 C.E. The intervening period was a perfect spurious interpolation aping the creation of the Mind. “The Empire never ended,” but in 1974 a cypher was sent out as a signal that the Age of Iron was over; the cypher consisted of two words KING FELIX, which refers to the Happy (or Rightful) King. If the centuries of spurious time are excised, the true date is not 1978 C.E. but 103 C.E. Therefore the New Testament says that the Kingdom of the Spirit will come before “some now living die.” We are living, therefore in apostolic times. - from the (mostly unpublished) exegisis of Philip K. Dick
Some people continued to argue that there was no reason for any course like ours to have a virtual world as its home, but their number lessened considerably when, in 2005, we opened the third version of Rosario. This time we chose to house it in Second Life. We debuted the new island officially in December 2005, after three months of intense learning and experimentation. Moving to Second Life gave us immediate advantages, in terms of our long-term plans. We were able to outsource the burden of maintaining the world itself to somebody else. Keeping the servers running and debugging the 3D engine were no longer our problem. Instead we could concentrate on visualising the complex culture of which we were now part. We also gained a large community within which to house our work, and with whom to communicate. The existence of the larger community in Second Life became central to our work. We began to engage this wider community within our pedagogy. The fact that they had no interest in us made them realistic customers for the products our students produced and the research they undertook. Tourism students, for example, travelled Second Life, researching what users do and how they spend their time and money. They then drew up a detailed tourism strategy for Rosario, suggesting the kinds of facilities the online media students should create in order to attract more users. We were able to use their projections to measure the success or failure of the new facilities. Other departments produced business plans for Rosarian companies; community health plans that used the population figures that had been calculated over the previous years; inworld security features, and an XML application for marking up ebooks documenting Rosarian culture. Once we began working in this way Rosario became a vital philosophical laboratory and “ontological petri dish” for Arcada as well as for La Kolegio Ilana. One important aspect of this was the controlled chaos that inevitably resulted
*&- A
from building and maintaining a public island: trying to attract for one purpose people who are in the world for entirely different reasons. At its best this produced a vibrant process of creative interference, which greatly enriched our existing strategies. Camie Lindeberg and I have both written and given presentations elsewhere about the effects of uninvolved and involuntary participants, and creative interference. See, for example, Unifying the curriculum in a digital playground, Owen Kelly and Camilla Lindeberg (March 2004); Ghost Towns & Virtual Worlds, Owen Kelly (May 2004); The Meeting of Two Classrooms, Camilla Lindeberg (November 2005); Abstraction Haunted by Reality, Owen Kelly (June 2006).
Important? Me? I looked at him in surprise, and wondered why he was asking. I was lost but I had only just realised how lost I was. I was Rosarian. Of course I was important. But so was he. And so was everyone else I would meet on the walk home.
- from La Marcharo Vers Hemo by Dek Manto
Steve Bronack, Dick Riedl and John Tashner have pointed out that Watching a play as recorded by a moving picture camera is not the same as watching (and hearing, of course, since the moving picture camera had not yet learned to record and synchronize sound) a play performed live in the theatre. As long as the moving picture camera was used in a fixed location (as if the viewer were sitting in the center of the 5th row) to record an event as it occurred from beginning to end it would continue to be a poor substitute for the real experience. Nearly fifty years after the moving picture camera was invented, Sergei Eisenstein, DW Griffith and others began to experiment. They moved the camera to give different views of the scene. They interspersed close-up shots among the longer shots of the scene. They introduced cuts to different locations to show action that was taking place simultaneously and used flashbacks to show action that had happened previously. Through these experiments with cinematic staples we take for granted in today’s movie theaters, Griffith and others began the process of inventing the movie. As they did, the old problems cinematographers were trying to solve with the new technology were displaced – and new problems emerged. We find ourselves now in a similar situation in education. We have powerful new computing and networking technologies that provide new ways to teach and to “do learning”. But in the process we more often recreate what is familiar to us using these technologies, than creating anything truly different. Most of our distance education settings look like traditional face-to-face classes transferred to the Web, or to video, or to whichever medium is employed. But they are not the same; just as watching a
!" play recorded with a moving picture camera clearly is not the same as attending a play. As educators we want to reach out to populations who need and desire an education and we see these new tools as a means for doing so. Unfortunately, we often find that using distance education to reach these populations just seems to make learning more distant. Instead, educators should look for opportunities to use new technologies to make learning meaningful, not simply more available. Transferring educational practices done in one setting to another is a trivial effort likely to continue disappointing those organizations that support and engage in it. Educators should spend less time employing new technologies toward solving existing problems. Instead, educators should focus upon educational goals and their underlying assumptions about teaching and learning as they develop distance-learning environments. We should extend more effort toward solving the problems that come from the interface between the goals that reflect our current and emerging missions – and the emerging technologies we have to help us get there. We need to struggle to understand the relationship between our assumptions about teaching and learning and the technologies we are employing to deliver education at a distance. Anamnesia: the process of forgetting to forget; the unfolding of an implicate order of previously occluded memories. - from Honk If You’re Honking by Dave “Dave Cutlass” Cutlass
Appalachian State University have utilised the idea of metaphor in a very different way from us, and not surprisingly this has produced very different results. For AppState there are two elements of this design that are important. The first we have already alluded to: making sure all the information and materials are in place and the activities we ask students to do are such that they engage the students in personal meaning-making and require and facilitate student-to-student and student-to-instructor interactions, both formal and informal. The second element involves designing space that encourages exploration and interaction. For this second element we have chosen to explore the use of metaphors. According to Lakoff and Johnson, a metaphor is defined as “understanding and experiencing one kind of thing in terms of another” (as cited in Cates, 1994). The comparison is not literal. Rather, one thing, often familiar, is a figurative representation of the other, often abstract or unfamiliar. Aristotle understood the value of metaphor when he said, “Ordinary words convey only what we know already; it is from metaphor that we can best get hold of something fresh.” It is important to note that a virtual representation of a physical space or artifact is not in itself metaphorical. Rather, the virtual representation must be different in its representation (Cates, 1996). A graphical user interface (GUI) that is metaphorical must be based on either an explicit or implicit metaphor (Cates, 1996). In other words, it makes little difference as to whether the metaphor is obvious to the user
*&- A
or not. The important aspect is that the metaphor is, in some way, different from what it is representing and that it works to provide some insight into, or aid in understanding of, the idea, concept, or thing it represents. According to Black, as later expanded upon by Cates, there are two types of metaphors: underlying or primary, and auxiliary or secondary (as cited in Cates, 1994). An underlying metaphor is the main metaphor used. For example, in one of the courses the underlying metaphor of the Wild West (representing a new frontier or beginning) was used as the main metaphor throughout the virtual learning world. An auxiliary metaphor is one that is consistent with the underlying metaphor and is used to support or enhance this main metaphor. An example of an auxiliary metaphor in the Wild West course would be a corral or a saloon.
We have used the world of Rosario as a crucible within which ideas could be heated until they melted, exploded or created a successful reaction. For us Rosario exists as a body of lore that can be built upon in the same way that, for some others, the Klingon Empire exists as a body of lore. The exercise of translating Shakespeare into Klingon may be useful in several different ways. It could focus attention on Shakespeare’s actual choice of language; it could focus attention on the problems and issues of translation; it could focus attention on issues of cultural difference that come to the surface when we ask why Klingons would want to receive the works of Shakespeare. A question to be asked, then: is Rosario a “metaphor” in the sense that Lakoff and Johnson use the term? My sense is that it is not, and therein lies the difference between AppState’s endeavours to create successful virtual worlds and our attempts to terraform a virtual culture. Our concern, in the end, has not been with the virtual but with the hyperreal – with what you might “really” see when the Empire passes and anamnesia sets in.
The Gnostic Christians of the second century believed that only a special revelation of knowledge rather than faith could save a person. The contents of this revelation could not be received empirically or derived a priori. They considered this special gnosis so valuable that it must be kept secret. Here are the ten major principles of the gnostic revelation:
1. The creator of this world is demented.
2. The world is not as it appears, in order to hide the evil in it, a delusive veil obscuring it and the deranged deity.
3. There is another, better realm of God, and all our efforts are to be directed toward a) returning there; b) bringing it here.
4. Our actual lives stretch thousands of years back, and we can be made to remember our origin in the stars.
5. Each of us has a divine counterpart unfallen who can reach a hand down to us to awaken us. This other personality is the authentic waking self; the one we have now is asleep and minor. We are in fact asleep, and in the hands of a dangerous magician disguised as a good god, the deranged creator deity. The bleakness, the evil and pain in this world, the fact that it is a deterministic prison controlled by the demented creator, causes us willingly to split with the reality principle early in life, and so to speak willingly fall asleep in delusion.
6. You can pass from the delusional prison world into the peaceful kingdom if the True Good God places you under His grace and allows you to see reality through His eyes.
7. Christ gave, rather than received, revelation; he taught his followers how to enter the kingdom while still alive, where other mystery religions only bring about amnesis knowledge of it at the “other time” in “the other realm,” not here. He causes it to come here and is the living agency to the Sole Good God (i.e. the Logos).
8. Probably the real, secret Christian church still exists, long underground, with the living Corpus Christi as its head or ruler, the members absorbed into it. Through participation in it they probably have vast, seemingly magical powers.
9. The division into “two times” (good and evil) and “two realms” (good and evil) will abruptly end with victory for the good time here, as the presently invisible kingdom separates and becomes visible. We cannot know the date.
10. During this time period we are on the sifting bridge being judged according to which power we give allegiance to, the deranged creator demiurge of this world or the One Good God and his kingdom, whom we know through Christ.
- from the (mostly unpublished) exegesis of Philip K. Dick
In 2007 the Eurovision Song Contest was held in Helsinki, and the people of Rosario campaigned to be recognised by the organisers and allowed to submit an entry. Perhaps unsurprisingly the organisers remained unmoved, despite the viral videos that appeared on YouTube in favour of the campaign. Undeterred, Rosario commissioned an entry, recorded a video and broadcast it on YouTube, where it was a success. The song was written and played by L’angelot. It was called Al Dek Manto, in honour of Dek Manto, the great Rosarian writer. You can find it here: http://www.youtube.com/watch?v=18BzfsbVCJA The video was broadcast several times on Finnish television during the period of the song contest. In what proved to be the climax of the Marinetta Ombro project, students from Arcada, in collaboration with a team of students from ITT, Dublin, organised a parallel Eurovision event on Rosario which ran continuously for forty-eight hours. The attendance during that period was close to (or just over) one thousand unique avatars. By Second Life standards, it was a huge event. The students had put in an extraordinary amount of voluntary effort, and
*&- A
afterwards we sat back wondering what (if anything) we could do next. Life could be taking you somewhere and you could be co-piloting your vehicle. Drive on full beam and you’re travelling as fast as you can inside the speed limit. The speed limit is 60 minutes per hour and is unlikely to be raised anytime soon. - from Travelling on Full Beam by Dave “Dave Cutlass” Cutlass
In June 2007 the current phase of the Marinetta Ombro project officially ended. We felt that we had achieved enough that we needed to pause and take stock of the journey. We also felt that we needed to change our focus for two years in order to avoid simply duplicating our efforts. Accordingly we tore down the island and began rebuilding it in line with some of the lessons we had learned about Second Life. We also began to experiment with ideas of telepresence. In the period 2009-2011 we will be exploring the use of Second Life as an alternative to video-conferencing. This is a deliberate shift away from the virtual culture we have created back into the delusional world of consensus reality. During this time period we will be on the sifting bridge being judged according to which power we give allegiance to. In 2011 we shall return to the virtual culture and ask how we can use it as the basis of a cross-media group learning platform. This will be known as Rosario Familiara, the Family of Rosario, and will incorporate a narrative game structure into the cultural life of the island.
Biji
Life Span: 220-1912 AD
Natural Habitat: China
Practitioners: Duan Chengshi, Ji Yun, Hong Mai, Zhao Yi, Qian Douxin
Characteristics: musings, anecdotes, quotations, “believe-it-or-not” fiction, social anthropology.
Biji can be translated as “notebook”, and a biji can contain legends, short anecdotes, scientific and anthropological notes, and bits of local wisdom. (True to its polyglot form, the biji is known by many names: xiaoshuo, zazu, suoyu, leishu, zalu.) Accounts of everyday life mix with travel narratives and stories of the supernatural; tales of romance and court intrigue are interspersed with lists of interesting objects or unusual types of food. The unstable styles and irregular content ultimately cohere between fiction and non-fiction; biji offer a top-down vision of a culture and its time.
- from page 39 of issue 31 of Timothy McSweeney’s Quarterly Concern
On the Database Principle – Knowledge and Delusion
I think, therefore I am.
- René Descartes
You know who you are and you know all about yourself. But just for day-to-day stuff notes are really useful.
- Leonard Shelby
The following is an experiment I initiated without being fully convinced of its success. I had my doubts about whether it would work, and whether it was something one could expect of an audience at all. Now, after having done it, and while I am writing these last words as an introduction to confirm the experiment’s methodological settings, I am inclined to believe in it. It is quite possible that it works. Of course, the audience remains an unreliable constituent. Most probably, you will have read the www (in a fragmented way, obviously) or you might have visited one of the last Documentas at Kassel (again in a fragmented way, there is no alternative) or you may have watched Christopher Nolan’s film Memento (this, also, is only possible in a fragmented – though different – manner) or you may have learned other ways to err or to get lost. If there is even a faint common basis here, it is quite likely that the following conjectures will have found a symbolic arrangement and can thereby be communicated as knowledge, as an epistemological object.
Inside. Desktop. Day. Maybe this is insane. When I look around, right this instant, as I begin to write, I can only see chaos. Dozens, maybe hundreds of notes are spread all over the desktop, some are at least stapled together, some are hardly recognisable as former piles. In-between there are books, a few of them piled on top of each other, most of them flipped open somewhere, sometimes on a page that was bookmarked, or sometimes on a random page where I stopped reading for different reasons. Things look even worse on the screen of my working device, presented to me as my other desktop: thousands (millions, trillions, to be more precise, but for what reason?) of – as people like to say: virtual – notes are all over the place. In this case as well, some of them are bundled, stapled together, piled up, partially loosely spread, somewhere... all of them somewhere, literally “flipped open“ by the search engine I use to dig for the notes. In front of me, right next to the keyboard, is a real paper pile consisting of six or seven notes. It’s chronologically organised. The one at the bottom is from last night and the night before; I wrote on it, drew on it, scribbled words, notes, arrows, circles, highlighted things.
&(/') B" ! 1
The one on top of the pile is from today. It refers to the slips of paper underneath. It shows the provisionally final structure I anticipated a moment ago. The structure of what I just started to write.
Outside. Global Village, Marketplace. Day. Once again: If insanity denotes something that is neither objective nor objectifiable; something that is not generally shared or something that can’t be shared at all; something that is not arranged along universally acknowledged or intelligible criteria; something that does not allow for a least common denominator, then globalisation poses a problem for us. Most things may not be uttered aloud in the marketplace of our global village, if any suspicions of insanity are to be avoided. The complexity and the complications of worldwidisation or worldisation must explicitly not be made imaginable in the marketplace of our global village (in contrast to the Kassel Binding Brewery). Only that which transcends uniqueness may be uttered aloud. Culturally specific characteristics, for example, are to be avoided. The discourse that takes place at the marketplace of the global village would be one that is restricted to the least common denominator. And this also is a form of insanity. The individual, the unique and the extraordinary – non-objective knowledge – stay off the record. It does not belong to the archive. It is not being recorded. In accordance with what I said earlier, this is a consequence of the universal self-assured reasoning, which is organised along the paradigm of the linear perspectival viewpoint. And it is the consequence of a limited horizon – an inevitable effect of its inherent concept of referentiality. The global viewpoint belongs to an extreme linear perspectivity; in a way, it’s the bird’s eye view or the god’s eye view. The distance between the viewer and the object is enormous. You can only see the earth as a globe when looking at it from outer space. (Otherwise, let’s face it, by earthly standards only, it looks more like a panel or disc.) However, this perspective is available to only a few. It has left deep impressions on some of those who have been able to take this point of view. Ulrich Walter, for example, recounts how he was struck by a kind of “space-sensibility”,
when he was in outer space: “Up there, there is deep black on the one side and a light blue towards the earth. Thereby, you are in-between things. […] You are in a transitional situation, gaining distance to everything. And this exactly is a completely different feeling. This is space-sensibility.”1 This particular perspective with an overview seems to lead towards a sense of assuredness that seemed lost for a period of time. Likewise, NASA astronaut Jerry Linenger, when asked whether his experiences in space had altered his religious convictions – or, to be precise, whether he believed in God – answered: “[…] seriously, the experience reinforced my faith. I believe in God and looking down from up there, I was reminded that there is a Creator...”2 It is relatively easy to explain this phenomenon with the social-psychological mechanisms generated by the symbolic form of the perspectival depiction. As Brunelleschi’s experiment has shown, the perspectival depiction forces the recipient, via its construction, to assume the perspective of the producer. The recipient is now able to say: “Yes, this is my view. This is how I see it as well.” Thus, a social bond has been tied between recipient and producer (or with other recipients), which creates the previously mentioned notion of belonging to a community: “This is our viewpoint.” Accordingly, the inevitable assumption that there must be a producer is a constitutive implication. On the other hand: God is dead. We should not forget that. He has been subject to functional secularisation. This is also an aspect of Enlightenment. And an aspect of the rational perspective. And this is why we are now dealing with the archive formatted as a database.
Inside. Desktop. Day. I give up. It’s not working. It will not arrange itself. At least not in a way that I would only need to copy out into writing. I start anyway, already started... to begin... – Was that the beginning? Did I miss it once again, the beginning? The start? Do I have to act through writing again with the sure feeling of popping into or out of something? Actually, so I think, it should be like this: being a scientist, I hold this knowledge, somewhere, inside my head or on these notes, and I simply need to write it down. At best in a way that makes it easy for my potential readers to understand it, to share it with me, to own it and to independently have access to this knowledge after reading it. Why can’t I accomplish that? Why can I – to be honest – never accomplish that?
Inside. Kassel, Documenta11_Platform_5, Binding Brewery. Day. To me, the Documenta11 was an attempt to tackle this “New Medium”. It was an example of what Manovich advocated, namely, “to develop poetics, aesthetics and ethics of this database.”3 The whole concept of the Documenta11 lasted for 18 months: Five so-called
&(/') B" ! 1
“platforms” – each for discussions, conferences, workshops, books, film and video presentations – took place at various locations – Vienna, New Delhi, Berlin, St. Lucia, Lagos, Kassel. This alone suggests that worldwidisation was the main focus of considerations – by all means, in Derrida’s intended sense. The same becomes clear when looking at a detail only; precisely, when looking at the exhibition without trying to approach this specific form of presentation with an understanding of art that is appropriate to artworks of the beginning of the 20th century. Thus, it becomes evident that considerations focused on real experiences are to a great degree modified by mediatisation, just as is the utopia of a multicultural-pluralistic, postcolonial-heterogeneous, diverse-democratic, telematically structured “world society”. “The common way of understanding art is by no means sufficient at the Documenta11,” writes Franz Billmayer. “In order to be able to understand core aspects, you have to read a lot and you have to activate a lot of knowledge from outside the art context.”4 This is one of the reasons why the conception has been criticised for being visitor-unfriendly. But is this a weakness? Was it a failure, a result of carelessness? I don’t think so. Rather the opposite is the case: The scarcity of annotations to individual works fits perfectly well with the concept of visitor-unfriendliness. Presumably for the same purpose, these annotations hardly ever help to establish a more profound understanding of the works. Rather, they demonstrate to recipients their own profound lack of knowledge (and that this becomes tolerable only through the “future culture technique” of err-ability). Likewise, the plain abundance of time-based works – videos, slideshows, etc. – with their oftentimes unknown start time, and most obviously the Binding brewery’s interior design as a hardly manageable maze of white cubes – all this instils in the recipient a feeling of nescience on the one hand and, on the other, the pressure of having to make quick decisions under drastic time constraints.5 Drawing on his practical experiences, his readings of different critics and his work with psychotically reacting analysands in a psychoanalytical practice, Karl-Josef Pazzini developed the idea that “through the Documenta, a formulation was found that shows how the structures, which so far were taken to be neurotic with their paranoid declivities, have altered their shape to such a degree that they become much more discernible and more stable through the neighbouring structures of perversion and psychosis. The relational mixture of discourses is changing. It defines the social ties and helps to configure individual structures.”6 Once again: Is
this “lack of visitor-friendliness” a failure, carelessness? In his book Interface Culture, Steven Johnson draws a parallel which I find remarkable and well worth remembering because of its “illuminative” quality: He transfers Samuel Taylor Coleridge’s comments on the architecture of the Gothic cathedral to what he calls the “new medium interface”: Coleridge said that the architecture of the Gothic cathedral was “infinity made imaginable”. The medieval mind was not able to imagine the entire infinity of the divine, but could grasp the majestic towers of Gothic cathedrals and their interior designs as a “heaven reduced to earthly standards.”7 And again: Is the Documenta11’s “lack of visitor-friendliness” – the lack of clear overviews, the time pressure and the incessant strain of having to take decisions, and the excessive demands – is this a failure, carelessness? Is it not rather an attempt to reduce the “new medium” to “earthly standards”? To make it visible, perceptible, imaginable? Complication made imaginable?
Inside. Desktop. Day. Maybe this is insane, in general. Maybe it’s just the subject. At first there was this fascination with the character of an investigator suffering from memory loss. And maybe an even bigger fascination with the way his story was told. A story that actually isn’t a story anymore; it can’t be a story because it attempts to put the recipient into a situation very similar to the one of the investigator suffering from memory loss. From this perspective there is no story; from these ever-changing new perspectives there is at best a whole bunch of stories. But for some reason there is this drive to put these stories in the right order or something similar, to make sense of them or detect meaning in them. This applies both to the recipient and to the tragic hero of the movie.
Inside. Database. Night. “Indeed, if after the death of God (Nietzsche), the end of grand Narratives of Enlightenment (Lyotard) and the arrival of the Web (Tim Berners-Lee) the world appears to us as an endless and unstructured collection of images, texts, and other data records, it is only appropriate that we will be moved to model it as a database.”8
&(/') B" ! 1
One could describe this as a form of delusion that is specific to the computer age: the world (formatted as) a database. Held together by nothing but a – paradoxically, highly rigid – technical construction, without context, a red thread or a thematic preference. The ultimate “anything goes”. Lev Manovich claims the database to be the current “key form of cultural expression”. Following Erwin Panofsky’s analysis of the linear perspective as a symbolic form of the Modern Age,9 he suggests thinking of the database as a new symbolic form, as the answer to the perspectival form. The term symbolic form, which goes back to Ernst Cassirer,10 denotes a fundamental epistemic arrangement. It could be understood as a sort of knowledge-management institution. With regards to the conceptional level of the term, it is to some extent comparable to what Michel Foucault calls a “historical a priori”: a presupposition for each era, which he discusses in his Archaeology of Knowledge. In contrast to Michel Foucault’s historical a priori, the Symbolic Form is intended to meet wider standards. Whereas Panofsky’s text refers to the time period from the Renaissance to modernity, Foucault takes into consideration diverse historical a priori of the same timeframe. Foucault calls this historical a priori, which defines time-specific norms and ways to approach the world, an archive: “The archive is first the law of what can be said, the system that governs the appearance of statements as unique events. [...] It is that which defines the mode of occurrence of the statement-thing; it is the system of its functioning.”11 Consequently, matters of discourse that are specific to certain epochs are not in the first place results of rational thinking, but the products of what counts as a possible statement and as thinkable at a certain moment in time. It is an attempt to grasp the enacted meta-discourse of an era: “Between the language (langue) that defines the system of constructing possible sentences, and the corpus that passively collects the words that are spoken, the archive defines a particular level: that of a practice […].”12 In The Order of Things,13 Foucault applies this methodology to the meta-discourses, the archives, of so-called modernity. Foucault focuses on describing the transformations in transitional moments from Renaissance to classicism and from classicism to modernity. With reference to the idea above, one could say that he analyses each era’s “new media” in correspondence to the possible discourse formations. In this same sense, I read Lev Manovich’s statement: namely, that the logic of the database is the current, prevailing symbolic form. The logic of the database is the current historical a priori which constitutes how and what we can see. The computer age supplies the epistemic structure of the database, which requires a certain form of delusion that is specific to the computer age. This again (I hope I have been able to show) can be demonstrated paradigmatically with the example of the detective who suffers from memory loss. In a similar way as Descartes, Leonard Shelby recedes to a point of indisputable assuredness: From the point of view of a
subject who is assured of its cogito, there is no differentiation between delusion and knowledge. On the one hand, this suggests an almost unlimited flexibility, which is ignorant of presuppositional perspectives; on the other hand, it entails an increasing subject-centrism, which ignores its secluded condition. While any ambitions to gain rational objectivity seem mostly to break down, individual claims to power do not recede at all (as the end of the film shows quite impressively). This problem concerning the subject and its perspectival flexibility is partly due to media formations. And vice versa, as can be demonstrated by the example of the www: The specific form of delusion that is characteristic of this era brings into being its archive’s structures and contents. Thereby, it is an enacted meta-discourse. But it is impossible to assume that the particular archive was simply a causally determined effect of those communication technologies. To a certain point it seems quite obvious that there is a striking structural link between the tendency towards modularisation, standardisation, globalisation – or, to put it bluntly, generalisation – and the radically normative and standardising way of thinking that is required for programming the universal machinery. On the other hand, the foundations for the development of the universal (or better: globalising) machinery had been established in an era that was shaped by the symbolic form of the perspective. And, as I highlighted earlier, the so-called “crisis of representation” was basically performed as a crisis of representational techniques in perspectival artworks. This was as early as around the last turn of the century but one. The representational mode of the perspective did not suffice as an appropriate medium to make visible all the aspects that artists wanted to make visible or that can be (re-)presented in general. As we know today, this problem has had a spectacularly productive effect, e.g. on twentieth century artworks.
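The contrast underlying the “logic of the database” can be given a minimal sketch – my own illustration, with invented records and field names, not anything proposed by Manovich or Foucault. A narrative fixes a single, authored order of traversal; a database, held together only by its rigid technical schema, leaves every ordering to the query:

# A minimal sketch (my own illustration) of narrative logic versus
# database logic: the same records, once as a fixed authored sequence,
# once as an unordered collection traversed by queries.

records = [
    {"id": 1, "kind": "image", "tag": "rosario"},
    {"id": 2, "kind": "text", "tag": "memento"},
    {"id": 3, "kind": "video", "tag": "rosario"},
]

# Narrative logic: one privileged order, read front to back.
narrative = [r["id"] for r in records]

# Database logic: any number of orderings, each produced by a query;
# none is privileged, and no "red thread" connects the records.
rosario_items = [r["id"] for r in records if r["tag"] == "rosario"]

print(narrative)       # [1, 2, 3]
print(rosario_items)   # [1, 3]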
Inside. Desktop. Day. An investigator suffering from memory loss – this quite tragic character fascinated me. Sergej Sergejewitsch Korsakoff once put it this way: “A man’s memory is all that stands between him and chaos.” So memory problems are quite a tragic thing already. But of all things, an investigator suffering from memory loss! An investigator is always searching for the truth. In the classic detective story it is often along the lines of “who is the murderer”. To come closer to the truth he gathers evidence, testimonies; he puts them in an order, combines them, reconstructs the course of events, puts together a puzzle that in the end will reveal the truth and allow the conviction of the murderer. An investigator collects – one could say – many little stories and arranges them to create one connecting story. Of course an investigator suffering from memory loss should have some trouble with this. Leonard Shelby can keep contents in his memory for a maximum of 15 minutes, after which they fade. It has to do with the transfer between short- and long-term memory,
&(/') B" ! 1 “... but it’s not amnesia!” he adds quickly, to make sure people don’t view him as crazy. He remembers everything up until his injury that lead to his “condition” – as he calls it. This says it all: “I just can’t make new memories. Everything fades.” Therefore, he needs to know his facts by heart. Fifteen minutes are at least enough time to write some notes, to take Polaroid photos of situations and sometimes comment on them with a few enlightening words: “my car”, “my hotel” or the name of the person depicted. It’s no different for the recipient of the movie. It takes five minutes at the most until one gets pulled away from a scene and a new episode starts. At least every other scene is in blackand-white. That’s helpful for more orientation. But only a little, because the coloured scenes are being shown in reverse order. Due to the fact that a coloured scene ends with the beginning of the preceding (coloured) scene, the recipient recognises at some point that the coloured scenes show the chronological order of the story in reverse. But one will only notice that when it’s already too late, when the first scenes that are chronologically in the future have already been forgotten. This kind of storyline causes the recipient to virtually wait for 113 minutes until the beginning is shown - if one looks at the black-and-white scenes as some type of previous history – while the (chronological) ending is being forgotten. And during this process, some strange kind of mimetic identification of the recipient with the problem of the main character is being triggered, which is probably the producer’s main intention. After every cut the recipient is being thrown into a scene that practically starts right in the middle of something and therefore one shares the tragic detective’s confusion about what’s going on – momentarily and in general.
Inside. Geneva, Conseil Européen pour la Recherche Nucléaire. Night. At the end of the 1990s, the operators of the search engine Lycos promised us “all the world’s knowledge” – just like the heaps of others who projected hopes of prosperity onto the “new media”, convinced that the www offered true and promising expectations towards the democratisation of knowledge and unlimited enlightenment: “All the world’s knowledge – simply point-and-click! […] Up-to-date information, unlimited communication and the complex knowledge of mankind – anytime for anybody.”
Lycos’ corporate logo highlights how the Internet had been understood – and continues to be understood – as an Enlightenment project: The solar eclipse – or rather, its predictability – is the stereotypical symbol for Enlightenment and for the triumphant celebration of reason’s victory over a superstitious medieval era, where a darkening of the light could still cause true obfuscation. The problem with understanding the www as the archive of the world’s knowledge becomes immediately apparent when looking at it from the linear perspectival point of view. Sometime in 1998, when I once again was lost and without an overview while surfing the Web, I asked myself whether there was a centre to the semantic vastness among all those websites: a centre of the Internet that could provide some kind of overview. At that time the www was still quite manageable, at a number of about 50 million sites. Accordingly, I typed “the centre of the Internet” into a search engine and – to my own surprise – the search was successful. I found “the official centre of the Internet” at www.uni-kassel.de/fb22, which was the point of projection for a representation of the www in the linear perspectival form. Students Oliver Schulte and Maik Timm of the University of Fine Arts Kassel had even installed a panoramic “view from the centre”. Still, the overview offered here remained an unsatisfactory experience. It is true that a real panoramic view was presented – thanks to the Quicktime-VR technique, which is based on Descartes’ mathematical coordinate system.14 Still, what one actually gets to see, even if in 3-D, remains only visual noise. In addition to that – and this is the problem if the world’s knowledge is to be scrutinised in accordance with a Cartesian methodology – it remains semantic noise. It is not clear and distinct at all. It is precisely not “totally countable, nor possible to have a complete overview of it, in order to […] be saved from any form of overlooking.”15 Likewise, the problem could not be solved with an approach that is, in a metaphorical sense, a linear perspectival one: namely, the attempt to arrange the semantic vastness of the www in an ontology (one that even claims inter-subjective validity). Web-catalogues like Yahoo (as opposed to true search engines like Google, Altavista, etc.) were based on manually monitored websites that are registered in huge ontologies, but they only represented a marginal part of the www.16 The website www.yahoo.com showed fourteen master categories that formed the surface level of
&(/') B" ! 1
an ontology harbouring around 20,000 categories in the systematic order of a word pyramid. It only touched approximately 0.4% of the actual content of the www. (At roughly 50 million sites, 0.4% comes to about 200,000 manually catalogued sites – some ten per category.) Because the subsumption of a particular site to a category occurred manually, this small percentage doesn’t seem surprising. Each site actually has to first be read, or at least be looked at. Accordingly, it is not striking that the quota for successful hits was usually very good. The one who was looking for information and the one who categorised it were in a similar social relationship to the one between a recipient and the producer of a linear perspectival depiction. Both recipient and producer draw judgements from the point of view of an individual who is conscious of self and ratio. This is their common sense. Still, this form of managing the world’s knowledge allowed only for a ridiculously small coverage, which raised the question whether this method might be appropriate in the case of the www at all. Mike Couzens hit home when he answered the question of whether one could describe the www through the metaphor of a library: “Possibly, yes, but if so: Firstly, the librarians have gone home. Secondly, all books are on the floor. Thirdly, the lights are off.”17 If the www is such a library in the dark, where books lie scattered on the floor and the people behind the information desk have gone home; and if there is no catalogue which could provide some kind of order or orientation – Couzens must have forgotten to mention this – then it is no surprise that readers are developing psychotic personality patterns or that they develop multiple identities here. After all, there is a significant increase in dissociative identity disorders. (Just as fundamentalisms appear as a form of neurotic counter-reactions…) There is a question concerning this medium, which has been called into being through formal regulations, i.e. the HyperTextTransferProtocol (http) and the HyperTextMarkupLanguage (html) developed by Tim Berners-Lee at CERN. That is, can this medium be appropriately grasped at all by utilising the definition of media that has usually been understood as a mainly technical term? A medium, understood as a tool or engine, can always be turned off and users can simply choose not to use it, etc. In contrast to that, the www cannot simply be turned off. The www must certainly be seen as something that is the basis for – and at the same time
technical fundament of – what at least triggers the process of transgressing borders with regards to markets, cultures, states and identities – labelled with the term “globalisation”. In this case, it is possibly more appropriate to talk of a medium in the same sense as one talks of fish living in the medium of water; that is, as in physics or chemistry, where the term medium is understood as a “transmitter” or “substance” in which particular processes take place (air would be a transmitter of sound waves or the substance in which specific chemical processes occur). The www and the related process of vanishing borders in the context of markets, cultures, states and identities may be understood as a new medium (transmitter or substance) for and of psychological and social processes. Jacques Derrida introduced a name for this process: “worldisation, the worldwidisation of the world”. He refuses to make use of the term “globalisation”; he wants to keep using the French term “in order to maintain the connection to ‘world’ [monde, welt, mundus], which is neither cosmos, nor globe, nor universe.”18 It may be pure chance that it is possible to derive the acronym www out of the German term “Weltweit-Werden” (which is not possible in English, “worldisation”, or French, “mondialisation”); but maybe these things are in fact not as intertwined as I see them. Thus, all those conjectures based on this and on the particular insanity of the “computer age” might possibly be wrong. But: In German, World, Welt, is a derivative of the Germanic term “weralt”, a particular formation that comes from “vir” – man – and “age”, or “(period of) human existence”, hence meaning “age or life of man”. See also: German “Werwolf”: the human being who periodically turns into a wolf. From this perspective, the “world” means something different from the cosmos, the globe, or the universe. And it refers to something different from a sum total of transcontinental trading flows and monetary pipelines. In doing this, I also take history into consideration – and (hi)stories, language, Culture, cultures, social bondings, discourses, traditions, generations, age…. Human age, weralt… “age” not as an era or epoch, but in the sense its etymological derivation communicates, namely, as “growing”, “changing” or, better: as something that is, because it grew. In its broadest meaning, “culture” would be another word for it. Then again: Currently, “culture” only exists in the plural form – and this exactly is the complexity of worldwidisation. On the global level, this leads towards this “destinerrance”, towards “destination errance.”19
Inside. Desktop. Day. “Sammy Jenkins had the same problem. But he really had no system. He wrote himself a ridiculous amount of notes, but he got them all mixed up.” On the back of the investigator’s left hand there is a tattoo that says “Remember Sammy Jenkins”. Sammy was his first big case back then, before the incident that had caused his memory loss. Leonard Shelby worked as an investigator at an insurance company. He had to examine claims to make
&(/') B" ! 1 sure there was no insurance fraud involved: “Mr. Samuel R. Jenkins ... strangest case ever. You know, the guy is a 58-year-old semi-retired accountant. He and his wife have been in this accident, nothing too serious, but he’s acting funny ... he can’t get a handle on what’s going on.” The case of Sammy Jenkins also helps the storyline of the movie to offer some type of objective outside-perspective on Leonard Shelby’s mental “condition”. Because otherwise, this gets shown only through the hereby fragmented first-person-perspective of an investigator suffering from memory loss. “I guess I tell people about Sammy to help them understand; Sammy’s story helps me understand my own situation.” The hint on the back of his hand is part of his specifically developed knowledge management system. It starts a whole chain of conditioned movement, a search for other tattoos – wrist: “THE FACTS”, underarm: “FACT I: male”, “FACT II: white” and so on. The reconstruction of the current case always starts with the meta-methodological demand “Remember Sammy Jenkins”. Personally, “the case of Sammy Jenkins” reminded me of the www: At the end he had “a ridiculous amount of notes, but he got them all mixed up.” Now we have “all the world’s knowledge”, because we took “endless amounts of notes” within an easy-to-handle medium that can be easily controlled via mouse-clicks. It should be evident that an investigator suffering from memory loss needs quite a polished knowledge management system. “You really need a system if you gonna make it work,” Shelby lectures us. “You kinda learn to trust your own handwriting [I associated: Is this not Descartes? Or Kant? – Sapere aude! – Have courage to use your own handwriting?] That becomes an important part of your life. You write yourself notes and where you put your notes, that also becomes really important. [Descartes, 3rd rule of the method.] You need like a jacket that’s got like six pockets in it, particular pockets for particular things. You just kinda learn to know where things go and how the system works, and you have to be wary of other people writing stuff for you.” [Kant: “Nonage is the inability to use one’s own understanding without another’s guidance.”] 20 They might write something “that’s not gonna make sense or that’s gonna lead you astray. [Descartes: “In order to find truth in all discernible things, at first all prejudices have to be abandoned; that is, one has to be careful and not trust one’s former views before examining them and identifying them as truthful.”] 21 ... I mean, I don’t know how there’s people trying to take advantage of somebody with this condition.” – I can’t help it, maybe it has to do with the “condition” of my desktop: “I shall suppose, then, not that God, who is very good and the sovereign source of truth, but that a certain evil genius, no less wily and deceitful than powerful, has employed all his ingenuity to deceive me […].”22 Once again Descartes on his quest for the perfect method to think right and to find truth in the sciences – as I said before, it might be due to over-interpretation that the knowledge management system of this investigator suffering from memory loss made me think this way. But to me it’s not about movie interpretations and to see through Christopher Nolan’s game. It’s quite unimportant whether Christopher Nolan thought of René Descartes when
he invented the investigator suffering from memory loss; whether he intended to criticise, mock or caricature the modern scientist or his methodical approach. I look at this movie independently of the assumed intentions of the producer, in order to find a knowledge management system between the lines. Or, to be more precise: to describe its knowledge production system, which follows a different method – maybe even the same method, or one that can be deduced from it, but with different consequences than were intended by Descartes at that time. Leonard Shelby, the investigator suffering from memory loss, functions as a test object. Just imagine his “condition” – tentatively “in sensibili experimento” – applied to a different type of truth seeker: a scientist who repeatedly loses his knowledge or mind….
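Read as a protocol rather than a plot device, Shelby’s rules – facts are append-only, every note carries provenance, only self-authored notes are trusted – admit a minimal sketch. This is a playful illustration of my own, not anything proposed in the film or by Descartes:

# A playful sketch (my own illustration) of Shelby's knowledge
# management rules: facts are append-only (a tattoo cannot be revised,
# only added to), every note carries provenance, and only self-authored
# notes are trusted.

from dataclasses import dataclass

@dataclass(frozen=True)      # immutable, like a tattoo
class Note:
    text: str
    author: str

archive = []                 # the body as an append-only archive

def record(text, author):
    archive.append(Note(text, author))

def trusted_facts(me):
    # "you have to be wary of other people writing stuff for you"
    return [n.text for n in archive if n.author == me]

record("FACT I: male", "leonard")
record("FACT II: white", "leonard")
record("trust him, he is your friend", "someone else")   # untrusted
print(trusted_facts("leonard"))    # ['FACT I: male', 'FACT II: white']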
Inside. Neuburg, Parlour. Day. The Cartesian space is based on the depiction technology of linear perspective in several respects. It should seem immediately clear that analytical geometry is marked by virtually the opposite process to what happens during the construction of a linear perspectival depiction. Nevertheless, when René Descartes spent November 1619 in his warm parlour close to Ulm, he invented nothing less than the meta-theory of a new communal assuredness. The projection point of the information processing of the linear perspective shifted – from the spectator’s eye a few centimetres further back, deeper into his head – and thereby became, in a way, the universal projection point of any possible mode of thinking. With the self-assuredness of the “cogito”, the anonymous mass communication that was now beginning found its methodological basis. Mass communication itself had its technical and economic requirements met through book printing and the free market economy. Through the mass production of books, inter-subjective communication about the environment became technically manageable without direct interaction between an author and all his readers. This communication, accompanied by the accumulation of knowledge, is possible only if author and reader meet at an agreed-upon projection point of the rational ME. Interestingly, Descartes published his thoughts anonymously (as if he intended to foreclose the database principle by filling in the variable “author” not with his name, but with “du Perron”). Just as daring as this anticipation – only understandable in hindsight – was Descartes’ assumption – also only understandable in hindsight – of something Wolfgang Ernst believed to be an essential need for being able to handle the rumouring archives of worldwidisation: the “future culture technique” of an err-ability. Descartes’ incredible boldness (which, as Derrida proposes, we “may not interpret anymore as being utterly bold, since we […] became too familiar already with the pattern”) was to take thinking to a place beyond the opposition of knowledge versus delusion; beyond the opposition of the community versus the individual. Descartes’ “cogito” that is found through doubting is even true for a state of delusion: Even if my thoughts are completely mad, “cogito,
sum” would still apply. “The certainty thus attained need not be sheltered from an emprisoned madness, for it is attained and ascertained within madness itself. It is valid – even if I am mad – a supreme self-confidence that seems to require neither the exclusion nor the circumventing of madness.”23 Descartes retreats to a point of invulnerable assuredness, which allows erring and insanity to be merely one among many possible ways of thinking. But – in exactly this sense, and also in its opposite sense – he also retreats to a point where most aspirations of globality have their roots: namely, to think totality by sidestepping it. “By escaping it: that is to say, by exceeding the totality, which – within existence – is possible only in the direction of infinity or nothingness.”24 Descartes put this mode of thinking into methodological words; it had already been anticipated through colonialism and the linear perspective in earlier centuries. It enables practising a cartography of the whole – in the form of a globe or global matter. This practice happens from an impossible point of view, because the projection point – in existence – is the vanishing point – that is, in infinity: it is the point of view created in accordance with perspectival rules, but which now takes the bird’s eye view or God’s eye view.25 Successive readers of René Descartes are to be blamed for the fact that all efforts to get hold of wholeness were obviously tied to a particular understanding of communication: namely, the repetition of the author’s information processing by the reader. They seem to have overlooked that the social accumulation of knowledge based on linear perspectival principles entails an understanding of communication that gives birth to the myth that knowledge is like a book, a commodity, which can easily be handed over to others. “Communication appears in analogy to trading goods as an exchange of information.”26 Only a myth like this made it possible to formulate assumptions such as that “all the world’s knowledge” lies in the Internet.
Inside. Desktop. Day.
“It’s just an anonymous room. There’s nothing in the drawers. But you look anyway. Nothing except the Gideon Bible which I of course read religiously ... haha ... hm ....” It’s not quite clear how Leonard Shelby obtains assuredness. He lives in hotel rooms. He can only recognise his room when looking at his handwriting on the notes spread all over the place. He becomes aware of his “condition” once he looks at his methodical meta-tattoo on the back of his hand: “Remember Sammy Jenkins”. As described before, that’s the start of his methodical approach. He does it differently from Sammy Jenkins, who got his “ridiculous amount of notes” all mixed up. Shelby has a “real system […] If one has a piece of information that’s really important, the solution might be to write it on your body instead of on a piece of paper. That way you can always take notes.” The phone rings. Shelby picks up. And, even though he doesn’t know or doesn’t recognise the caller, he talks about Sammy Jenkins – while he takes care of his knowledge management system by looking at
his tattoos and preparing new ones: “Sammy had no drive. No reason to make it work. Me? Yeah, I got a reason.” – “John G. raped and murdered my wife” is the “fact” tattooed right across his chest that determines Shelby’s life. Right underneath it reads: “Find him and kill him.” While he’s on the phone Shelby examines the sequence of tattooed “facts” that resulted from his investigations: “1. male, 2. white, 3. First name: John (or James), 4. Last name: G____, …” The camera zooms into a close-up of his body, follows his hands that lift a bandage off a fresh tattoo here and there, shave a thigh to prepare a new place for the archiving of new “facts” – until he stops telling his story abruptly due to a change in perspective: under a bandage that covered a fresh tattoo it says: “Never answer the phone!”
Outside. Florence, Baptistry. Day.
It must have been an anticipation of a “future culture technique” such as the one Wolfgang Ernst wrote about in his book “The Rumor of the Archives”. The reader – for example – would have changed his point of view constantly; he would have been able to change perspectives and look at things in different lights. He would have needed something like an err-ability: according to Wolfgang Ernst, “to get lost in this data-jungle is the imperative of a familiar pedagogy [….] But to learn how to err in the labyrinth of this un/order is the option of a future culture technique, beyond the archives and as a journey, whereby the destination is yet to be discovered – destinerrance in Derrida’s terms.”27 The present culture technique – an agreement between reader and author to meet at a common viewpoint – has a long tradition. Even though they meet there consecutively, it enables them to comprehend what the other one sees. In Erwin Panofsky’s view, this goes back to the invention of linear perspective in the Renaissance.28 He understands linear perspective as the symbolic form of modern times and thereby differentiates the procedure from a simple depiction technique or a model of mental perception. Moreover, it was at the time a new communication technology, primarily of a visual kind. Linear perspective made it possible to repeat experiences that unknown observers had made somehow and somewhere. It makes it possible to copy visual information processing and thereby “to programme the viewpoint and perspective of other people.” According to Michael Giesecke, it was in the 14th century that “the question of how to generalise individual perception and how to put individual knowledge at people’s disposal became of great importance. And the painters and architects that dealt with perspective constructions provided the best answers.”29 One of these painters and architects was Filippo Brunelleschi. His perspective image of the Baptistry in Florence appeared so overwhelming to him that he suggested an experiment for intersubjective verification. This turned out to have immense consequences: the observer should stand directly in front of the Baptistry and compare his view with the view on Brunelleschi’s painted panel. The observer should first look at the original Baptistry through a small hole in the centre of
the panel that he held upside down between himself and the Baptistry, and then he should hold a mirror between the picture and the original to see the painting instead of the Baptistry.30 In this way, the subjective view became mobile and generalisable. Anyone who took the viewpoint of the painter – or better: the perspectival viewpoint of the architect in front of the panel – wherever it might be located, could see the Baptistry as Brunelleschi did, because the viewpoint of the architect (the peephole in the panel) is communicated through the painting itself, due to describable rules of construction which are independent of the actual picture. “What kind of common experience is this,” asks Giesecke, “and not only an experience, but an assuredness that can be experimentally confirmed?!”31
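In modern notation – a gloss added here, not part of the original argument – these “describable rules of construction” can be stated compactly. Central projection maps a scene point $(X, Y, Z)$, seen from a viewpoint at the origin, onto a picture plane at distance $d$:

\[
x = d\,\frac{X}{Z}, \qquad y = d\,\frac{Y}{Z}.
\]

The rule is parameterised only by the viewpoint and the plane, not by any particular picture – which is why anyone who reoccupies the peephole position sees painting and building coincide, the assuredness Brunelleschi’s experiment confirms.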
Inside. Desktop. Twilight.
Leonard Shelby’s knowledge management system has its flaws. The recipient of the movie might fall prey to the same flaws – at least if she or he is waiting for a story at the cinema. “After the novel, and subsequently cinema privileged narrative as the key form of cultural expression of the modern age, the computer age introduces its correlate — database,”32 states Lev Manovich. He claims a new symbolic form. We can’t expect stories anymore. And I suspect that the movie “Memento” might be designed exactly this way in order to show the novel- and Hollywood-spoiled recipients their habits and corresponding expectations – maybe in a deconstructive way. I deconstructed the movie insofar as I cut it into 44 pieces and placed them in a database as single records. As perforation for the cuts I used the points where the colour mode changes (from black-and-white to colour and vice versa). Consequently, I ended up with 21 black-and-white scenes, 22 colour scenes, and one that starts out in black-and-white but turns to colour approximately in the middle, when Leonard Shelby is looking at a slowly developing Polaroid picture showing the corpse of John G. This scene is at the same time the last one of the movie as well as the first one – the beginning that the recipient awaits anxiously for 113 minutes (if one regards the black-and-white scenes as some type of previous history). This becomes evident abruptly once
certain requests, so-called SQL queries, are sent to my database.33 But depending on how the requests are articulated in the Structured Query Language, there is always a different answer to the question of who the murderer is. Astonishingly, this riddle is not merely made harder by the reverse order of events, which seems like piecing a puzzle together to reveal the truth. Christopher Nolan demonstrates quite convincingly that, given the decontextualisation of data (the movie scenes) that is inherent to the database principle, the recontextualisation of data is directly dependent on the order, and if necessary on the selection – in other words, on the queries sent to the database. Actually it’s not even recontextualisation but contextualisation, since exactly that can’t be ensured – due to the main character’s memory loss as well as the recipient’s disturbance of memory caused by the cuts. For the longest part of the movie the recipient will look at Leonard Shelby as the victim of the story. When the database query is changed, however – and its construction principle only becomes evident in the last scene – he emerges as an insane murderer. Over and over again – no matter what the pathological or non-pathological reasons – he suggests stories to himself that imply motives for new murders of otherwise innocent victims. The alleged puzzle becomes a collage that is subject to totally different construction principles – and above all it does not offer only one correct solution. Nolan’s experiment – to demonstrate the database principle in a, paradoxically, (formally) linear story – inspired me to apply the same method to the production of a scientific, formally linear, text. I wonder whether in this way the “condition” of my desktop could be temporarily rendered into the form of a symbolic order, whether it could be transformed into knowledge; or whether my loose conjectures could be transformed into a general, intersubjectively understandable form. In doing so, it should be made clear that it could naturally have been a different type of form – that’s what it’s all about in the end. But not just any form: it would have to be a specific form, insofar as it would have to show explicitly that it could also have been another one. That’s what I’d be most interested in: to put the database into a form. The database is amorphous; it has no particular form but can be put into all types of forms. That’s why the database “in itself” – meaning without a concrete, detailed query – can’t be a story, a narration or a history. Not even in the sense of a scientific course of arguments – at least not since the paradigm of the self-assured rational scientific subject of early modern times became virulent. It virtually lies in the character of the database not to be a story. A database has no beginning and no ending. It has no theme, no “story”, let alone a “moral”, no order – neither a defined series of data or of the objects thereby presented, nor words or sentences that would render the objects or data presentable, etc.: everything inherent to a story – as a narration or history – is missing. Anything that could be defined as the work of an author is missing. The author as someone who reduces complexity is absent, and therefore the standpoint of the author is missing as well. That’s why the principle of linear perspective – as an agreement on a common standpoint of both painter and observer, or author and reader – fails.
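To make the experiment described above concrete, here is a minimal sketch of what such queries might look like – rendered in Python with SQLite, since the text does not give the actual schema; the table layout, column names and toy data are assumptions for illustration, not the author’s database:

```python
import sqlite3

# A toy stand-in for the 44 scene records described above. The schema is
# invented for illustration; the text does not specify the actual columns.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE scenes (
        scene_id INTEGER PRIMARY KEY,
        colour_mode TEXT,              -- 'bw' or 'colour'
        screening_position INTEGER,    -- order in which the film shows them
        story_position INTEGER         -- chronological order of events
    )
""")
# The colour scenes famously run in reverse story order; the strict
# alternation here is a simplification of the film's actual structure.
rows = [(i, "bw" if i % 2 else "colour", i, 45 - i) for i in range(1, 45)]
db.executemany("INSERT INTO scenes VALUES (?, ?, ?, ?)", rows)

# One query reassembles the film as screened ...
as_screened = db.execute(
    "SELECT scene_id FROM scenes ORDER BY screening_position").fetchall()

# ... another recontextualises the same records chronologically ...
as_story = db.execute(
    "SELECT scene_id FROM scenes ORDER BY story_position").fetchall()

# ... and a third selects only the black-and-white 'previous history'.
previous_history = db.execute(
    "SELECT scene_id FROM scenes WHERE colour_mode = 'bw'").fetchall()
```

Each ORDER BY and WHERE clause delivers a differently contextualised “film” from the same stored records – which is precisely the point made above: the answer to who the murderer is depends on the query, not on the data alone.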
Would I still succeed, as an author (who in a way pretends not to be one), in rendering the “condition” of my desktop into text – a formally linear story that could be communicated as knowledge, and therefore can’t be dismissed as insane? Would the transfer of Christopher Nolan’s montage technique (and his strange kind of mimetic identification of the recipient with the main actor – in this case the author) to a knowledge-generating text succeed at generating knowledge? Will I have accomplished phrasing my conjectures in such a way that they will have been knowledge? A process-like form of knowledge at least?
Translated from the German with much help from Juliane Oepen, Nadine Ott and Anna Mayrhuber.
Notes
1. „Dort oben ist ein tiefes Schwarz auf der einen Seite und ein helles Blau zur Erde hin. Sie sind also zwischen den Dingen. [...] Sie sind in einer Zwischensituation, haben eine Distanz zu allem gewonnen. Und das ist eben ein ganz anderes Gefühl. Das ist das Weltraumgefühl.“ See: Litz, Christian. 1999. Ein Gefühl, das du auf der Erde nie haben wirst. Warum der Astronaut Ulrich Walter jedem empfehlen würde, einen Kurztrip ins All zu buchen. http://www.brandeins.de/magazin/archiv/1999/ausgabe_02/leitbilder/artikel1.html. Accessed 3 August 2009.
2. http://emagazine.credit-suisse.com/article/index.cfm?fuseaction=OpenArticle&aoid=12785&lang=EN. Accessed 07/01/2009.
3. Manovich, Lev: Database as a Symbolic Form.
4. Billmayer, Franz. 2002. Veränderungen, Übergänge, Umbrüche … Überlegungen zur Documenta11 in Kassel. BDK-Mitteilungen, Nr. 4, pp. 14-15, p. 15.
5. Ibid.
6. Pazzini, Karl-Josef. 2002. Documenta11 – Inszenierung von psychotischer Struktur? Lecture at the congress “Produktionen (in) der Psychose” of the Assoziation für die Freudsche Psychoanalyse, Burghölzli, 21 September 2002, unpublished manuscript.
7. Johnson, Steven. 1997. Interface Culture: How New Technology Transforms the Way We Create & Communicate. San Francisco: HarperEdge/Harper, p. 54.
8. Manovich, Database, ibid.
9. Compare: Panofsky, Erwin. 1927. Die Perspektive als ‚symbolische Form‘. Berlin: Spieß; and Cassirer, Ernst. 1924. Philosophie der Symbolischen Formen. Darmstadt: WBG.
10. Foucault, Michel. 1972. The Archaeology of Knowledge & the Discourse on Language. New York: Harper Torchbooks, p. 129.
11. Ibid., p. 130.
12. Foucault, Michel. 1970. The Order of Things: An Archaeology of the Human Sciences. New York: Vintage Books.
13. It is sincerely recommended to visit the “centre of the Internet” in the www and to use the navigation possibilities at the site. One is not only able to navigate left-right and up-down movements, but is even able to zoom in, in order to register details: see http://www.uni-kassel.de/fb22/home/candela2/canter/main.html. Accessed 3 June 2004.
14. Descartes, Discourse (4th rule).
15. This and the following information is based on research on Internet search engines and Web catalogues which Steve Steinberg published in the magazine Wired in 1996 (see: Steinberg, Steve G.: “Seek and ye shall find (Maybe)”. In: Wired, No. 4.05 (1996), pp. 108-114, 174-182). It is hopelessly outdated by now.
16. Mike Couzens, manager at Cisco Systems, during his inauguration speech at the Online Educa Berlin 2000. Thanks to Joeran Muss-Merholtz for the reference.
17. Derrida, Jacques. 2001. L’université sans condition. Paris: Galilée, p. 51.
18. Derrida expands profoundly on the concept of “destiny” and its errors, “destinerrance”, in: Derrida, Jacques. 1987. The Postcard: From Socrates to Freud and Beyond. The University of Chicago Press.
20. Kant, Immanuel. 1783. Beantwortung der Frage: Was ist Aufklärung? In: Was ist Aufklärung? Aufsätze zu Geschichte und Philosophie, ed. Zehbe, Jürgen, 55-61. Göttingen: Vandenhoeck & Ruprecht.
21. Descartes, René. 1637. Discourse on the Method of Rightly Conducting the Reason, and Seeking Truth in the Sciences. http://www.literature.org/authors/descartes-rene/reason-discourse. Accessed 3 August 2009.
22. Ibid.
23. Derrida, Jacques. 1978. Cogito and the History of Madness. In: Writing and Difference. The University of Chicago Press, p. 67.
24. Ibid., p. 68.
25. Schmeiser, Leonhard. 2002. Die Erfindung der Zentralperspektive und die Entstehung der neuzeitlichen Wissenschaft. München: Fink, p. 79.
26. Giesecke, ibid., p. 108.
27. „Sich im Datenwald dieser Un/ordnung nicht zu verlieren ist der Imperativ einer vertrauten Pädagogik [...] Doch sich in einem Labyrinth verirren zu lernen ist die Option einer künftigen Kulturtechnik, jenseits der Archive und als Form einer Reise, deren Ziel man erst kennenlernen muß – destinerrance im Sinne Derridas.“ See: Ernst, Wolfgang. 2002. Das Rumoren der Archive. Ordnung aus Unordnung. Berlin: Merve, p. 131.
28. Panofsky, Erwin. 1927. Die Perspektive als ‚symbolische Form‘. Berlin: Spieß.
29. Giesecke, Michael. 1998. Der Verlust der zentralen Perspektive und die Renaissance der Multimedialität. In: Kemp, W. et al. (eds.): Vorträge aus dem Warburg-Haus. Berlin, pp. 85-116, p. 103.
30. Compare: Pazzini, Karl-Josef. 1992. Bilder und Bildung. Vom Bild zum Abbild bis zum Wiederauftauchen der Bilder. Münster: Lit, p. 58.
31. Giesecke, ibid., p. 106.
32. Manovich, Database, ibid.
Works Cited
Billmayer, Franz. 2002. Veränderungen, Übergänge, Umbrüche … Überlegungen zur Documenta11 in Kassel. BDK-Mitteilungen, Nr. 4, pp. 14-15.
Cassirer, Ernst. 1924. Philosophie der Symbolischen Formen. Darmstadt: WBG.
Derrida, Jacques. 1978. Cogito and the History of Madness. In: Writing and Difference. The University of Chicago Press.
Derrida, Jacques. 1987. The Postcard: From Socrates to Freud and Beyond. The University of Chicago Press.
Derrida, Jacques. 2001. L’université sans condition. Paris: Galilée.
Descartes, René. 1637. Discourse on the Method of Rightly Conducting the Reason, and Seeking Truth in the Sciences. http://www.literature.org/authors/descartes-rene/reason-discourse. Accessed 3 August 2009.
Ernst, Wolfgang. 2002. Das Rumoren der Archive. Ordnung aus Unordnung. Berlin: Merve.
http://emagazine.credit-suisse.com/article/index.cfm?fuseaction=OpenArticle&aoid=12785&lang=EN. Accessed 07/01/2009.
Foucault, Michel. 1970. The Order of Things: An Archaeology of the Human Sciences. New York: Vintage Books.
Foucault, Michel. 1972. The Archaeology of Knowledge & the Discourse on Language. New York: Harper Torchbooks.
Giesecke, Michael. 1998. Der Verlust der zentralen Perspektive und die Renaissance der Multimedialität. In: Kemp, W. et al. (eds.): Vorträge aus dem Warburg-Haus. Berlin, pp. 85-116.
Johnson, Steven. 1997. Interface Culture: How New Technology Transforms the Way We Create & Communicate. San Francisco: HarperEdge/Harper.
Kant, Immanuel. 1783. Beantwortung der Frage: Was ist Aufklärung? In: Was ist Aufklärung? Aufsätze zu Geschichte und Philosophie, ed. Zehbe, Jürgen, 55-61. Göttingen: Vandenhoeck & Ruprecht.
Litz, Christian. 1999. Ein Gefühl, das du auf der Erde nie haben wirst. Warum der Astronaut Ulrich Walter jedem empfehlen würde, einen Kurztrip ins All zu buchen. http://www.brandeins.de/magazin/archiv/1999/ausgabe_02/leitbilder/artikel1.html. Accessed 3 August 2009.
Manovich, Lev. 2001. Database as a Symbolic Form. http://www.manovich.net/docs/database.rtf. Accessed 3 August 2009. Compare: Manovich, Lev. 2001. The Language of New Media. Cambridge/London: MIT Press.
Nolan, Christopher. 2001. Script of “Memento”. http://www.christophernolan.net/files/memento-script.pdf. Accessed 3 August 2009 (for educational purposes only).
Panofsky, Erwin. 1927. Die Perspektive als ‚symbolische Form‘. Berlin: Spieß.
Pazzini, Karl-Josef. 1992. Bilder und Bildung. Vom Bild zum Abbild bis zum Wiederauftauchen der Bilder. Münster: Lit.
Pazzini, Karl-Josef. 2002. Documenta11 – Inszenierung von psychotischer Struktur? Lecture at the congress “Produktionen (in) der Psychose” of the Assoziation für die Freudsche Psychoanalyse, Burghölzli, 21 September 2002, unpublished manuscript.
Schmeiser, Leonhard. 2002. Die Erfindung der Zentralperspektive und die Entstehung der neuzeitlichen Wissenschaft. München: Fink.
Steinberg, Steve G. 1996. Seek and ye shall find (Maybe). In: Wired, No. 4.05, May 1996, pp. 108-114, 174-182.
Regressive and Reflexive Mashups in Sampling Culture
Eduardo Navas
During the first decade of the twenty-first century, sampling is practiced in new media culture whenever software users – creative industry professionals as well as average consumers – apply cut/copy & paste in diverse software applications. For professionals this could mean 3-D modelling software such as Maya (used to develop animations in films like Spiderman or Lord of the Rings);1 for average persons it could mean Microsoft Word, often used to write texts like this one. Cut/copy & paste, which is in essence a common form of sampling, is a vital new media feature in the development of Remix. In Web 2.0 applications cut/copy & paste is a necessary element to develop mashups; yet the cultural model of mashups is not limited to software, but spans across media. Mashups actually have roots in sampling principles that became apparent and popular in music around the seventies with the growing popularity of music remixes in disco and hip hop culture, and even though mashups are founded on principles initially explored in music, they are not straightforward remixes if we think of remixes as allegories. This is important to remember because, at first, Remix appears to extend repetition of content and form in media in terms of mass escapism; the argument in this paper, however, is that when mashups move beyond basic remix principles, a constructive rupture develops that shows possibilities for new forms of cultural production that question standard commercial practice. The following examination aims to demonstrate the reasons why mashups are not always remixes, as defined in music, and the importance of such differences in media culture when searching for new forms of critical thinking. I will first briefly define mashups and Remix and examine mashups’ history in music, then briefly consider them in other media, and subsequently examine in detail their usage in Web applications. This will make clear the relationship of mashups to Remix at large, and will enhance our understanding of sampling as a critical practice in Remix and Critical Theory.
Mashups Defined
There are two types of mashups, which are defined by their functionality. The first mashup is regressive; it is common in music and is often used to promote two or more previously released songs. Popular mashups in this category often juxtapose songs by pop acts like Christina Aguilera with the Strokes, or Madonna and the Sex Pistols.2 The second mashup is reflexive and is usually found outside of music, most commonly in Web 2.0 applications. Some examples of this genre include news feed remixes as well as maps with specific local information. This second form
of mashup uses samples from two or more elements to access specific information more efficiently, thereby taking them beyond their initial possibilities. While the Regressive Mashup can be commonly understood as a remix in terms of its initial stages in music, the Reflexive Mashup is different. I define it as a Regenerative Remix: a recombination of content and form that opens the space for Remix to become a specific discourse intimately linked with new media culture. The Regenerative Remix can only take place when constant change is implemented as an elemental part of communication, while also creating archives. This implementation, at a material level, both mirrors and redefines culture itself as a discourse of constant change. But to move further with this argument, Remix must be defined in direct relation to modernism and postmodernism, because it is at the crux of these two concepts that Remix was first practiced popularly as an activity with a proper name.
Remix Defined
Generally speaking, remix culture can be defined as a global activity consisting of the creative and efficient exchange of information made possible by digital technologies. Remix, as discourse, is supported by the practice of cut/copy and paste.3 The concept of Remix that informs remix culture derives from the model of music remixes produced around the late 1960’s and early 1970’s in New York City, with roots in the music of Jamaica.4 During the first decade of the twenty-first century, Remix (the activity of taking samples from pre-existing materials to combine them into new forms according to personal taste) has been ubiquitous in art, music and culture at large; it plays a vital role in mass communication, especially in new media. To understand Remix as a cultural phenomenon, we must first define it in music. A music remix, in general, is a reinterpretation of a pre-existing song, meaning that the “spectacular aura” of the original will be dominant in the remixed version.5 Some of the most challenging remixes can question this generalisation, but based on its history, it can be stated that there are three basic types of remixes. The first remix is extended: it is a longer version of the original composition containing long instrumental sections to make it more mixable for the club DJ. The first known disco song to be extended to ten minutes is “Ten Percent” by Double Exposure, remixed by Walter Gibbons in 1976.6 The second remix is selective: it consists of adding or subtracting material from the original composition. This type of remix made DJs popular producers in the music mainstream during the 1980’s. One of the most successful selective remixes is Eric B. & Rakim’s “Paid in Full”, remixed by Coldcut in 1987.7 In this case Coldcut produced two remixes. The most popular version not only extends the original recording, following the tradition of the club mix (like Gibbons), but it also contains new sections as well as new sounds, while others were subtracted, always keeping the “essence” or “spectacular aura” of the composition intact. The third remix is reflexive: it allegorises and extends the aesthetic of sampling, where the remixed version challenges the “spectacular aura” of the original and claims autonomy even when it carries the name of the original; material is added or deleted, but the original tracks are largely left intact to be recognisable. An example of this is Mad Professor’s famous dub/trip hop album No Protection, which is a remix of Massive Attack’s Protection. In this case both albums, the original and the remixed versions, are validated on the quality of independent production, yet the remixed version is completely dependent on Massive’s original production for validation.8 The fact that both albums were released in the same year, 1994, further complicates Mad Professor’s allegory. This complexity lies in the fact that Mad Professor’s production is part of the tradition of Jamaica’s dub, where the term “version” was often used to refer to “remixes”, which due to their extensive manipulation in the studio pushed for autonomy. This was paradoxically
allegorical, meaning that, while dub recordings were certainly derivative works, due to the extensive remixing of material, they took on an identity of their own.9
The Allegorical Impulse in Remix
Now that Remix has been defined, I will contextualise the theory of allegory by art critic and theorist Craig Owens in direct relation to the three basic forms of Remix, in order to evaluate how a fourth form emerges in areas outside of music. I call this fourth form the Regenerative Remix. The remix is always allegorical following the postmodern theories of Owens, who argues that in postmodernism a deconstruction—a transparent awareness of the history and politics behind the object of art—is always made present as a “preoccupation with reading.”10 The object of contemplation, in our case Remix (as discourse), depends on recognition (reading) of a pre-existing text (or cultural code). For Owens, the audience is always expected to see within the work of art its history. This was not so in early modernism, where the work of art suspended its historical code, and the reader could not be held responsible for acknowledging the politics that made the object of art “art.”11 Updating Owens’s theory, I argue that in terms of discourse, postmodernism (metaphorically speaking) remixed modernism to expose how art is defined by ideologies and histories that are constantly revised. The contemporary artwork, as well as any media product, is a conceptual
and formal collage of previous ideologies, critical philosophies, and formal artistic investigations extended to new media. In Remix as discourse, allegory is often deconstructed in more advanced remixes following the Reflexive Remix, and quickly moves to be an exercise that at times leads to a “remix” in which the only thing that is recognisable from the original is the title. Two examples from music culture are Underworld’s remixes of “Born Slippy”, released in 1996,12 and Kraftwerk’s remixes of their techno classic “Tour de France”, released in 2003.13 Both remix projects are produced by the original authors. Some of their remixes are completely different compositions that only bear the title of the supposed remixed track. At this moment Remix becomes discourse: its principles are at play as conceptual strategies. Kraftwerk and Underworld use Remix as a concept, as a cultural framework rather than a material practice. These examples demonstrate that a remix will always rely on the authority of the original composition, whether in the form of actual samples or in the form of reference (citation), as demonstrated with Kraftwerk and Underworld. The remix is in the end a re-mix—that is, a rearrangement of something already recognisable; it functions on a meta-level. This implies that the originality of the remix is non-existent; therefore it must acknowledge its source of validation self-reflexively. The remix, when extended as a cultural practice, as a form of discourse, is a second mix of something pre-existent. The material that is mixed at least for a second time must be recognised, otherwise it could be misunderstood as something new and would become plagiarism. However, when this happens it would not mean that the material produced does not have principles of Remix at play, only
$ %
that the way the author has framed the content goes against an ethical code placed by culture on intellectual property. Regardless of the legal contentions, without a trace of its history, then, the remix cannot be Remix.14
The Regenerative Remix
The recognition of history is complicated in the Regenerative Remix. The Regenerative Remix takes place when Remix as discourse becomes embedded materially in culture in non-linear and ahistorical fashion. The Regenerative Remix is specific to new media and networked culture. Like the other remixes it makes evident the originating sources of material, but unlike them it does not necessarily use references or samplings to validate itself as a cultural form. Instead, the cultural recognition of the material source is subverted in the name of practicality—the validation of the Regenerative Remix lies in its functionality. A Regenerative Remix is most common in Software Mashups, although all social media from Google to YouTube rely on its principles. The Regenerative Remix consists of juxtaposing two or more elements that are constantly updated, meaning that they are designed to change according to data flow. I choose the term “regenerative” because it alludes to constant change and is a synonym of the term “culture.” “Regenerative”, while often linked to biological processes, is extended here to cultural flows that can move as discourse from medium to medium, although at the moment it is in software that it is best exposed. This is further evaluated in later sections. The Regenerative Remix is then defined in opposition to the allegorical impulse and, in this sense, is the element that, while it liberates the forms that are cited from their original context, opens itself up to ahistoricity and misinterpretation. The principle of the Regenerative Remix is to subvert: not to recognise, but to be of practical use. In this regard Google News is a basic Regenerative Remix. Google does not produce any content but merely compiles—mashes up—material from major newspapers around the world. People often do not think about which newspaper they may be reading, but rather rely on Google’s authority as a legitimate portal when accessing the information. In the following sections I note how online resources like Yahoo! Pipes appropriate pre-existing information to create mashups that are specific to a user’s need. For instance, people looking for an apartment may mash together a map with a list of rentals, both of which are constantly updated by their particular members. This example serves the argument that, while Remix is mostly recognised for its three basic forms, it is the Regenerative Remix—the fourth form—that offers a great challenge, as the tendency to appropriate material in the name of efficiency does not always mean that proper recognition of the originating source is performed. This contention, as will be noted in one of the following sections, is what keeps the term remix culture relevant; the term was largely made popular by Lawrence Lessig to support the production and distribution of derivative works while doing justice to intellectual property.15 As Lessig’s main
concern is with the law, his preoccupation exposes how history (a trace of citations, in his case) is vital in derivative licenses distributed and supported by the international non-profit Creative Commons, which Lessig co-founded.16 The principle of periodic change, of constant updates (e.g. how Google News is regularly updated), found in the Regenerative Remix makes it the most recent and important form enabling Remix as discourse to move across all media, and to eventually become an aesthetic that can be referenced as a tendency. Nevertheless, even in this fourth form, allegory is at play—only it is pushed to the periphery. Whether at the periphery or at the centre of culture, it follows that Remix is not only allegorical, but is also dependent on history to be effective. This is the reason why it is a discourse. This is crucial to keep in mind because History was questioned in the very period of postmodernism – ranging roughly from the mid/late sixties to the mid-eighties – in which the rise of remixing in music took place. The postmodern period resists a simple definition; however, to note its complexity, two contrasting views by Jean-François Lyotard and Fredric Jameson can be revisited. Jean-François Lyotard contextualised postmodernism as a time of fragmentation, of bits and pieces, of incompleteness and open-ended possibilities;17 a time when little narratives questioned Universal History. Meta-narratives attained a certain stigma due to the rise of disciplines such as Cultural and Post-colonial Studies,
where the story of the subaltern could be expressed. Simultaneously, during the postmodern period the general tendency towards specialisation in both research and commercial fields became streamlined. In contrast, Fredric Jameson considers the postmodern period a manifestation of the logic of Late Capitalism, following the definitions of Ernest Mandel. Jameson, unlike Lyotard, does not question Universal History, but instead argues that what is called the postmodern is really “a conception which allows for the presence and coexistence of a range of very different, yet subordinate, features.”18 For Jameson, postmodernism is in line with the dialectic of History, as defined by Marx, and thus is in its complex form a progression of Modernism and Capitalism. In both Lyotard’s and Jameson’s positions, as well as those in-between, an acknowledgement of some form of plurality, as well as a rupture in History, is evident. However, what is debated by theorists who reflect on modernism and postmodernism is how such plurality and rupture are linked to History, epistemologically. This is of great importance because neither modernism nor postmodernism has been left behind—they are mashed up as ideological paradigms. During the first decade of the twenty-first century, we function with a simultaneous awareness and conflictive acceptance of both cultural paradigms. Therefore, we must dwell on how they are linked to new media, particularly in relation to the terms repetition and representation as defined by political economist Jacques Attali, who wrote about the relationship of these two terms in the 1980’s, during the heyday of postmodern thought. Attali, who shares a materialist analysis with Jameson, argues that since the rise of mechanical reproduction, the main way that people understand their reality is not through representation but repetition; for him this means mechanical repetition vs. representation by a person who, for example, performs a music score repeatedly for an audience.19 These concepts are actually linked to Jameson’s own theory, which he calls “the waning of affect in postmodern culture”, that is, a sense of fragmentation, a suspension or collapse of history into intertextuality due to the high level of media production. I paraphrase this collapse as multiple ahistorical readings of all forms of cultural production. During the postmodern period, the concept of the music remix was developed. As previously noted, the remix in music was created and defined by DJs in the late 1960’s and early 70’s in New York City, Chicago and other parts of the United States. Their activity evolved into sampling bits of music in the sound studio during the 80’s, which means that the DJ producers were cutting/copying and pasting pre-recorded material to create their own music compositions. New Media depends on sampling (cut/copy and paste), an activity that shares the same principles of appropriation that DJ producers performed. To provide a specific example in new media, the Internet as a network relies directly on sampling; some examples include file sharing, downloading open source software, live streaming of video and audio, and sending and receiving e-mails. These online activities rely on copying and deleting (cutting) information from one point to another as
data packets. Cut/copy and paste then applies directly to New Media at large when we consider the efficiency with which independent print publications are produced and made accessible for download or online reading by small businesses or non-profits like the activist publication The Journal of Aesthetics and Protest,20 as well as the online and print new media magazine a minima,21 among many others. The international activity of these and other journals and magazines was acknowledged in 2007 by Documenta, an exhibition of contemporary art that takes place in Germany every five years. Documenta created a special forum and exhibition that showcased new digital forms of publication.22 Here we see how the act of sampling, a key element in actual remixing, is used for different interests beyond Remix’s foundation in music. In this case, principles of sampling (cut/copy & paste) are at play for practical reasons. The journals are mainly concerned with producing affordable publications, and make use of computer sampling technology towards this end. Sampling (cut/copy & paste) technology also makes possible the larger-than-life special effects of movies like Star Wars,23 not to mention the possibility of watching video on iPhones and iPods while text messaging: constantly being connected becomes the norm based on this one activity of cutting/copying and pasting. Thus, culture is redefined by the constant flow of information in fragments dependent on the single activity of sampling. The ability to manipulate fragments effectively, then, extends principles of Remix even in practical terms. But it must be noted that these examples are not remixes themselves. They are cited to note how principles of Remix have become ubiquitous in media, so that we may begin to understand the influence of Remix as discourse. Now that remix has been defined in its four basic forms, we are ready to look at mashups in music as well as other fields in mass culture, especially Web 2.0 applications. This will then expose the latent state for critical practice in Reflexive Mashups.
From Megamix to Mashup
The foundation of musical mashups can be found in a special kind of Reflexive Remix known as the megamix, which is composed of intricate music and sound samples. The megamix is an extension of the song medley. The difference between a medley and a megamix is that the medley is usually performed by one band, meaning that a set of popular songs will be played in a sequence with the aim of exciting the listeners or dancers. A popular example of a medley band is Stars on 45, a studio band put together in 1981 to create a medley of songs by the Archies, the Beatles, and Madness among others.24 A megamix is built upon the same principle as the medley, but instead of having a single band playing the compositions, the DJ producer relies strictly on sampling brief sections of songs (often just a few bars, enough for the song to be recognised) that are sequenced to create what is in essence an extended collage: an electronic
medley consisting of samples from pre-existing sources. Unlike the Extended or the Selective Remixes, the megamix does not allegorise one particular song but many. Its purpose is to present a musical composition riding on a uniting groove, to create a type of pastiche that allows the listener to recall a whole time period and not necessarily one single artist or composition. The megamix has its roots in the sampling practice of disco and hip hop. While disco in large part experimented with the Extended Remix, hip hop experimented with Selective and Reflexive Remixes. Grandmaster Flash may be credited with having experimented in 1981 with an early form of the megamix when he recorded “The Adventures of Grandmaster Flash on the Wheels of Steel”,25 which is essentially an extended mix performed on a set of turntables with the help of music studio production. The recording included songs by The Sugarhill Gang, The Furious Five, Queen, Blondie and Chic. Flash’s mix does not fit comfortably into any of the Remix definitions I have provided above; instead, it vacillates among them as a transitional song. “The Adventures of Grandmaster Flash on the Wheels of Steel” exercises principles of the Extended Remix when it loops an instrumental version of the 1970’s group Chic’s “Good Times”, over which sections from different songs (such as “Another One Bites the Dust” and “Rapture”) are layered for a few bars, only to slip back to Chic’s instrumental. Flash’s mix also has principles of the Reflexive Remix, because it pushes the overall composition to attain its own independence through the quick juxtaposition of the songs. But in the end, the slipperiness of the recording is mainly invested in exploring the creative possibilities of the DJ mixing records on a set of turntables as quickly as possible. The influence of the cutting and switching from one record to another found in this particular recording can be sensed in megamixes that were produced in the music studio from actual samples. An example from the history of electro-funk is the “Tommy Boy Megamix”, produced in 1984, which is a six-minute remix of the most popular songs on the hip hop label Tommy Boy; the megamix includes compositions by Afrika Bambaataa and the Soul Sonic Force, as well as Planet Patrol and Jonzun Crew among others.26 The megamix found its way into the nineties in the forms of bastard pop and bootleg culture, often linked to culture jamming. One of the best-known activist/artist groups during this period is the collective Negativland, who have produced some well-noted mashups to date.27 The music mashups of the beginning of the twenty-first century follow the principle of the eighties megamix; unlike the Selective or Extended Remixes, they do not remix one particular composition but at least two or more sources. Mashups are special types of Reflexive Remixes, which at times are regressive—meaning that they simply point back to the “greatness” of the original track by celebrating it as a remix; this tendency to take the listener back to the original song logically leads us to name such a remix a Regressive Mashup. The term regressive here makes an implicit reference to Adorno’s theory of regression in mass culture, which for him is the tendency in Media to provide consumers with easily understood
entertainment and commodities.28 Some popular music mashups are “A Stroke of Genie-us”, produced in 2001 by DJ Roy Kerr, who took Christina Aguilera’s lyrics from “Genie in a Bottle” and mashed them with instrumental sections of “Hard to Explain” by the Strokes.29 Another example is a mega-mashup by Mark Vidler of Madonna’s “Ray of Light” and the Sex Pistols’ “Problems.”30 But perhaps the most popular and historically important mashup to date is a full-length album by Danger Mouse entitled The Grey Album, which is a mashup of Jay-Z’s special a cappella version of his Black Album with carefully selected sections from the Beatles’ White Album.31 The Grey Album is important because it is completely sampled. It is one of the most important sampling experiments, along with M/A/R/R/S’s “Pump Up The Volume”,32 which can be considered an early mashup still relying on the concept of a uniting groove as first experimented with on the turntables by Grandmaster Flash. The Grey Album goes further because it exposed the tensions of copyright and sampling with emerging technologies: Danger Mouse deliberately used the Internet for distribution, and he was pushed by EMI (the copyright holders of the Beatles’ White Album) to take The Grey Album offline.33 The creative power of all these megamixes and mashups lies in the fact that, even when they extend, select from, or reflect upon many recordings, much like the Extended, Selective and Reflexive Remixes, their authority is allegorical—their effectiveness depends on the recognition of pre-existing recordings. In the end, as has been noted, mashups are a special kind of reflexive remix that aim to return the individual to comforting ground. As Adorno would argue, they support the state of regression that gives people false comfort. In postmodernism, as Jameson argues, this became the norm. In this fashion we move from modernism, a state of contemplation of utopia, to postmodernism, a state of mere consumption of utopia as just another product to shop around for, along with anything that can be commodified, from nature to the act of resistance. Supporting this waning of affect linked to repetition are the principles of Remix in mashups; however, this norm can potentially be disrupted with Web 2.0 applications, as we will see below.
From Music to Culture to Web 2.0
Once mashups become complementary to Remix as discourse, as a strategy for the deployment of repetition, their influence can be noticed in diverse cultural forms: tall buildings in major cities are often covered with advertisements selling products from bubble gum to cell phone services, or promoting the latest blockbuster film. The building turns into a giant billboard: advertising is mashed up with architecture. A more specific example: cigarette companies in Santiago de Chile have been pushed to include on their cigarette packs images and statements of people who have cancer due to smoking; two cultural codes that in the past were deliberately
separated are mashed up as a political compromise to try to keep people from smoking, while accommodating their desires. The Hulk and Spiderman have been mashed up to become the Spider-Hulk, as an action figure. In this case, the hybrid character has the shape of the Hulk with Spiderman’s costume on top (two already hybrid characters in their own right). It is neither but both—simultaneously.34 Since their popular introduction, mashups as a spectacular aesthetic are everywhere. They have moved beyond music to other areas of culture, at times merely as cultural references, and at others with actual formal implementation. Such a move is dependent on running signifiers that are in turn dependent on the repetition of media. And repetition has meddled with computer culture since the middle of the twentieth century. The strategic aesthetic of mashups was at play in New Media during the 1980’s with the conceptualisation of the personal computer. While the people who developed early personal computers may not have been influenced by mashups directly as a cultural reference, the similarities bear comparison, especially because the eighties is the time when computers and remix in music were both introduced to popular culture. The computer’s “desktop”, which was designed for Apple’s GUI (Graphical User Interface), is in essence a technological and conceptual mashup; in this case the computer’s information, which usually was accessed via the notorious command line, became available to the average user when it was mashed up with a visual interface called a “desktop” (for convenience of mass recognition), making an obvious reference to a person’s real-life desktop. This allowed the computer user to concentrate on using the machine for personal goals, while not worrying about how the different parts of the computer ran. This conceptual model has been extended to Web application mashups, in which the Regenerative Remix is fully at play, as will become evident shortly.
Web Application Mashups
Mashups as a conceptual model take on a different role in software. For example, the purpose of a typical Web 2.0 mashup is not to allegorise particular applications but rather, by selectively sampling in dynamic fashion, to subvert applications to perform something they could not otherwise do by themselves. Such mashups are developed with an interest in extending the functionality of software for specific purposes. As we can note, this is one of the essential elements of the Regenerative Remix. In software mashups, the actual code of the applications is left intact, which means that such mashups are usually combinations of pre-existing sources that are brought together with some type of “binding” technology. In a way, the pre-existing application is almost like Lego: ready for modular construction. The complexity of Web application mashups lies in how intricate the connections become. The roughest of mashups are called “scrapings”, because they sample material from the front pages of different online resources and websites; the more complex mashups actually include material taken directly from databases – that is, if the online entity decides to open an Application Programming Interface (API) to make its information available to Web developers.35 In either case Web application mashups, for the most part, leave the actual code intact, and rely on either dynamic or static sampling, meaning that they either take data from a source once (static) or check for updates periodically (dynamic). Web application mashups are considered forms that are not primarily defined by particular software; they are more like models conceived to fulfil a need, which is then met by binding different technologies. The most obvious example is Ajax, which has been defined by Duane Merrill as “a Web application model rather than a specific technology.”36 Ajax tentatively stands for “Asynchronous Javascript + XML”. Some well-known mashups include mapping mashups, which are created with readymade interfaces like Google Earth or Yahoo! Maps, offering the combination of city streets with information on specific businesses or other public information that might be of interest to the person who developed the mashup.37 A mashup model, as previously noted, that appears to be stable as long as the websites offering the information keep their APIs open is Pipes by Yahoo!38 This particular type of mashup goes deep into the database to access dynamic data. Pipes by Yahoo! actually points to the future of the Web, where the user will be able to customise, to a sophisticated level, the type of information that s/he will be accessing from day to day. Pipes, in theory, provides the user with the same possibilities made available by Google, when the user is able to customise his/her own personal portal news page. The difference in Pipes, however, is that the user can combine specific sources for particular reasons. In a way, the specificity demands that the user truly thinks about why certain sources should be linked. Pipes allows the user to choose a particular source, such as news, biddings, or map information, to then
link it to another source. Many of the pipes that I have browsed through leave me with a sense of critical thinking and practicality on the part of the persons who created them – not that Pipes developers are after social or cultural commentary, but rather that they develop most pipes to be useful in specific ways. When the user is initiated in Pipes, some of the examples provided include: “apartment near something”, “aggregated news alert”, and eBay “Price Watch”. All these pipes propose a very specific functionality: to find an apartment, to get the latest news, or to keep up with the best prices on particular biddings on eBay. For example, a user could be looking for an apartment in a particular area; the person could then connect a public directory, such as Craigslist, which has rental information, to Yahoo! Maps. The pipe would then be updated as the information is refreshed in the particular sources, meaning the map and the rental resource. What these examples show is that Web application mashups function differently from music mashups. Music mashups are developed for entertainment; they are supposed to be consumed for pleasure, while Web application mashups like Pipes by Yahoo! are validated only if they have a practical purpose. This means that the concept and cultural role of mashups change drastically when they move from the music realm to a more open media space such as the Web. We must now examine this crucial difference.
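A rough sketch of the “apartment near something” idea in plain Python may clarify the model. The feed URL, its tag layout and the coordinates below are hypothetical, invented for illustration; a real pipe binds such sources graphically, but the underlying operation is roughly this join of two updating sources:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical rental feed and a point the apartment should be near.
LISTINGS_FEED = "https://example.org/rentals/rss"
POINT_OF_INTEREST = (60.1699, 24.9384)  # latitude, longitude

def fetch_listings(url):
    """Fetch an RSS feed and yield (title, lat, lon) per item.
    The <lat>/<lon> tags are assumed; real feeds vary."""
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)
    for item in tree.iter("item"):
        yield (item.findtext("title", default=""),
               float(item.findtext("lat", default="0")),
               float(item.findtext("lon", default="0")))

def near(lat, lon, point, max_deg=0.05):
    """Crude proximity test in degrees; a map API would do this properly."""
    return abs(lat - point[0]) < max_deg and abs(lon - point[1]) < max_deg

# The 'pipe': every run re-fetches the feed, so the result regenerates
# as the underlying sources are updated -- dynamic, not static, sampling.
for title, lat, lon in fetch_listings(LISTINGS_FEED):
    if near(lat, lon, POINT_OF_INTEREST):
        print(title)
```

The design point is that neither source is copied permanently: the mashup's output is valid only for as long as the sources keep updating, which is what distinguishes it from the sampled music mashup.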
The Ideology Behind the Reflexive Mashup
Contrary to popular understanding, Web application mashups are not remixes in the traditional sense, following the principles of music. Based on their functional description, they are Regenerative Remixes; they subvert pre-existing material for the sake of functionality, pushing allegory (or the historical importance of the originating source) to the periphery. To reflect further on this, let us consider again the music mashups discussed so far. Their power lies in their spectacular aura, meaning that they are not validated by a particular function they are supposed to deliver, but rather by the desires and wants that are rallied in the consumer, who loves to be reminded of certain songs for his/her leisure enjoyment. Music has this power because it is marketed as a form of mass escapism. Keeping in mind the previously introduced theories of Jacques Attali and Theodor Adorno, the average person consumes music in order to wind down and find delight in the few spare moments of the everyday. Those who can, go to concerts, but most people are likely to enjoy music as recordings on CDs and MP3s. When people hear their favourite songs mashed up, it is very likely that they will get excited and find pleasure in recognising the compositions; their elation will help them cope with whatever stress they may have had throughout the day. Musical mashups are Reflexive Remixes that never leave the spectacular realm. They support and promote the realm of entertainment and therefore find their power as forms of regression as defined by Adorno, and repetition according to
Attali, while extending postmodernism’s intertextuality after Jameson. But Web application mashups can function differently, as we have already seen with Yahoo! Pipes. The reason for this is that Web application mashups are developed with a practical purpose; this tendency towards optimised functionality has pushed Web application mashups to constantly access information from the originating sources: to constantly update data. They are (at least initially) proposed to serve as convenient and efficient forms to stay informed rather than to be entertained. The notion of mashups found in music culture is appropriated in the name of efficiency once such a concept enters the culture of new media; this also changes the concept of a mashup drastically, making it reflexive rather than regressive. The term ‘reflexive’ here functions differently from how it functions in the Reflexive Remix. As previously defined, the Reflexive Remix demands that the viewer or user question everything that is presented, but this questioning stays in the aesthetic realm. The notion of reflexivity in a software mashup implies that the user must be aware of why such a mashup is being accessed. This reflexivity in Web applications moves beyond basic sampling to find its efficiency in constant updating. A Reflexive Mashup does not therefore necessarily demand critical reflection, but rather practical awareness. The validation of the Reflexive Mashup found in Web applications does not acquire its cultural authority through popular recognition of pre-existing sources; instead it is validated based on how well those sources are sampled in order to develop more efficient applications for online activity. This turns the Reflexive Mashup into a different object: one that does not celebrate the originating sources, but, if anything, subverts them. Usability rules here, making allegory as encountered in other remixes incidental; allegory is pushed to the periphery. This is Remix as discourse—this is the basic Regenerative Remix, expressed materially in software. However, this does not mean that reflexive mashups cannot be used for spectacular entertainment. YouTube and MySpace (which function according to the principles of the Regenerative Remix) are some of the most obvious manifestations influenced by mashup models in Web 2.0, where people are willing to tell their most intimate secrets for the sake of being noticed, and to (maybe even) become “media stars”. One has to wonder how the concept of privacy may be redefined in these spaces. So, with this in mind, Pipes by Yahoo! may be used for a spectacular cause in the end: any music fan can potentially mash two or more feeds to keep up with the news of his/her favourite movie star. In this example the software mashup becomes appropriated for the sake of pure entertainment. It follows that the reflexive mashup’s foundation in functionality does not make it free from the allegorical tendency that other forms of Remix are dependent upon; however, this duality in purpose may be a hint as to the real possibilities that lie latent in emerging technologies, which can be tapped if one is critically aware of the creative potential of Web 2.0. Software mashups expose that it is a deliberate decision by the user to define the combinations as reflexive or regressive according
$ %
to personal interests, regardless of the mashup’s initial mode.
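To make the feed-mashing scenario concrete, the following is a minimal sketch of the kind of combination that a tool such as Pipes by Yahoo! automates: two feeds merged into a single, newest-first stream. The sketch is illustrative rather than taken from this chapter; the feed URLs are placeholders, and it assumes the third-party Python library feedparser.

```python
# A minimal "reflexive mashup" sketch: merge two RSS feeds into one
# newest-first stream, roughly what a Pipes by Yahoo! pipe would do.
# Requires the third-party library feedparser (pip install feedparser).
import feedparser

# Placeholder URLs; any two feeds covering the same movie star would do.
FEEDS = [
    "http://example.com/celebrity-news.rss",
    "http://example.com/film-gossip.rss",
]

def mash(feed_urls):
    """Fetch every feed and return all entries, newest first."""
    entries = []
    for url in feed_urls:
        entries.extend(feedparser.parse(url).entries)
    # published_parsed is a time.struct_time, which sorts chronologically;
    # entries without a date fall to the end of the stream.
    entries.sort(key=lambda e: e.get("published_parsed") or (), reverse=True)
    return entries

for entry in mash(FEEDS)[:10]:
    print(entry.get("published", "undated"), "-", entry.get("title", "untitled"))
```

Each run of the script reflects whatever the sources say at that moment; the combination itself carries no allegorical weight, which is precisely the point made above.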
Sampling and the Reflexive Mashup

Mashups, whether they are regressive or reflexive, are dependent on sampling. But sampling, as can be noted in the various examples discussed, begins to be supplanted by constant updating. Some mashups do not "cite" but rather materially copy from a source. This differs from the constant updates found in Web 2.0 applications like Pipes by Yahoo!, because such a mashup dynamically accesses information. In music, architecture, film and video, as well as many other areas of the mainstream, the source is sampled to become part of another source in form, while in the more dynamic applications developed in Web 2.0 the most effective mashups are updated constantly. The Regressive Mashup in music is regressive because it samples to present recorded information which immediately becomes meta-information, meaning that the individual can then understand it as static, knowing it can be accessed in the same form over and over again; this recorded state is what makes theory and philosophical thinking possible. Because of its stability, the principles of the regressive mashup, as previously mentioned, could inform the aesthetic of a building covered with an image publicising a film such as Transformers, a cigarette box showing the image of a person with lung cancer, as well as two songs by disparate musical acts like Christina Aguilera and the Strokes. The regressive mashup as an aesthetic depends on recorded signs that are not mixed but transparently juxtaposed: they are recorded to be repeated, accessed, or looked at perfectly over and over again. The Reflexive Mashup in Web 2.0, by contrast, no longer relies on sampling but on constant updating, which makes incidental the allegorical reference that validates the Regressive Mashup and pushes forward with a constant state of action toward reflection on what is being produced each time the mashup is accessed. The Reflexive Mashup, then, is the most basic form of a Regenerative Remix in terms of software. But this form, after being internalised by people as part of their daily activities, comes to affect other areas of culture.
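The contrast between sampling and constant updating can also be put in code. The sketch below, again illustrative and not drawn from this chapter, polls a placeholder feed and surfaces only entries it has not seen before: nothing is copied once and frozen; the material regenerates on every pass. It assumes the same feedparser library as the previous example.

```python
# A minimal sketch of constant updating: instead of sampling a source
# once, the mashup polls it and surfaces only what is new.
# Requires the third-party library feedparser; the URL is a placeholder.
import time
import feedparser

FEED_URL = "http://example.com/news.rss"
seen_ids = set()

def poll_once():
    """Fetch the feed and report entries not seen on earlier passes."""
    for entry in feedparser.parse(FEED_URL).entries:
        uid = entry.get("id") or entry.get("link")
        if uid and uid not in seen_ids:
            seen_ids.add(uid)
            print("new:", entry.get("title", "untitled"))

while True:
    poll_once()
    time.sleep(300)  # constant updating: check again every five minutes
```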
Conclusion: Regenerating Bonus Beats

What exactly is the Regenerative Remix? At the beginning of the twenty-first century the Regenerative Remix is a form of material production best understood in software. The Regenerative Remix is exposed in the activity of constant updates made with software that also creates a well-organised archive; the Reflexive Mashup has been the case study on this occasion. Yet even when its archive is accessible, this does not mean that people will necessarily ever use it directly; most people will stick to the most immediate material, placed on the front pages of any online resource, because the Regenerative Remix encourages the now, the present, for the sake of practicality and functionality. The archive, then, legitimates constant updates allegorically. The database becomes a delivery device of authority in potentia: when needed, it can be called upon to verify the reliability of accessed material; but until that time, all that is needed is to know that such an archive exists. But there is another side to the coin: the database, which is played down on the front pages, is actually extremely crucial for search engines. Here the archive becomes the field of knowledge to be accessed; it is the archaeological ground to be explored by sophisticated researchers and lay-people alike. It is a truly egalitarian space, which provides answers to all possible queries. Because of this potential, RSS feeds have attained great importance, and, due to demand, people are given tools with which to choose feeds to read. The interfaces of these RSS readers become personalised "front pages", organised to present the latest information first. There are quite a few RSS readers available; some, like Vienna,40 can be downloaded and used as applications on a personal computer; others, like Google Reader, are Web applications that run online and can be accessed from any computer.41

The Regenerative Remix, then, becomes the contemporary frame of cultural reference by combining the state of social communication with software that is designed to keep up with changes materially and ideologically. Software mashups are specifically designed to make this possible. As an extension of this aesthetic, Google News is constantly updated, as is Wikipedia; Twitter feeds are relevant only because of pervasive updates; Facebook, MySpace, YouTube and all social media are dependent on constant updating as well, and are thus defined by the principles of the Regenerative Remix. The type of production at play in networked culture was not possible prior to the rise of software, as it is the speed of information exchange that makes such production feasible. In the commercial sector this becomes a challenge for major media corporations, who have to constantly remind people about what to consume, because popular culture is deliberately designed to be forgotten, or to become uncool almost as soon as it begins to be consumed; this means that eventually it can be reintroduced in modified form as "new". This is particularly true of music hits, often repackaged as remixed versions, mashups, etc., with the purpose of stimulating fresh demand among younger generations. This is why commercial production relies on remix principles to reintroduce its products into culture with "retro" flair. Fashion is the master of this strategy, of course: nobody needs to recognise the actual historical reference of a garment, only that it recalls something from a vague period, which makes it hip if it is designed with enough historical distance. Admittedly, people have more power than ever before over what they decide to consume and where they contribute their time and effort, which is why social media is supported by major corporations to create communities that can then be analysed by marketing experts in order to develop more effective ways to break through the media noise itself.

At the beginning of the twenty-first century, ultimately, the question becomes what to search for, if one is to be presented with ahistoricity as the norm. Why know history when one can learn about particular subjects whenever one desires? But how can one know what to look for if one is encouraged to navigate through fragments according to random desires? These questions are equally important to cultural theorists and to marketing directors, which means that while constant updates enable people to stay better informed, they also become a challenge for critical reflection. The concept of critical distance, which has been used by researchers and intellectuals to step back and analyse the world, is redefined by the Regenerative Remix. This shift is beyond anyone's control, because the flow of information demands that individuals embed themselves within the actual space of critique, and use constant updating as a critical tool. This is quite a challenge because, as this text demonstrates, the Regenerative Remix is primarily designed for practicality, for the sake of immediate services; and the archive is designed to come to the front at the very moment a query is made. While these features could be seen as neutral, one can quickly notice their friendliness to the market. In fact, the Regenerative Remix primarily exists because the market finds it useful. The Regenerative Remix privileges the ever-present; at the same time, it knows it needs history for legitimation, and the archive can be called upon as proof of its reliability. But, as previously noted, the archive also functions in terms of market value: a resource's importance grows as its database grows, and when reconfigured properly it can provide revenue when people use a search engine to buy items online. Amazon and Wal-Mart, among many other major corporations, make the most of this feature. The database, then, is ahistorical, ready to be manipulated for the sake of immediate needs that can place the accessed material in quite different contexts. This was already true when Walter Benjamin noted the popular replacement of cult value by exhibit value in the 1920s and 30s, in his well-known essay "The Work of Art in the Age of Mechanical Reproduction";42 in the terms of Attali, who published his theory in the 1980s, this is equivalent to repetition overshadowing representation. The difference during the first decade of the twenty-first century is that efficiency is coming close to a collective "living" form: a Wikipedia page is likely to be adjusted within minutes after an apparent inconsistency is found; like a living person, online resources tend to contradict themselves. Yet in the case of Wikipedia, constant updating is the only reason why it can stand against Encyclopaedia Britannica as a valid alternative. This means that people's understanding of History in terms of the past, present and future is mashed up in the Regenerative Remix as a dataset that is always changing and is ready to be accessed according to the needs of the user in the ever-present. At the beginning of the twenty-first century, it is evident that the Regenerative Remix is defining the next economic shift. Remix culture is experiencing a moment in which greater freedom of expression is mashed up against increasingly efficient forms of analysis and control.
Notes

1. Mike Snider, "Maya Muscles its Way into Hollywood film awards", USA Today, 25 March, 2003, (23 June, 2007).
2. Sasha Frere-Jones, "1 + 1 + 1 = 1: The New Math of Mashups", The New Yorker, 10 January, 2005, http://www.newyorker.com/archive/2005/01/10/050110crmu_music.
3. This is my own definition extending Lawrence Lessig's definition of Remix Culture based on the activity of "Rip, Mix and Burn." Lessig is concerned with copyright issues; my definition of Remix is concerned with aesthetics and its role in political economy. See Lawrence Lessig, The Future of Ideas (New York: Vintage, 2001), 12-15.
4. For some good accounts of DJ Culture see Bill Brewster and Frank Broughton, Last Night a DJ Saved my Life (New York: Grove Press, 2000); Ulf Poschardt, DJ Culture (London: Quartet Books, 1998), 193-194; Javier Blánquez and Omar Morera, eds., Loops: Una historia de la música electrónica (Barcelona: Reservoir Books, 2002).
5. I use the term "spectacular" after Guy Debord's theory of the Spectacle and Walter Benjamin's theory of Aura. We can note that the object develops its cultural recognition not on cult value but on exhibit value (following Benjamin), because it depends on the spectacle (following Debord) for its mass cultural contribution. See Guy Debord, The Society of the Spectacle (New York: Zone Books, 1995), 110-117; Walter Benjamin, "The Work of Art in the Age of Mechanical Reproduction", Illuminations (New York: Schocken, 1968), 217-251.
6. Brewster, 2000, 178-79.
7. Paid in Full was actually a B-side release meant to complement "Move the Crowd". Eric B. & Rakim, "Paid in Full", re-mix engineer: Derek B., produced by Eric B. & Rakim, Island Records, 1987.
8. Poschardt, 1998, 297.
9. Dick Hebdige, Cut 'n' Mix: Culture, Identity and Caribbean Music (New York: Methuen, 1987), 12-16.
10. Craig Owens, "The Allegorical Impulse: Towards a Theory of Postmodernism", in Art After Modernism, eds. Brian Wallis and Marcia Tucker (New York: Godine, 1998), 223.
11. Ibid.
12. Underworld, "Born Slippy", single EP, TVT, August 1996.
13. Kraftwerk, Tour De France Soundtracks, Astralwerks, August 2003.
14. DJ producers who sampled during the eighties found themselves having to acknowledge History by complying with the law; for the landmark lawsuit against Biz Markie, see Brewster, 246.
15. Lessig has written a number of books on this subject. The most relevant to the subject of creativity and intellectual property: Lawrence Lessig, Free Culture (New York: Penguin, 2004).
16. Creative Commons, http://creativecommons.org.
17. Jean-François Lyotard, The Postmodern Condition: A Report on Knowledge (Minneapolis: University of Minnesota Press, 1984), 3-67.
18. Fredric Jameson, Postmodernism, or, The Cultural Logic of Late Capitalism (Durham: Duke University Press, 1991), 4.
19. Jacques Attali, Noise: The Political Economy of Music (Minneapolis: University of Minnesota Press, 1985), 68-81.
20. Journal of Aesthetics and Protest, http://www.journalofaestheticsandprotest.org.
21. A minima:: Magazine, http://www.aminima.net/.
22. Documenta XII, http://www.documenta.de/100_tage.html?&L=1.
23. Snider, 2003.
24. Stars on 45, The Very Best of Stars on 45, Red Bullet, re-released 2002. Also see the band's website: Stars on 45, http://www.starson45.com/aboutus1.html.
25. Grandmaster Flash, "The Adventures of Grandmaster Flash on the Wheels of Steel", 12-inch single, Sugarhill Records, 1981.
26. "Tommy Boy Megamix", 12-inch single, Tommy Boy, 1985.
27. Negativland, http://www.negativland.com.
28. Theodor Adorno, The Culture Industry (London, New York: Routledge, 1991), 50-52.
29. A copy of this mashup can be found at The Hype Machine: DJ Roy Kerr, "A Stroke of Genius", http://hypem.com/track/54069.
30. Mark Vidler, "Ray of Gob"; for more information on the mashup, see Go Home Productions, 2006, http://www.gohomeproductions.co.uk/history.html.
31. Frere-Jones, 2005.
32. For a good account of the importance of "Pump Up the Volume", see Poschardt, 1998, DJ Culture.
33. Corey Moss, "Grey Album Producer Danger Mouse Explains How He Did It", MTV, 11 May, 2004, http://www.mtv.com/news/articles/1485693/20040311/danger_mouse.jhtml.
34. These are citations based on my own travels to different cities; buildings with images can be found in any major city. For information about cigarettes see Liz Borkowski, "The Face of Chile's Anti-Tobacco Campaign", The Pump Handle, posted 4 January, 2007, http://thepumphandle.wordpress.com/2007/01/04/the-face-of-chiles-anti-tobacco-campaign/. For an image of the Spider-Hulk see "The Incredible Hulk Engine of Destruction", http://www.incrediblehulk.com/spiderhulk.html.
35. Duane Merrill, "Mashups: The new breed of Web App. An Introduction to Mashups", IBM developerWorks, 16 October, 2006, http://www-128.ibm.com/developerworks/web/library/x-mashups.html.
36. Ibid.
37. For various examples of map mashups see the blog Google Maps Mania, http://googlemapsmania.blogspot.com/.
38. Yahoo! Pipes, http://pipes.yahoo.com/pipes.
39. This is similar to Craig Owens's observation that the Old Testament validates the New Testament. Without the Old Testament, the New Testament would have no authority. It is allegory that makes this possible. See Owens, 1998, 204.
40. Vienna, a freeware RSS/Atom newsreader for Mac OS X, http://www.vienna-rss.org/vienna2.php.
41. Barb Dibwad, "HOW TO: Choose a News Reader for Keeping Tabs on Your Industry", Mashable, 3 December, 2009, http://mashable.com/2009/12/03/news-reader/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Mashable+(Mashable).
Change of Media, Change of Scholarship, Change of University: Transition from the Graphosphere to a Digital Mediosphere

Christina Schwalbe
Introduction

Currently a far-reaching process of cultural change can be observed that is closely connected to the rapid technological developments in the field of digitally networked media. The Internet is establishing itself as a ubiquitous medium of transmission and communication. The following considerations address the question of how these cultural changes influence universities as specific, scholarly educational institutions. In particular, the concept of scholarship, the social function, and the social structures of universities shall be discussed. For my argumentation I take a cultural-theoretical perspective, referring to Régis Debray's mediological approach. Mediology provides a methodical basis with which to research the correlations of technology, culture and society. Not the media themselves but the processes of mediation and transmission are the focus of attention.
Mediospheres

Following Régis Debray's mediological considerations, cultural ages can be distinguished according to the technical media of transmission. Debray identifies four of these so-called "mediospheres": the logosphere, the graphosphere, the videosphere and the currently evolving digital mediosphere, which Louise Merzeau1 refers to as the hypersphere, while Frank Hartmann coined the term numerosphere.2 These eras can be distinguished by the dominance of a certain technology of symbolic encoding and of certain institutions of symbolic transmission.3

- The logosphere is the technical-cultural milieu that evolved from the invention of writing. However, the spoken word remains the most important medium of communication and transmission. Written texts function as an external memory, but even more as a servant to the spoken word in the preparation of speeches and lectures.
- The invention of letterpress printing marks the beginning of the graphosphere. Debray refers to the graphosphere as the era in which books (in the plural) more and more replace the book (in the singular). The foundation of the modern university and of the general school system occurred within this era.
- After a short era of the videosphere, shaped by images and sounds, a digital mediosphere is emerging. A comprehensive analysis of this potentially new mediosphere, i.e. an attentive and detailed observation of the currently evolving cultural changes, is necessary to be aware of the challenges for educational institutions.

To understand the scope of the current transition between mediospheres, this process shall be compared to and distinguished from previous medial and cultural upheavals. Starting with the formation of the European university in the logosphere and continuing until today, the practices of knowledge and the correlated concept of scholarship, as well as the function of universities in society, shall be analysed and discussed.
Function of the (European) University

Among educational institutions the modern university takes a special position. As a result of the unity of research and education, its social organisation as well as its function in society differs from that of other educational institutions like schools or universities of applied sciences. Universities are institutions that deal with the fostering and development of scholarship and thus also have the function of educating for scholarship. Since the European Middle Ages the social structure of universities has been characterised by the alliance of academic teachers and learners in a scholarly system. In each media-cultural-historical era the processes of teaching and learning at universities have aimed at acquiring the scholarly acknowledged handling of knowledge. Students should understand and be able to apply the scholarly methods that were valid and accepted at any one time. However, the aim of a scholarly education is not necessarily an engagement in the scholarly system; it should rather enable problem-solving using scholarly methodology and research results. For this it is necessary to comprehend the production of scholarly research results. Knowledge can be accepted as significant only by verifying its proper scholarly production, and not merely by the authority of its origin.4 Due to the basic role of universities in educating for scholarship, not only is knowledge perpetuated as a cultural asset, but so are the methods and processes that contribute to the generation of this knowledge. This perspective on universities can be taken as a starting point to observe and explain current and previous changes of universities. In reference to Régis Debray's mediology, and also with respect to media theorists like Marshall McLuhan or sociologists like Dirk Baecker and Manuel Castells, the assumption can be formulated that the cultural adoption of a new (technical) medium not only radically affects processes of communication, i.e. the spatial dissemination of information, but also fundamentally changes the practices of knowledge and the understanding of scholarship and scholarly methodologies. The definition of what counts as scholarly knowledge, i.e. the knowledge that is dealt with in universities, and therefore the orientation towards what is regarded as valid content, is changing, as are the authority that legitimises knowledge, the forms of archiving, organising and accessing knowledge, and the forms of scholarly education and certification.
Historical Perspective on Practices of Knowledge
To establish a deeper understanding of the currently evolving practices of knowledge and of how they challenge the modern university, the development of the European university and the associated changing understanding of scholarship shall be clarified with some examples, starting from the university's formation in the Middle Ages.
Scholarship in the Logosphere

The formation of the European university in the Middle Ages occurred within the era of a manuscript culture, which Debray calls the logosphere. Scholarship was primarily concerned with collecting, systemising and imparting perpetuated knowledge. Scholarly knowledge was knowledge of divine revelation, which should be "captured as complete as possible, cleansed of external additives, ordered by its own standards, reliably imparted, and demonstrated and proven with every secularistic phenomenon."5 The central methodical and didactical approach was the scholastic method. Scholarship meant interpreting observations such that they were compatible with the given principles and their consequences, in order to form a self-consistent theory. "Every scholarly line of argument then had to tend to the aedequatio intellectus et rei; it did not depend on exact empiricism and empiric verification of cause-and-proof, but on the subsumption of the phenomena under a divine order and their estimation according to the Christian norm by using analogy and comparison."6 Knowledge was transmitted in oral forms of communication, i.e. mostly in lectures. The professors chose texts to read out, and where appropriate these texts were commented on and explicated during the lectures. The students acquired their knowledge by listening and by mnemonic storage. The practices of knowledge were geared towards a strict adherence to the text and a highly formalised handling of text. This is the main reason why mnemonic storage operated so effectively; even in manuscript culture, memory was still the most important medium for acquiring knowledge. Exams and disputations primarily focused on the correct reproduction of the acquired knowledge and on the ability to make correct use of the scholastic method. Education for scholarship in the university of the logosphere therefore meant, on the one hand, learning the given canon of knowledge in order to permanently perpetuate it in the cultural memory, and on the other hand, developing the ability to approach and solve a problem by using the scholastic method and thus to integrate it into the divine order. The limited number of manuscripts that were additionally used as external memories, and that formed the basis of the lectures, were provided by the professors and later on by the evolving monastic and university libraries. In this way the institutions could at the same time grant and control access to the knowledge they imparted. The practices of knowledge and the teaching methods supported and constituted hierarchical and authoritarian structures. The legitimation of knowledge, or of truth, in the logosphere was bound to institutions and to the persons associated with them, e.g. professors or magisters.
Impact of Letterpress Printing

With the invention of letterpress printing and the development of new distribution networks, large quantities of copies of one and the same content could now be spread. This is what Debray marks as the formative moment of the graphosphere. While in the logosphere access to knowledge via manuscripts and oral transmission processes was organised and controlled by hierarchical, institutional communication networks, these institutional constraints and mechanisms of exclusion take a back seat in the graphosphere. Books and printed matter become goods on the free market.7 The increasing availability of printed information facilitated and also called for an active and deep involvement with the written word. The subjective idea of the world was no longer primarily a result of orally perpetuated interpretations of the scriptures; the ability to read and to write allowed for an individual, self-contained consideration of the transmitted messages. The individual as knowledge-creating subject came to the fore – in contrast to the more passive recipient of the logosphere. The large quantities of printed information that were now published were no longer ordered and examined by an institutional authority but had to be individually criticised and compared. "Criticism emerges as the central virtue of the society."8
The Book as a Communication Medium in the Modern Concept of Scholarship

A concept of scholarship evolves in which the subject as a knowledge creator, as well as the critical evaluation of text, takes centre stage – and not the paradigm of a confessionally controlled, institutional authority. The connection between the emergence of a modern concept of scholarship and the development of a typographic culture shall be presented briefly with reference to the German media theorist Michael Giesecke.9 He states that the typographic culture is based on a concept of mono-medial communication. Communication is defined as social information processing. Similar to Régis Debray, Giesecke acts on the assumption that within a typographic culture – or within the graphosphere, as Debray calls it10 – the printed book is attributed the central communicative function. This is contrary to the communication concept of the logosphere: manuscripts then mainly served as supports for individual memory and did not have any communicative function by themselves but were integrated into oral communication systems. According to Giesecke, the linear structure of the production and also of the transmission of knowledge in typographic form allows the reader of a book to follow the author's argumentation and thus to take his or her perspective. The reader can accept the point of view that is formulated in the text, oppose it, or continue the argumentation by referencing it. Giesecke calls this form of communication social information processing without direct interaction.11 The modern concept of scholarship, and thus the traditional structure and self-concept of today's (European) universities, is based on these structures of social information processing without direct interaction: knowledge is criticised, accumulated and continuously developed.
The Modern University: A Structural Answer to a New Concept of Scholarship

However, a corresponding new concept of scholarship started to develop outside universities in connection with the Enlightenment, at first in the humanistic academies. Communication without interaction, using printed texts, gave rise to new formats of scholarly publications. The transcripts of the oral disputations, i.e. the dissertations, were increasingly published. Only after some time was the dissertation established as an independent written thesis. Scholarly journals evolved as topical communication media; the growing book market required classifications, lists of publications, book announcements and reviews. Journals became the medium of dispute and therefore the medium of scholarly communication. The new formats of publications called for new systems of documentation and retrieval. Internal ordering structures such as tables of contents, subject indexes and footnotes with references developed, as did external ordering structures such as metadata, keywords and encyclopaedias.12 Despite a changing concept of scholarship in society, universities initially adhered to their traditional concept of scholarship and also to the traditional communication structures. Of course, printed texts were increasingly used in universities as well; libraries expanded their collections, and lectures became less dictation and more commentary. But for all that, the basic principle of the concept of scholarship remained rooted in hierarchical structures that were directed by confessional thinking. Role perception and the processes of legitimising knowledge were still bound to the old paradigm. However, scholarly education at universities provided the basis for the changes of scholarship outside universities. These newly developing forms of scholarly research put pressure upon universities as institutions to legitimate their function and their concept of scholarly knowledge. The foundation of the university as an institution that combines research and teaching, based on the ideas of Wilhelm von Humboldt, can be perceived as the structural answer to the expansion of a new concept of scholarship in society. From this moment the main function of the university was no longer the transmission, distribution and storage of knowledge; rather, the production and development of knowledge became the central issue. The educational aim at universities transformed from recapitulating the knowledge of scriptures, as in the university of the logosphere, to developing a critical way of scholarly comprehension within the book as a typographic medium. Scholarly legitimation is achieved by taking part in the scholarly processes of the typographic culture, i.e. by joining in the communication via scholarly publications or, in reference to Giesecke, by being included in the social information processing within the scholarly system. The practice of peer review evolves as a filter for scholarship and as an instrument for knowledge validation in text-based scholarly communication.
The Digital Mediosphere

As could already be observed with the introduction of letterpress printing, new cultural processes and practices of knowledge are currently taking shape on the Net. Not only does the book no longer function as the central medium of knowledge communication, but with the Internet a participatory, interactive, global network is establishing itself as a ubiquitous medium of communication and distribution, as well as a dynamically changing archive and storage medium. The computer starts, as Dirk Baecker states, "to take part in the communication of the society with its own memory."13 Within an evolving digital mediosphere and with new practices of knowledge, the concept of scholarship is also challenged – and thus the core competence and the social function of the university.
Collective Intelligence?

Currently it can be observed that digital infrastructures expand the spatial and territorial reach of communication, while its chronological reach is diminished. Information can quickly be spread over the global network, is massively copied and circulated widely, thus leading to an endless information overload. This constantly growing mass of information can hardly be managed with the methods of classification and ordering of knowledge that stem from the typographic culture. In combination with the limited lifespan of electronic storage devices, knowledge is fostered within this dynamic, digital medium as an ephemeral asset. On the Web a form of communication is promoted that is geared to the moment, the present. It is all about rapid information, about direct interaction, about instantaneous collaboration. Knowledge is produced in concrete circumstances of practice,14 is permanently re- and decontextualised, and is forwarded and circulated in new forms and combinations. Baecker describes, in reference to Luhmann, the challenge of communication in a networked society as follows: "You enter data, you retrieve data, as Luhmann observes, without even being able to connect this data as one was used to with speech, scripture and also with the letterpress. Neither voice and its defining context nor text and the inscribed purpose nor the book and its inherent criticism help to handle the data that appear on the monitors of the terminals linked to computer networks."15 Different strategies are being explored to master this dynamically changing information overload and to add helpful metadata that support the process of making connections: on the one hand technical solutions, like the attempt to develop and establish a Semantic Web;16 on the other hand social processes, like social tagging or collectively organised reference and recommendation systems. The possibility of active participation on the Web encourages collaborative forms of knowledge organisation. Various applications that act as distributors and allocators – for example, social bookmarking sites like delicious, or Twitter as a system of recommendation and a kind of dynamic, real-time search engine – provide access to collectively organised knowledge and thus to a kind of collective intelligence. In contrast to the classification and ordering of knowledge in a typographic culture, which came after the processes of production and publication, the newly evolving ordering structures need to be part of the process of knowledge production itself, due to the rapid and dynamic character of the medium.
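How such ordering structures can emerge from the act of publication itself, rather than being imposed afterwards, can be illustrated with a toy sketch of social tagging (a folksonomy). The sketch is my illustration, not the chapter's; all names and URLs are placeholders.

```python
# A toy folksonomy: the collective index is built in the very act of
# bookmarking, not by a cataloguing authority after publication.
from collections import defaultdict

tag_index = defaultdict(set)  # tag -> set of resource URLs

def bookmark(url, tags):
    """Store a resource under every tag its poster assigns to it."""
    for tag in tags:
        tag_index[tag.lower()].add(url)

# Two users tag the same resource in their own vocabularies;
# the shared index simply accumulates both perspectives.
bookmark("http://example.org/debray", ["Mediology", "transmission"])
bookmark("http://example.org/debray", ["media-theory"])

print(sorted(tag_index))        # an emergent, uncontrolled vocabulary
print(tag_index["mediology"])   # retrieval via the crowd's metadata
```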
Illimitability

With the invention of the letterpress and the transformation of knowledge into an external good in the form of printed books that could be distributed via an economic market, access to knowledge was no longer controlled only by institutions. With the development of the Internet, access to knowledge is widening again. To an increasing degree, information from nearly every scholarly field is available nearly everywhere and at any time. This ubiquitous availability is constantly growing with the spread of mobile, networked technical devices and interfaces. However, the scholarly knowledge that is published and quickly and widely distributed via digital media – and thus is potentially globally present – is increasingly published without being peer-reviewed. The legitimation of this knowledge no longer depends exclusively on a scholarly authority, as in the graphosphere; the knowledge has to be functional. Knowledge is legitimated by circumstances of practice17 – in the first instance it needs to be viable. The privileged status of scholarly knowledge, which claims to be true knowledge with eternal validity, seems to decline under these circumstances. "The criteria of evaluation of quality and relevance of the knowledge are not defined anymore only by scholarship itself but also by the users of the knowledge on account of the respective criteria of relevance and the expectation of its benefit."18 Know-how is favoured over eternal knowledge;19 scholarly knowledge has to stand against other forms of knowledge and compete against them in practical contexts. Scholarly research is more and more influenced by public requirements and interests as well as by political and economic intentions. Instead of the linearly organised accumulation of knowledge on the basis of interaction-free social information processing, as described above in reference to Giesecke, knowledge production is thereby based on a kind of collective intelligence, interacting in networked structures. The individual as knowledge creator is not the basic reference in this process. On the Net – and therefore in public communication – the "wisdom of crowds"20 seems to establish itself as a formative way of organising and producing knowledge. It thus challenges the expert, the educated individual who was considered a legitimate reference for the validity of knowledge in the graphosphere. The limits of the scholarly system seem to blur. Gibbons et al. postulate the institutionalising of a new mode of knowledge production, "Mode 2".21 An essential issue in this context is that of the dedifferentiation of the scholarly system. Can the scholarly system, and with it the institution of the university, persist in its current form and continue to retain its institutional core, i.e. the fostering and development of scholarship and the function of educating for scholarship? Or will the transformation of the university in a digital mediosphere, characterised by ubiquity in the practices of knowledge and by the externalisation of scholarship and research, take place in a way that leads to the loss of the university's special status? Will scholarship thus dissolve into a (knowledge) economy, as Weingart22 suggests? Comprehensive and critical research on the changing practices of knowledge and the concept of scholarship is necessary in order to be involved in the process of transformation of the university as a scholarly institution. The aim is to integrate the achievements of the university of the graphosphere with the structures of the Net, without establishing a concept of scholarship that is mainly oriented towards the paradigm of the "wisdom of the crowd" and thus towards a (knowledge) economy.
Education for Scholarship in a Digital Mediosphere

Let us return to the core function of universities as mentioned in the beginning: to educate for scholarship. Taking into account the challenge for scholarly research to produce increasingly applicable, viable knowledge, research processes will probably be more and more externalised. However, one main difference between non-university research institutes and the university remains: the unity of research and teaching. The ability for scholarship in a networked, dynamically changing world still – or all the more – requires the ability to critically evaluate the processes of knowledge production. To educate for scholarship, a university that continues to examine these processes critically and independently of external demands is necessary. Dirk Baecker here points to the difference between function and performance: the university will continue to fulfil its function of educating for scholarship and therefore relies on the unity of research and teaching. However, the performance it has always provided in the field of scholarly research will gradually become more externalised. For university teaching that aims at a reflective handling of knowledge and an understanding of the processes of knowledge production, the imparting of meta-knowledge about the acquisition and handling of knowledge, as well as of social-collaborative skills, is gaining in importance. Education for scholarship means, again with reference to Dirk Baecker, training a "functional handling of complexity."23
Perspectives

Comparable to the consequences of the introduction of letterpress printing, changing practices of knowledge are developing outside the university. Newly evolving didactical scenarios already reflect these changing practices of knowledge – but the university as a whole will also be challenged by this cultural change. It remains to be seen what the university's structural answer to the introduction of a digitally networked medium will look like.
Notes

1. Merzeau, Louise. 1998. Ceci ne tuera pas cela. In Les Cahiers de médiologie, 6: 27-39.
2. Hartmann, Frank. 2005. Und jetzt auch noch "Mediologie"? Ein Essay. http://www.medienphilosophie.net/mediologie.html. Accessed 30 July 2009.
3. Cf. also Meyer, Torsten. 2008. Transmission, Kommunikation, Formation. Mediologische Betrachtungen der Bildung des Menschen. In Mediologie als Methode, eds. Birgit Mersmann and Thomas Weber, 169-190. Berlin: Avinus.
4. Meyer-Wolters, Hartmut. 1998. Zur Situation und Aufgabe der (deutschen) Universität. Vierteljahresschrift für wissenschaftliche Pädagogik, no. 2/98: 181-196.
5. Weber, Wolfgang E. J. 2002. Geschichte der europäischen Universität. Stuttgart: Kohlhammer. p. 38. (German original: "[…] möglichst vollständig erfasst, von fremden Zusätzen gereinigt, nach seinen eigenen Maßstäben geordnet, sicher weitergegeben und an jedem diesseitigen Phänomen dargestellt und nachgewiesen werden musste." Translation: Christina Schwalbe)
6. Ibid., p. 38. (German original: "Jede wissenschaftliche Beweisführung musste also auf die aedequatio intellectus rei zielen; nicht auf exakte Empirie und empirische Ursache-Wirkungsnachweise kam es dabei an, sondern auf die Einordnung der Phänomene in die göttliche Ordnung und deren Beurteilung nach der christlichen Norm mittels Analogie und Vergleich." Translation: Christina Schwalbe)
7. Cf. Giesecke, Michael. 2002. Von den Mythen der Buchkultur zu den Visionen der Informationsgesellschaft: Trendforschungen zur kulturellen Medienökologie. Frankfurt am Main: Suhrkamp.
8. Baecker, Dirk. 2007. Studien zur nächsten Gesellschaft. Frankfurt a.M.: Suhrkamp. p. 17. (German original: "Zur zentralen Tugend der Gesellschaft wird die Kritik." Translation: Christina Schwalbe)
9. Cf. Giesecke, Michael. 2002. The Vanishing of the Central Point of View and the Renaissance of Diversity. http://www.michael-giesecke.de/giesecke/dokumente/258/index.html. Accessed 28 July 2009.
10. Cf. Debray, Régis. 2000. Transmitting Culture. New York: Columbia University Press. p. 104.
11. Cf. Giesecke, Michael. 2002. The Vanishing of the Central Point of View and the Renaissance of Diversity. http://www.michael-giesecke.de/giesecke/dokumente/258/index.html. Accessed 28 July 2009.
12. Cf. Gierl, Martin. 2004. Korrespondenzen, Disputationen, Zeitschriften. Wissensorganisation und die Entwicklung der gelehrten Medienrepublik zwischen 1670 und 1730. In Macht des Wissens. Die Entstehung der modernen Wissensgesellschaft, eds. Richard von Dülmen and Sina Rauschenbach, 417-438. Köln u.a.: Böhlau.
13. Baecker, Dirk. 2007. Studien zur nächsten Gesellschaft. Frankfurt a.M.: Suhrkamp. p. 140. (German original: "…sich […] mit einem eigenen Gedächtnis an der Kommunikation der Gesellschaft zu beteiligen." Translation: Christina Schwalbe)
14. Gibbons, Michael. 1994. The new production of knowledge: the dynamics of science and research in contemporary societies. London; Thousand Oaks, Calif.: SAGE Publications.
15. Baecker, Dirk. 2007. Studien zur nächsten Gesellschaft. Frankfurt a.M.: Suhrkamp. p. 140.
16. Davies, John. 2006. Semantic Web Technology: Trends and Research in Ontology-based Systems. Wiley & Sons.
17. Cf. Debray, Régis. 2000. Transmitting Culture. New York: Columbia University Press. p. 49.
18. Weingart, Peter. 2003. Wissenschaftssoziologie. Bielefeld: Transcript-Verlag. p. 134. (German original: "Die Kriterien der Beurteilung von Qualität und Relevanz des Wissens werden nicht mehr allein von der Wissenschaft selbst definiert, sondern auch von den Anwendern des Wissens aufgrund ihrer jeweiligen Relevanzkriterien und Nutzernerwartungen." Translation: Christina Schwalbe)
19. Cf. Debray, Régis. 2000. Transmitting Culture. New York: Columbia University Press. p. 49.
20. Surowiecki, James. 2007. The wisdom of crowds: why the many are smarter than the few. Reprinted. London [u.a.]: Abacus.
21. Gibbons, Michael. 1994. The new production of knowledge: the dynamics of science and research in contemporary societies. London; Thousand Oaks, California: SAGE Publications.
22. Cf. Weingart, Peter. 2003. Wissenschaftssoziologie. Bielefeld: Transcript-Verlag. p. 135.
23. Baecker, Dirk. 2007. Studien zur nächsten Gesellschaft. Frankfurt a.M.: Suhrkamp. p. 143. (German original: "[…] einen operativen Umgang mit Komplexität." Translation: Christina Schwalbe)
A Classroom 2.0 Experiment
The computing world around us is slowly transforming from being computing-oriented to user-oriented. Accordingly, our ways of communicating with computers are returning to being more natural, or ergonomic. At the same time, the way we use computers is also changing. The computer of the future is no longer a khaki-coloured box under the work desk, but rather an environment surrounding us, an environment that understands and obeys us. We need to find ways to bring this transformation in communications technology into practice in the field of learning. New interfaces need to be studied, and it is absolutely vital to find new ways to use computers. Although the world has been changing at an exhausting rate, learning environments have often remained stuck in the 20th century. More communicative, motivating and creative methods of teaching and learning should be created to complement, and in some cases even replace, the old ways of teaching. We need a 'classroom 2.0'. In our thesis, we have studied ubiquitous computing and its effect on learning. More specifically, we have planned and built a ubiquitous learning environment and looked into the numerous possibilities this 21st-century classroom has to offer. We have studied touch-oriented technologies and solutions and their relation to a ubiquitous learning environment. We have also described in detail the process of building a set of proof-of-concept solutions around a computer and various pieces of readily available hardware and software. This interactive multi-touch-screen computer system, built as a table, forms a central part of our learning environment. It is an intuitive interface for collaborative learning in a ubiquitous environment. Besides the table, the walls are also interactive, and the computer can be controlled with hand gestures. The room also allows online participation, such that a learner who is not physically present can take part in learning sessions via the Internet.
Introduction

Throughout human history, communication with other individuals or groups has been shaped by technological inventions. Today, the world of communications is changing at an astounding rate. A great deal of our daily communication is processed through some kind of medium, whether mobile phone, chat, Facebook or Twitter. Practically all media, from newspapers to photography and books, are in the process of being digitalised. The digitalisation of communication has revolutionised the way we interact with each other, the way we work and the way we spend our leisure time. This digital revolution has also had an enormous impact on how we teach and learn. In stark contrast to all previous generations, today's children are growing up as digital natives who in many cases learn to click the mouse before learning to ride a bike. Virtual worlds, computer games, the Internet and mobile media are all part of their natural habitat. Five years from now, even books will mostly be sold in a digital format. However, we are still physical beings with a need for real interaction. Real-time interaction is difficult to replace. Naturally, we need to think about the environments where these digital natives learn: we need to rethink their future classrooms.

In the future, the digital world will integrate with the physical one even more naturally than today. Our natural habitat is becoming smart. Our apartments will be equipped with technologies allowing us to measure our physical condition through various sensors. There are already, for example, smart homes for the elderly that monitor the occupant's heart rate and breathing and even recognise the sound of an elderly person hitting the floor, responding by calling an ambulance. Our clothing will also adapt, getting warmer or colder according to the weather and our body temperature. We will also have tools to add an extra layer of content on top of our reality. This will be possible through various forms of augmented reality technology, such as downloading extra content and projecting it on the surface of a lens. As we become embedded with technology by using wearable devices, so will our environment. The ubiquitous environment sees us, hears us and interacts with us and with the devices we are wearing and carrying.

At the beginning of the era of eLearning, communication was mostly text-based interaction, mainly because of the lack of bandwidth. As broadband has arrived, other senses have come along – sound, image and live video – and the amount of interaction has grown. Computing power has also increased, making the computer sense and understand us better. Traditional online learning has several benefits: the possibility to elaborate answers, and independence of time and space. For example, it is easy to ignore time-zone differences when conversation manifests as written replies in a web-based learning environment. However, even when there is genuine interaction between conversation participants, the interaction is still far from that of real-time, physical, face-to-face dialogue, where ideas fly wildly, collide, mix and mutate into something else. The result can be something more than the sum of the variables. In certain types of collaborative and creative projects the benefits are obvious. By creating a ubiquitous classroom, then, we have sought to harness these benefits of direct interaction, learning by doing, and collaborative, social dimensions.

We believe that in the future, augmented reality and ubiquitous computing solutions will gain a stronger foothold in communications research. As digitalisation has revolutionised communication and the media, so will ubiquitous computing. For too long we have adapted to technology and paid the price, sitting by the computer in a cramped space with a non-ergonomic workspace and a 40-year-old invention called the mouse-and-keyboard interface. Now we are at a crossroads: this time it is technology's turn to adapt to us. Computers will become blended into the background, ubiquitous. As computers learn to listen to us and see us, conversational interfaces will become much more common and sophisticated. As voice recognition technology develops, the HAL 9000 of 2001: A Space Odyssey will become more than science fiction. The latest trends in computing are intuitive interfaces and multi-touch capability. The iPhone has shown us that human friendliness is for the masses, too. If we look around we see many similar projects emerging at the same time, heading in the same ubiquitous direction.

The core of teaching and learning involves communication, interaction and media. Thus, the field of education will naturally be revolutionised, too. New, effective and inspiring methods of spreading, shaping and creating information should be harnessed to upgrade teaching and learning to a 21st-century level. In this article we explore the phenomenon of computers turning 'human friendly' and how that could be used in education. We have tried to meet this challenge by presenting a versatile platform for teaching and learning. We have taken advantage of different methods of communication: visual, audio, (multi)touch and body language. The keywords in our project are dialogue, presence, shared space and intuitive interactivity. What are the possibilities of telepresence in learning? How can today's innovations in the field of interaction and interface design be taken advantage of in learning? What would a future classroom look like when packed with interactive, ambient technologies and made accessible from a distance? What kind of classroom could be accessed from different countries, where people could collaborate in real time in a room, involving intuitive interaction with transparent computers – computers you cannot see but which obey your orders? Could this be done from scratch, with open source code and online tutorials? In our thesis we have taken the vision of 'classroom 2.0' closer to reality. We have built a prototype of the room and, having done so, are convinced that these technologies will benefit learning and will be the way of the future.
Collaborate and Learn

It is common knowledge that lecturing and other old-fashioned methods of teaching and learning are often inefficient. The old pyramid structure of teaching discourages the free flow of information, ideas and creativity. For a long time learning has been moving away from one-way instruction towards the construction and discovery of knowledge. Current pedagogic research has come to support constructive and collaborative learning theories, in which the learner is an active and responsible participant in the learning process. We see the learner as an active participant who constructs knowledge by comparing it with previous experiences and reflecting upon them. The truth is always subjective and created from the reflections of the individual. Simultaneously participating in a learning session in a joint physical space benefits collaboration and advances learning, something that is missing in traditional web browser-based online learning.

Collaborative learning facilitates constructive learning. Constructive learning theory sees learning as contextual and thus emphasises 'situated learning'. This means learning that takes place in the same context where the learned skills are taken into use.1 Thus learning is not only the transfer of abstract and objective information from one person to another but rather a social process where knowledge is constructed together. According to Mohamed Ally, working with other learners gives learners real-life experience of working in a group and allows them to use their metacognitive skills. Metacognition means awareness of one's own cognitive actions: thoughts, learning and understanding.2 Learners will also be able to use the strengths of other learners, and to learn from others. Knowledge built together while simultaneously experimenting hands-on greatly enhances learning.

Besides constructive theories, this idea is also supported by experiential learning theories as well as by the concept of social learning. According to social learning theory, behaviour is influenced by both social and psychological factors. Social learning theory takes advantage of both behaviourist and cognitivist views, claiming that people learn well by observing others. This observation happens in a social environment, and the activity of the participant is essential for learning.3 The learner need not learn everything from scratch; rather, he can benefit from what others have learned before him by copying others. The effect of the model depends on various factors: for example, the ages of the learners and the models, or the personality or authority of the model. The learner's insecurity leads him to look to peers or to the teacher for support. The theory of experiential learning claims that learning happens by personally experiencing and by reflecting upon that experience. This happens by analysing personal experience, conceptualising it, and then applying the knowledge gained in action.4 Experience in itself is not enough: it must be processed consciously. Experiential learning thereby includes a cognitive perspective, which thus brings it close to a constructive learning model. To further learning, humans develop symbolic processes to structure reality and to control behaviour and learning. With these symbolic processes we can help the learner to apply and adapt this knowledge to various real-life situations. The common problem is that the symbolic processes do not always apply outside school. They must therefore be up to date, and they must be prepared to resemble reality as much as possible: using simulations of real-life situations and experiential learning methods can help to overcome this problem.

It goes without saying that if you want to learn collaboratively, it is better to be together in real time than not. The possibilities and benefits of shared physical space and time in a high-tech environment have not been researched as much as virtual meetings in designing the learning of the future. During the last few years e-learning has tended to advance towards virtual reality solutions such as Second Life learning modules. The virtual reality approach is actually rather the opposite of ubiquitous computing, since it attempts to make us go inside the computer, while ubiquitous computing wants to add to our reality by bringing extra content to it. We believe the room we have built benefits learning by offering excellent tools to collaborate, create, communicate and interact.
The Return of the Campfire Storyteller Obviously people want to find ways to interact with each other in as natural a way as possible. Natural usually means that we can use the tools we are born with and tools we learn to use in the socialisation process of childhood. These include speaking, listening, touching, as well as culturally interpreting tones of voice, facial expressions and postures. During our cultural evolution we have moved further and further away from these natural, inbuilt communication technologies by adapting symbolic communications such as writing, painting or the Internet. But now, according to Marshall McLuhan, we are leaving one specific time behind us, the typographical era, and entering a time of electricity. We are moving from the time of centralised and linear mass media to a time of decentralised and hypertextual multimedia. McLuhan saw a need to be free from narrow typographical media in order to move towards media where all our senses are exploited.5 In the 1980’s, Virtual Reality guru Jaron Lanier went even further in his visions. He had similar views to McLuhan, but concerning Virtual Reality. Lanier hoped Virtual Reality would lead to post-symbolic communication without typography or language. In the end he hoped we could transfer thoughts directly to each other and communicate through gestures, body language and facial expressions, even telepathy.6 We see virtual reality only as one path towards this direction, mixed reality and ubiquitous computing scenarios being more adaptable to our physical nature. The vision of telepathic communication can be criticised as mere science fiction, 193
but on the other hand an objectifying consciousness does offer a tempting 'end of the line' scenario. There have already been some breakthroughs in the field of mind control. One example comes from the University of Pittsburgh: in the experiment, a monkey controls a robot arm using only its mind. The scientists have developed technology that transforms brainwaves into a signal that a computer, and thus the robot arm, understands. The experiment's activities have thus far been quite simple and primitive, like feeding and reaching, but the general direction points to a future in which telepathy could be a daily routine for the people of tomorrow.7 It is obvious that usable telepathy remains far off, but the mere possibility offers a direction to follow if the aim is faster and more direct communication and interaction.

However, telepathic communication without language, as Lanier suggests, sounds even further ahead, since we form our thoughts through language. Language is learned as part of the culturalisation and socialisation process of the human child and is thus deeply integrated into what makes us human. Communication without language, or even hive-mind thinking, seems quite far-fetched, not to mention slightly frightening. Lev Manovich criticised this vision: "the fantasy of objectifying and augmenting consciousness, extending the powers of reason, goes hand in hand with the desire to see in technology a return to the primitive happy age of pre-language, pre-misunderstanding. Locked in virtual reality caves, with language taken away, we will communicate through gestures, body movements, and grimaces, like our primitive ancestors... The recurrent claims that new media technologies externalize and objectify reasoning, and that they can be used to augment or control it, are based on the assumption of the isomorphism of mental representations and operations with external visual effects such as dissolves, composite images, and edited sequences. This assumption is shared not just by modern media inventors, artists
and critics but also by modern psychologists. Modern psychological theories of the mind, from Freud to cognitive psychology, repeatedly equate mental processes with external, technologically generated visual forms."8 However, before we telepathically control our environment and communicate with each other, we will have to advance in smaller and more pragmatic steps. We must learn more intuitive and natural methods to control and communicate with the tools and computers we already have. We need the computer to understand us and communicate with us on our terms.
From Multimedia to Ambient Intelligence

We have five basic senses: touch, smell, taste, vision and hearing. Today we use only a fraction of them while communicating with or through computers. There are different types of content – text, sound, still images, video, animation and interactivity – which combined together are called multimedia. Multimedia has been seen as a way to broaden our ability to absorb and transmit information. The most common interface items for creating multimedia on the computer have been the monitor, keyboard and mouse. These tools have been around since the 1960's; the first computer mouse was created by Douglas Engelbart in 1963,9 and the device still remains rather similar in its basic functions after more than four decades.

Multimedia offers new methods for viewing the world and interacting with it. Technical innovations have always made communication faster, wider, more effective or more versatile. This trend becomes natural when people learn to take advantage of all of their senses. However, multimedia today is a backward approach, since it ties us to our machines. The next natural evolutionary step for multimedia is to evolve out of the computer. With the help of the new technologies of ubiquitous computing and augmented and virtual reality, we are taking steps back to more holistic ways of storytelling and interaction, an era in which our machines adapt to us instead of us adapting to them. Today, machines already help us communicate and learn without our having to learn to use the media technology first. In the future the whole of our physical being can be used to interact with others, filtered through technology, making the machine more transparent.
About Ambient Intelligence and Ubiquitous Computing

What we call traditional multimedia is only a first step towards taking advantage of senses other than sight. Communicating with the computer through gestures and voice is what we need. The ubiquitous computing and intuitive user interfaces of this millennium can be seen as a great evolutionary leap in the field of human-computer interaction, a long-awaited advance beyond the desktop paradigm. Ubiquitous computing, also known as ambient intelligence, refers to a model of human-
computer interaction in which the interaction between the two has been integrated into everyday objects and activities. The idea is that the computing works in the background, constantly following us, ready to serve and answer our needs, hidden in architecture, furniture and clothing – in our environment. In the end we become accustomed to the intelligent environment and take it for granted, as we do with electricity. In the words of the American scientist and father of ubiquitous computing, Mark Weiser: "The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it."10
Weiser suggested that there would be three basic forms of ubiquitous system devices: tabs, pads and boards.

- Tabs are wearable, centimetre-sized devices (for example, an ID tag that tells the computer where the user is, granting access to permitted rooms).
- Pads are hand-held, decimetre-sized devices (for example, a paper-like surface on which to read news, email or books).
- Boards are metre-sized interactive display devices (for example, for collaborative work or for playing board games).
The term Ambient Intelligence was defined by the Advisory Group to the European Community's Information Society Technology Programme (ISTAG) as "the convergence of ubiquitous computing, ubiquitous communication, and interfaces adapting to the user. Humans will be surrounded all the time wherever they are by unobtrusive, interconnected intelligent objects. Furniture, vehicles, clothes, roads, even paint will be equipped with Ambient Intelligence.
The concept of Ambient Intelligence provides a vision of the information society, which is characterised by high user-friendliness and individualised support for human interaction."11 Ubiquitous computing can therefore be divided into three parts:

1. ubiquitous computing
2. ubiquitous communication
3. user-adaptive interfaces.
Ubiquitous computing refers to an environment filled with invisible, networked computers serving people in the background and freeing them from many mundane tasks, from automatically turning lights on and off to changing the temperature according to the user's body temperature. Ubiquitous communication refers to the connectivity of those computers surrounding us. The possibility of all the surrounding computers interacting offers powerful new interfaces for the 'mother computer' to collect data and interact with us. Wireless technology, whether Bluetooth, WLAN or Radio Frequency Identification (RFID), will obviously be used to create these networks. User-adaptive interfaces refer to more intuitive, human-friendly interfaces such as our interactive table or gesture interface.

User-friendly interfaces with easy access and intuitive control are one of the major challenges, and they offer interesting possibilities. In our project we have decided to focus on projecting traditional 2D interfaces onto new surfaces that are controlled with fingers and a light pen. In the future, however, whole new methods of interacting will open up, from gesture interfaces to audio interfaces. Combined with ubiquitous computing and communication – where the computer is well aware of our location and can obey our gesture, touch or audio inputs – these user-adaptive interfaces can make our lives considerably easier. At the same time, the surveillance possibilities, not to mention the marketing possibilities, raise questions about security, intimacy and privacy.
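To make this three-part division more concrete, here is a minimal sketch of our own (not part of the project described in this chapter; all class, service and event names are hypothetical) showing how invisible background services, a shared communication channel and a user-adaptive rule might fit together:

```python
# A toy sketch of the three layers named above. All names are invented
# for illustration; a real deployment would sit on RFID readers,
# Bluetooth/WLAN sensors and a messaging protocol such as MQTT.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Event:
    kind: str    # e.g. "presence" or "body_temperature"
    room: str
    value: float

class MessageBus:
    """Ubiquitous communication: sensors and services share one channel."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # event kind -> handlers

    def subscribe(self, kind, handler):
        self.subscribers[kind].append(handler)

    def publish(self, event):
        for handler in self.subscribers[event.kind]:
            handler(event)

def light_service(event):
    """Ubiquitous computing: an invisible service for a mundane task."""
    print(f"lights {'on' if event.value > 0 else 'off'} in {event.room}")

def climate_service(event):
    """User-adaptive behaviour: room temperature follows body heat."""
    target = 21.0 + 0.5 * (36.8 - event.value)   # toy adaptation rule
    print(f"setting {event.room} to {target:.1f} C")

bus = MessageBus()
bus.subscribe("presence", light_service)
bus.subscribe("body_temperature", climate_service)

# A wearable badge (Weiser's "tab") reports the user entering a room.
bus.publish(Event("presence", room="studio", value=1))
bus.publish(Event("body_temperature", room="studio", value=37.2))
```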
Teleaction in a Ubiquitous Classroom

Our study consists of three parts, the first two being learning and ubiquitous computing. The third concerns telepresence. We want our ubiquitous classroom to be accessible online, not only for observing but also for participating and collaborating with others. As previously mentioned, real-time, face-to-face interaction facilitates the forming of knowledge. We thus intend to add the face-to-face factor to online learning too. The American psychologist and educational reformer John Dewey referred to interaction as the defining component of the educational process, which occurs when the student transforms the inert information passed to them from another
and constructs it into knowledge with personal application and value.12 Getting students together from one environment into another to collaborate and act together via technology can be called telepresence. One area we are exploring in our ubiquitous classroom is participation in the collaborative and interactive experience from worlds other than physical reality. What we need is telepresence or, as media guru Lev Manovich puts it, "teleaction": the ability to see and act at a distance.13 In this definition, telepresence encompasses two different situations: being "present" in a synthetic computer-generated environment (commonly referred to as virtual reality or VR) and being "present" in a real, remote physical location via a live video image or other projection.

Virtual reality provides the subject with the illusion of being present in a simulated world, and it adds a new capability: it allows the subject to actively change that world. In other words, the subject is given control over an artificial reality. Telepresence allows the subject to control not just the simulation but reality itself: it provides the ability to remotely manipulate physical reality in real time through its image, its representation. The representation of the user is transmitted, in real time, to another location, where it can act on the subject's behalf. In our project, the aim is to achieve a better sense of presence for those participating from a distance. Projection of the participant combined with shared interaction provides a strong basis for a sense of 'being there', both for the online participant and for those physically present.
Classroom 2.0

We have planned and devised a learning environment to meet the requirements outlined above. This learning environment allows people from different places and worlds to participate synchronously in a teaching and learning session. People can be digitally or physically present and collaborate in the same space, no matter how far away they are from each other in real life. Our purpose has been to construct an environment that is as easy, flexible and intuitive to use as possible. We want the intuitiveness and flexibility of ubiquitous computing to let us 'skip the computer class' and gain access to the content first and foremost. We are convinced this will enhance collaborative learning, interaction, creativity and social skills far more than traditional ways of learning. We have focused on creating a teaching and learning environment in which everyone is present in a shared space. Our goal has been to let the user interact and collaborate through the representational technologies of the future. We have aimed for a democratic, open approach to teaching, creating the environment as a platform for dialogue and collaboration.

There are four ways to participate in a learning session. First, the person may be physically present. This allows the person to take full advantage of the ubiquitous
classroom. Second, the person may participate in the session from the Second Life virtual world through avatar projection, by means of either a fiducial marker projection or a fogscreen projection. The third way to take part is via a webcam video projection. The fourth way is via an 'old-fashioned' web browser interface such as Adobe Breeze or any other shared desktop application. The second, third and fourth ways allow the learner to participate at a distance, e.g. from a home or office computer, while the first offers the greatest benefits: we have created an interface that is user friendly – natural and ergonomic. The demands of human physiology have been our primary concern. The setup and the way the room functions encourage interaction, collaboration, dialogue and creativity. Technology serves the user, not the other way round.

However, at this stage the level of interaction of a telepresent user is still not equal to that of a physically present user. Distant participants do not currently have all the possibilities of collaboration and interaction: they cannot, for example, experience the full space or facial expressions as well, since their main focus is on the computer 'screen'. However, it is possible to project the Second Life space into our classroom space, or to project the classroom into Second Life using 3D scanners.
The Engine Room – The Technology

The room we have built consists of three main technologies: a teacher's screen with
interactivity; an interactive table for students; and, third, the possibility to take part in and collaborate on learning sessions from a distance. The room has at least one projected screen, which is mainly manipulated by the teacher. There can be other projected screens as well, for example a student's or a student group's screen. The lecturer's screen and the computer behind it are mainly controlled by hand gestures and a light pen. The projection is made interactive with the use of a Nintendo Wii remote control ("wiimote"), which senses hand movements and interprets them, according to the programming, as commands. The person manipulating the lecturer's screen has to wear special gloves we have designed so that the wiimote can sense the reflectors on the fingertips.

Besides the lecturer's screen, the students use an interactive multi-touch table that can be manipulated by several people at the same time, even online. The surface of the table is a computer screen with a special interface that is sensitive to touch and is controlled by the fingers. The touchscreen technology supports multi-touch, so a map on the surface can, for example, be zoomed by pulling two fingers away from each other (a sketch of this computation follows below), and all digits can be used for finger drawing. Thirdly, a student can collaborate with others through a web interface, the shared screen being either the lecturer's screen or the interactive table's screen. The distant participant can be present through Second Life, a web browser or video projection. The online participant has three possibilities: he can be the teacher, possibly lecturing several teams at the same time; he can be a learner collaborating with others; or he can be a passive, invisible observer.
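As an illustration of the pinch-to-zoom behaviour just described, here is a minimal sketch of our own (not the project's actual code) that derives a zoom factor from the changing distance between two tracked touch points:

```python
import math

def distance(p, q):
    """Euclidean distance between two touch points (x, y)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pinch_zoom(prev, curr, scale):
    """Update the map scale from two consecutive two-finger frames.

    prev and curr each hold the positions of the same two fingers,
    e.g. ((x1, y1), (x2, y2)). Pulling the fingers apart enlarges
    the map; pinching them together shrinks it.
    """
    d_prev = distance(*prev)
    d_curr = distance(*curr)
    if d_prev == 0:            # degenerate frame: fingers coincide
        return scale
    return scale * (d_curr / d_prev)

# Two fingers move from 100 px apart to 150 px apart: 1.5x zoom.
scale = pinch_zoom(((0, 0), (100, 0)), ((0, 0), (150, 0)), scale=1.0)
print(f"{scale:.2f}")  # 1.50
```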
The Teacher

The teacher has three main tools: an interactive video projection, a light pen and hand gestures. The light pen allows the teacher to interact with the computer and draw on any surface onto which the data is projected. All walls can be turned into interactive screens – and not only walls: the projection can be cast so that even furniture becomes interactive. The data gloves we have designed carry reflectors so that the teacher can manipulate the screen and give commands to the computer with hand gestures. He can write on the wall by typing in the air or move to the next PowerPoint slide with a wave of his hand.
The Learner

The learner has the benefit of interacting with the computer with his fingers instead of a mouse or keyboard – like an iPhone, but with the bonus that up to ten people can touch the screen at the same time and the computer reacts to all of them. A group of students can, for example, simultaneously create a collaborative work of art using their fingers.
The Online Learner

In the first version, the online learner has a standard web interface through a shared desktop application such as Adobe Breeze. He cannot use multi-touch, but he can interact with a mouse and keyboard, and he can take advantage of webcam video and audio to further his presence and learning. The second version of the online learner uses a virtual world such as Second Life. A 3D scanner can be used to scan the room, and a virtual representation of the room can then be uploaded to Second Life. Fogscreen or another hologram technology can also be used to project a Second Life user to the other users.
Multi-Touch Table

The multi-touch table has been fitted out and tuned to be highly usable. Its technology consists of a computer unit, a video projector, a modified web camera and an FTIR (Frustrated Total Internal Reflection) type of infrared light-based touch-recognition system. The projector throws the displayed image onto the touchscreen. The touchscreen is actually an acrylic plate surrounded by LEDs emitting infrared light, effectively filling the acrylic plate with it. When the user touches the surface of the table, infrared light scatters down from the touched point and is recognised as a "blob" by the web camera, driven by special software. The software recognises and tracks the blobs and then translates them into coordinate data for further use. The computer can be fully operated through the multi-touch recognition interface via the mouse driver software, which translates the recognised touches into Windows HID signalling – essentially, the user's touches are recognised by the operating system just as it would recognise a mouse. In addition to the mouse driver interface, touch-enabled applications can be used directly through the TUIO/OSC interfaces.
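To show what the TUIO/OSC route mentioned above can look like in practice, here is a minimal sketch of a client listening for TUIO 2D-cursor ("blob") messages. It assumes the python-osc package and the conventional TUIO port 3333, and is our illustration rather than the project's software:

```python
# Minimal TUIO 2D-cursor listener: prints each tracked touch ("blob")
# as normalised (x, y) coordinates. Assumes: pip install python-osc,
# and a TUIO tracker (e.g. the table's blob-tracking software)
# broadcasting on the conventional UDP port 3333.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_cursor(address, *args):
    # TUIO multiplexes several commands on /tuio/2Dcur; the "set"
    # command carries session id, x, y (normalised 0..1), velocities.
    if args and args[0] == "set":
        session_id, x, y = args[1], args[2], args[3]
        print(f"blob {session_id}: x={x:.3f} y={y:.3f}")

dispatcher = Dispatcher()
dispatcher.map("/tuio/2Dcur", on_cursor)

server = BlockingOSCUDPServer(("0.0.0.0", 3333), dispatcher)
server.serve_forever()  # Ctrl-C to stop
```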
Hand Gesture Interface

The Minority Report-style interface has been implemented using a wiimote game console controller, connected over Bluetooth. As the wiimote has been configured to track points of infrared light, the set-up is powered by an array of infrared LEDs that provide illumination for gesture recognition. The LED array shines infrared light, invisible to the human eye but detectable by the wiimote. As the wiimote and the LED array are set up coaxially (the projected direction of the infrared light and the wiimote camera's field of view are on the same axis), the wiimote can "see" and track every reflection of the infrared
light, as long as it is bright enough. The user needs to wear special gloves to use the interface. In this project the gloves were self-made, fashioned from basic cotton gardening gloves by attaching pieces of reflector tape to the tips of the index fingers and one thumb. The gloves were black to minimise accidental reflections.
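As a sketch of how the glove reflections might be read out in software, the following assumes the Linux cwiid library – one of several wiimote libraries; the chapter does not say which software the project actually used – and simply prints the infrared points the wiimote camera sees:

```python
# Reads the wiimote's IR camera and prints each visible infrared point
# (here: the glove's reflective fingertips lit by the LED array).
# Assumes Linux with the cwiid library; pair by holding buttons 1+2.
import time
import cwiid

print("Press 1+2 on the wiimote to pair...")
wiimote = cwiid.Wiimote()          # blocks until the wiimote connects
wiimote.rpt_mode = cwiid.RPT_IR    # request infrared tracking reports

while True:
    sources = wiimote.state.get('ir_src') or []
    for i, src in enumerate(sources):
        if src is not None:        # the camera tracks up to four points
            x, y = src['pos']      # raw camera coordinates (1024 x 768)
            print(f"point {i}: x={x} y={y}")
    time.sleep(0.05)               # ~20 Hz polling; gesture recognition
                                   # would map these point trajectories
                                   # to commands
```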
The Whiteboard

The whiteboard functionality also relies on the wiimote, here used to recognise infrared light emitted by a special pen designed for this purpose. The pen has two functions: the infrared LED at its tip can be turned on either by pressing the button on the pen or by pushing the pen against the material onto which the whiteboard is projected. Using the purpose-built software Smoothboard, the whiteboard functions as a complete control set for the computer. Shining the infrared light once moves the cursor to that point; a second flash clicks it. In addition to mouse emulation, the software has built-in macros to ease the use of programs, and more can be recorded at will. Generally, the whiteboard function is mainly used to extend the interface and the work environment beyond the tabletop. While the table is a good working environment, it is somewhat limited in size: when the workgroup exceeds five concurrent on-location users, the table will likely become crowded, and so the possibility of extending the environment becomes an advantage, if not a necessity.
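One step such software has to solve is calibration: mapping the wiimote camera's view of the pen onto screen coordinates. A common approach to this (and, we assume, roughly what tools like Smoothboard do internally; this sketch is ours, not theirs) is a four-point perspective transform:

```python
# Four-point calibration: the user touches the pen to the four corners
# of the projected image; we solve for the 3x3 homography H that maps
# wiimote camera coordinates to screen pixels, then apply it per frame.
import numpy as np

def solve_homography(camera_pts, screen_pts):
    """Solve H (up to scale) from four point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(camera_pts, screen_pts):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.array(rows, dtype=float)
    # The homography is the null vector of A: last row of V^T in the SVD.
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 3)

def camera_to_screen(H, x, y):
    """Map one camera point to screen coordinates (homogeneous divide)."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# Calibration: pen seen at these camera points while touching the
# projected corners of a 1280x800 screen (example numbers).
camera_corners = [(200, 150), (820, 160), (830, 610), (190, 600)]
screen_corners = [(0, 0), (1280, 0), (1280, 800), (0, 800)]
H = solve_homography(camera_corners, screen_corners)

print(camera_to_screen(H, 510, 380))  # pen mid-board -> near screen centre
```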
A Sample Scenario – Teaching to Build a 3D Model of a Pen

In this example scenario the teacher wears data gloves and, using a light pen, can access and control the projection of the computer screen on the walls. He first demonstrates how to model a pen in 3D, manipulating the screen by hand gestures and the light pen. He also directs students to a knowledge pool of the school, for example a wiki, to which students can also contribute. While the teacher is lecturing and showing how to model a pen, the learners sit at their personal computers making notes, perhaps trying things out themselves, and getting accustomed to the interface. When the teacher has finished his part, the students are given an assignment and move to the interactive tables, each holding up to four students. The assignment is to build a scenario in which the new knowledge can be applied and exploited; one example might be to model a simple writing desk with several pens lying around. The students negotiate, collaborate and create the scenario by exploring and experimenting simultaneously on the multi-touch table. By the end of the day, the scenarios created are evaluated by all the students and the teacher, so that everyone learns from others' mistakes and successes.
In this example students learn social skills, creativity, teamwork and, of course, the subject matter itself. They create and build their own knowledge by experiencing it directly and subsequently reflecting upon it.
Notes
1 Ally, M. (2004) Theory and Practice of Online Learning, pp. 18-19.
2 Tynjälä, P. (1999) Oppiminen tiedon rakentamisena.
3 Lefrancois, G.R. (1996) The Lifespan (5th ed.). Belmont, CA: Wadsworth.
4 Ruohotie, P. (2000) Oppiminen ja ammatillinen kasvu. Helsinki: WSOY, p. 137.
5 McLuhan, M. (1964) Understanding Media: The Extensions of Man.
6 Druckrey, T. (1991) "Revenge of the Nerds: An Interview with Jaron Lanier", Afterimage (May 1991).
7 http://motorlab.neurobio.pitt.edu/research.php plus a video: http://motorlab.neurobio.pitt.edu/videos/robosapienDiscovery.wmv
8 Manovich, L. (2001) The Language of New Media. Cambridge, MA: MIT Press, p. 72.
9 http://www.macworld.com/article/137400/2008/12/mouse40.html
10 Weiser, M. (1991) "The Computer for the 21st Century". Scientific American Special Issue on Communications, Computers, and Networks, September 1991.
11 Gupta, M. (2003) "Ambient Intelligence – unobtrusive technology for the information society". Pressbox.co.uk, June 17. http://www.pressbox.co.uk/Detailed/7625.html
12 Dewey, J. (1997) Democracy and Education: An Introduction to the Philosophy of Education. New York, NY: The Free Press.
13 Manovich, L. (2001) The Language of New Media. Cambridge, MA: MIT Press, pp. 153-155.
Communication Techniques, Practices and Strategies of Generation “Web n+1”
It often happens that once a new cultural technique emerges from its nascent state and unfolds its full potential across a broad field of application, the distinctiveness of individual styles, techniques, practices and expressions becomes implicitly known, yet is not explicitly characterised or explored in terms of its socio-cultural impact. We can also say that once a cultural technique has become canonical, in the sense of being acknowledged and accredited as a socio-culturally embodied common good, it not only constitutes a society at large but also greatly spurs prosperity and the material, spiritual and intellectual growth of states, nations and humanity. However, as long as basic cultural techniques such as reading, writing and math are still not fully developed (an estimated four percent of people in Europe are illiterate), it appears slightly pretentious to focus solely on youths' digital literacy in the light of aging populations and increasing illiteracy.

At the same time, and hand in hand with the hitherto unfulfilled promises of e-policies across Europe (for example Lifelong Learning), mobile connectivity and ubiquitous access to open educational resources could potentially trigger additional processes of community engagement, particularly in informal learning settings. In the case of illiterate people this would mean taking advantage of the autodidactic appropriation of mobile phones and their communicational capacities, paired with the educational challenge of developing playful ways of audiovisual learning in either self-study or collaborative study mode. Since illiterate people are partly or fully integrated into working life, albeit without letting the outside world know about their shortcoming, they must develop several tactics, strategies and skills to avoid public exposure. This example, despite its sore social note, exemplifies the almost seamless coexistence of two parallel worlds of human interaction, the oral and the symbolic written forms of communication, in this case through the use of mobile telephony.

It is thus no surprise that mobile communication devices have, in their global coverage, far exceeded any other kind of media ever before in use. Does this plethora of oral communication transmitted and stored as digital information suggest the demise of writing, or even disrupt it? Or are we simply in the midst of a radical transformation through the creation of several forms of meta-languages, including new verbal expressions – abbreviations in combination with iconic and text-based alterations of keywords, tags and globally decipherable vocabularies?
Lifelong Learning – An Unfinished Project of Modernity?

The concept of lifelong learning pursues the idea of the "unfinished project of modernity" (cf. Habermas, 1980), which, in short, has led to a postmodern notion of endless interpretative "freeplay" in which there is no longer any place for Enlightenment values such as truth, reason and critique. This incompleteness bears a striking analogy to the permanent software crisis, now in its fourth decade. That particular crisis is less ecologically determined, as software technology creates abstract objects that cannot be put into ecological categories. In software development, rationalisation is a key instrument, not only for the safety of knowledge but also for guaranteeing the efficiency of intelligent data processes. Its rapid development and diffusion thus repeated the process of socio-economic modernisation in a condensed course of events, and it seems that no other technology has attained a similar status as the synonym of cultural modernity. Software development is also a technique of rational control, and it reflects modernity's desire for the rational discernment and controllability of natural, intellectual and social processes. It emerged from a science that relies on the strictest form of rationality: mathematical provability. René Descartes, in his
early tractatus Regulae (1628/29), declared mathematics to be the only reliable basis for science. Like no other philosopher, he anticipated the methodology, thinking and functioning of this discipline. Yet what differentiates mathematical formal logic from software is that scientists apply mathematical models to explore and understand the laws of nature, whereas software reconstructs "reality". Evidently, this has roots in a deep-seated misconception, because software engineering ignores the fact that the role of mathematics in programming is not that of a "world model" but a formal description of meanings and intentions. Because such descriptions cannot be verified, like models, through experiments, but can only be accepted or rejected through communication, an insurmountable verification gap opens up from a mathematical and scientific perspective. This gap is, from the position of mathematical rationality, so severe that these descriptions of meaning and intention will either be displaced by new programming techniques or formalisation aids or, in the absence of competences, simply be expelled from the main problem areas. The consequences are system-immanent "bugs" in operating systems, which mostly occur through the adoption of source code from older operating systems and its implementation in new programming environments.

In short, the accelerating speed of software and its driving players, whether proprietary or open-source-based, has transmuted our society into "a live (live-coverage) society that has no future and no past, since it has no extension and no duration, a society intensely present here and there at once – in other words, telepresent to the whole world."1 Undoubtedly this parallels the shift from industrial to postindustrial societies, a shift that Beck (1986) so providently elaborated in his book Risikogesellschaft, in which he stated that "Individualisation means market dependency throughout all dimensions of life style…" and, further, that "the institutional influences on each individual biography mean that regulations in the educational and occupational system along with social safeguards are directly intertwined with people's phases of life" (ibid. 221). In Beck's terms, individualisation means institutionalisation, institutional character, and thus the potential of politics to design our CVs and our lives.

More than twenty years later, it appears that the predicaments Beck so thoroughly elaborated and forecast for a society just about to undergo a tremendous shift from an industrial into a postindustrial, service-based knowledge society have entirely come into existence, and that what Anthony Giddens (1996) attributed predominantly to decontextualised, abstract forms of societal institutions has gained bitter actuality with the collapse of today's global economy. In this regard, new questions will necessarily emerge about the size and scope of the collateral damage, which in turn must lead to readjustments in the political and socio-economic architecture of the near future. One could now argue that the concept of lifelong learning poses an even stronger argument for dealing with the many imponderabilities
(to come); yet if several service-based and industrial sectors break down, and unemployment and social tensions increase, it remains a rather speculative and vague agenda to fulfill its promises in the face of destabilising markets and faltering economic prosperity. The individual disposition to react flexibly and adaptively to accelerating changes in a multi-dimensional way, from family to work, in part driven by strong and often remote global forces, has become a kind of buzzword for justifying neo-liberal policy. Tied to this is an increasing diversity and fragmentation of experiences and institutions, as well as a greater willingness to tolerate deregulating measures and the direct or indirect influence of speculative market instruments – instruments which, in fact, eradicated the social market economy. This has led to changing identities and the dislocation of private and professional lives, the loss of social bonds, and the fragmentation and vulnerability of loyalties and aspirations.

On the other hand, while much greater emphasis has been placed upon consumption and its pleasures, this has also created a new participatory, inventive and creative networking culture in which the consumer acts as producer and vice versa. More choices in life have necessarily created new demands, which in turn have unleashed a lifestyle economy that prioritises the fulfillment of almost every thinkable individual desire, not least triggered by the overarching delusiveness and pluralism of popular culture. High-speed data connections, ubiquitous computing, and the externalisation of information and knowledge resources into globally connected databases have created a gigantic accumulation of digital information and knowledge: in the nineteenth century it took about fifty years to double the world's knowledge, whereas today the base of knowledge doubles in less than one year. Apparently ICT permeates our lives like no other technology before; yet it has also opened up an ever-increasing gap between the "Haves" and the "Have-Nots" (cf. the YouTube video Globalization: The Haves and Have Nots) in terms of access to these technologies on a global scale, along with the ambiguity of lustful connectivity and an uncanny suspicion of control and surveillance. Most pertinently, the widening of key social divisions will be felt even more strongly in the areas of income, employment, housing, health and education, including access to information; evidence of growing social exclusion, despair and hopelessness resulting from the impact of unemployment continues to proliferate.

When the European version of the LLP concept was first introduced at the Bologna follow-up meeting in Prague in 2001, essential elements of the European Higher Education Area were outlined. Firstly, Lifelong Learning should comprise all phases of learning, from pre-school to post-retirement, and should also cover the broad spectrum of formal, non-formal and informal learning. Secondly, the implementation of this idea would be facilitated by bringing together education and vocational education in central aspects of different policies such as education, youth, employment and research. Thirdly, a lifelong learning framework would be
needed to enable each individual to choose among learning environments, jobs, regions and countries in order to improve knowledge, skills and competencies and to apply them appropriately. Another important aspect outlined relates to a coherent system of credits that would allow the evaluation and recognition of diplomas and certificates acquired at school, at university, and in the framework of work-based learning. In this way, the transfer of qualifications between schools, universities and the world of work could be ensured....
Vulnerabilities in a Future Learning-Intensive Society

From the perspective of creating an operational lifelong learning framework oriented towards creative and flexible learning spaces, fundamental changes must take place to meet anticipated scenarios – for example, the shift from hitherto predominantly technocratic, hierarchical and exclusive approaches to education and skill achievement towards ubiquitous forms of equal access to both institutional and informal information and knowledge bases. This would include personalised digital spaces where each individual learner can build up a comprehensive lifelong track record of learning goals and achievements independent of time, location and access device. What would this mean in practice, specifically if we think about the consequences of lifelogging…? A system that records and signals what people know, what they have learned, and what they do or do not aspire to learn, at particular times and in particular places, is prone to permanent surveillance and control.

The idea of "a memory of life" is not new, yet many of these technological promises are equated with the human habit of collecting and archiving over a lifetime: the interoperable data exchange between passive and active digital self-surveillance, for example, has led to several erroneous conclusions. One false assumption is that technology can help us to share experiences. If we store memory bits in external databases, we assume that other people can participate in our perception of the world. But as no unfiltered, objective perception of the world is possible, the lifelogger establishes his/her individual context during data capture against his/her own cognitive and affective map of the world, a process that cannot be executed again. Consequently, the captured material provides only the external context and excludes the internal context. Yet it is the inner world model that permits us to interpret and evaluate sources by comparing them with existing memory structures.

We know that information and control are closely related concepts within systems theory (such as cybernetics). Ideas can be extracted and can exist independently of people – in a computer, for example. As a result, information and its processing can exist in disembodied form. Accordingly, the meaning (content) of information is set aside as irrelevant to the determination of its value in quantitative terms. But cybernetics is also about "purposiveness": goals, information flows, decision-making control processes and feedback (properly defined)
at all levels of living systems. Crowdsourcing is a good example of how malleable the principles of open source and exchange culture become ("you contribute something in order to get something better in return"), not least when the mass communication and collaboration enabled by Web 2.0 technologies are leveraged to make a profit at the expense of each individual contributor. Principles of open dialogue and shared networking thus cannot be separated from ideologically driven market mechanisms. What was originally conceived as an economy of scale in which no commodities but immaterial values are exchanged has only recently been distorted by the announcement that the Bertelsmann publishing house, owner of the multinational Random House group, is to print a German version of Wikipedia. This commercially driven act torpedoes the main principles of a participatory online encyclopedia in multiple ways: a) a voluntarily co-authored, open and dynamic web-based knowledge repository cannot be transferred into static print media; b) open content is free for everyone, whereas the printed version is not; c) a book version is limited in size, scalability, prevalence, distribution and (closed) format; d) the argument that a printed version would reach the poor and non-connected does not hold true in the specific case of a German version (and I rather doubt it would be true for other countries with a high number of illiterate people); e) even though the GNU Free Documentation License allows commercial reuse, there is a profound difference between generating new business models on the basis of keeping the source open and selling a book product that is genuinely non-modifiable. From this example we can learn how exploitable "gift culture" is, and how subtly market mechanisms are cloaked in the name and symbols of common wealth.

Not only do we intrude with our communication technologies into public spaces and spheres (e.g. the concept of Proxemics2); coevally, we leave our digital traces (location, provider, sender, receiver, services, etc.) in globally connected databases with every single human-media interaction. This intertwining of informal personal space (individual + communication media) with social, public space and information space (radio and electromagnetic waves) presupposes interoperable technology, media interaction and data transfer. In this constellation, each switched-on mobile device provides from the start a client and server system, for example through the support of preconfigured widget libraries. The user enters the info space regardless of his/her level of engagement with communication technology, inasmuch as every cell phone is not only reachable but also detectable at any time. Meanwhile, several online services (trackyourtruck.com, childlocate.co.uk) offer GPS tracking that allows parents or employers to monitor remotely, from a web-enabled cell phone or PC, the exact location of their children or employees on a street map, provided the mobile phone is in fact attached to the observed person or object of investigation. It is a kind of investigative data mashup, which
goes beyond the individual's sphere of influence and control, since private and public interest groups (insurance companies, e-governments) accumulate more and more private information based on (in)voluntary data traces. Using the example of Amazon, we can observe one of the first and perhaps most successful e-commerce embedded marketing techniques, employing personalisation of your experience via customer tracking. But again, what appears convenient for the shopping experience – for example, recommendations based on past purchases, lists of reviews and guides written by users – is not necessarily for the good of the end user. Some future scenarios show quite plainly in what directions the misuse of personal data mashups can move: imagine, for example, a situation in which you order a pizza via telephone and the clerk on the other end of the line recommends a low-fat, vegetarian version while processing the personal data record matched to your phone number, including your home address, bank and credit card balance, personal health risks, etc. It is this kind of meta-level in electronic communication that has created an intangible grey zone, a moment of suspicion in which each medium contains a "submedia space" behind the material medium. In other words, whoever confronts or engages with symbolic data in electronic communication becomes in all probability entangled in another subject's intention to control, manipulate, conceal or deceive – something that has sadly become reality in several cases of data misuse by employers.
Power Laws

Given the penetration of mobile phones at such an accelerating pace (reaching the four billion mark in 2009), the main protagonists of early modernism – though they widely (and wildly) speculated about the new possibilities of the electronic age and its flourishing communication media (radio, TV, telephone, fax and early computer networks) – could never have predicted the exponential growth and speed of technologies converging into one single information, entertainment and communication device. However, McLuhan's statement, "In the future, people will no longer only gather in classrooms to learn but will also be moved by 'electronic circuitry'"3 – from an interview in which he predicts world connectivity through either physical or electronic mobility – holds a fascinating actuality amidst the creation of future learning and working spaces. Despite the setback of the slightly overhyped virtual office and teleworking boom in the early 1990's, partly caused by social deficits, McLuhan's thoughts unintentionally gain fresh momentum through the global ecological and economic crisis and the demand to reduce carbon emissions, which inter alia equates to a rather populist formula: electronic networking = less travelling = less emissions = less climate damage.

Unfortunately, this is an erroneous and far too simplistic argument, insofar as electronic networking tools became adjusted to the needs of a mobile society
driven by global markets. In this regard it might not be very helpful to ask whether the chicken or the egg came first, but what can be stressed quite clearly is the fact that early concepts of electronic networking – except for the early artistic attempts to humanise technological environments4 – arose out of strategic, cold war-driven data communication (packet switching) that established multiple computer connections sharing one communication link (Arpanet). In retrospect, the Internet and the WWW were not merely a socially driven approach to communication, compared with, for example, the development of the telephone (Alexander Graham Bell's prime occupation was teaching the deaf). Rather hypothetically, though, I would argue that mobile telephony brought the communicative social aspects to the whole social media movement, whereas the Internet paved the way to the global system of interconnected computer networks.

Subsequently, today's networking cultures pursue a strategy of converging techno-communication into a single medium with three main characteristics: mobility, connectivity and ubiquity. In terms of mobility, we can observe responsiveness to the most prevalent challenges in educational, professional and private life by making communication media adaptable to a personal informal space that does not stay fixed but moves with us. Connectivity presupposes an electronic network that establishes information and communication links of our choice, and ubiquity by definition transcends our spatiotemporal relationship with our immediate surroundings towards the space
concept: the state of being everywhere at once (or seeming to be so). Embedded in this triangle are human and/or machine interactions that follow a complex logic of technological enhancement and social ergonomics. Any purely cognitive and techno-positivistic approach therefore does not suffice to provide sound arguments for networked practices. Arguably, social network theory borrows from applied mathematics, network science and graph theory (Leonhard Euler, 1736), in which power-law structures known as the Pareto distribution (or the 80-20 rule, named after the business management thinker J.M. Juran) play a major role: some people (or organisations, or organisms of any kind) become more powerful than others by creating strong ties with other nodes, so that they become important hubs. This observation was originally made in connection with income and wealth by the Italian economist Vilfredo Pareto, who in 1906 noticed that 80% of Italy's wealth was owned by 20% of the population. He then carried out surveys in a variety of other countries and found, to his surprise, that a similar distribution applied. This is of course a rather shortened explanation of complex underlying mathematical formulae, yet in this particular context it suffices to provide arguments for the system-immanent bug inherent in current social media developments when they become synonyms for the democratisation of knowledge and information based on sharing culture. It suggests that a market with a high freedom of choice will create a certain degree of inequality by favouring the upper 20% of items ("hits" or the "head") against the other 80% ("non-hits" or the "long tail").

MySpace, Facebook and many other businesses have realised that they can give away the tools of production but maintain ownership over the resulting products. One of the fundamental economic characteristics of Web 2.0 is the distribution of production into the hands of the many and the concentration of the economic rewards into the hands of the few. It is a sharecropping system, but the sharecroppers are generally happy because their interest lies in self-expression or socialising, not in making money; besides, the economic value of each individual contribution is trivial. It is only by aggregating these contributions on a massive scale – on a Web scale – that the business becomes lucrative. To put it differently, the sharecroppers operate happily in an attention economy while their overseers operate happily in a cash economy. In this view, the attention economy does not operate separately from the cash economy; it is simply a means to create cheap inputs for the cash economy.

But does knowledge about the measurable and quantifiable links, nodes and hubs attributed to an individual's or group's social interaction explain the whole bandwidth of meta-communication – the context in which one says something, the tone and volume of the voice, body language, to name only a few? Those who have experienced voice-over-IP conferences or simple chat conversations know how difficult it is for even some sort of meaning-making process to evolve from such poorly established initial conditions.
Only recently, in an online conference to which I was invited as a speaker, a black-hole experience happened to me when the screen in front of me became a silent wall echoing my voice and showing some shaky images from my web cam. In between my monologue, which had originally been drafted as a 'discussion', some loose scraps of chat conversation appeared at the bottom of the screen as I tried to reassure myself of being present while anonymously talking and presenting my online slides to an audience of 40 people connected somewhere on the globe. Admittedly, this was the worst online communication experience I have ever had: the audience was unidentifiable, the channel was blocked, and only a sparse crowd of listeners gave feedback after the session. In hindsight, I wonder how the 80-20 rule applies to this failed communication experience, which had been intentionally flagged as an innovative online encounter. The likelihood is that 20 percent of the technical defects caused 80 percent of the problems and frustrations – something that should serve as a daily reminder to focus 80 percent of your time and energy on the 20 percent of your work that is truly important.

This is exactly where the quality of communication comes into play: the media in use must transpose information so that it can be interpreted meaningfully. At the same time, a communication channel that is not working fails to establish a bond of trust, engagement, motivation, interest and empathy among the actors involved. A moment of uncertainty appears in both real and virtual interactions if your vis-à-vis does not understand what the message and the medium are literally about. In real-life communication, our ability to communicate non-verbally via gestures, postures, facial expressions or eye contact can help us either to support language or to decipher disguised verbal messages. When it comes to online communication, every channel is capped, quite apart from the technical inadequacies current technology entails. "On the Internet, nobody knows you're a dog" (published in The New Yorker, 1993) is a striking caricature and metaphor exposing another slippery terrain: the semantic/ontological question of whether a computer could ever pass the Turing test. In any case, words have no meaning to computers until YOU give it to them.
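To make the 80-20 intuition discussed in this section concrete, here is a small numerical sketch of our own (the parameters are illustrative, not drawn from the chapter): it samples incomes from a Pareto distribution and measures what share of the total the top fifth holds:

```python
# Illustrates the power-law / 80-20 intuition: sample incomes from a
# Pareto distribution and measure what share of the total the top 20%
# of earners hold. The shape parameter is illustrative; a Pareto index
# around 1.16 yields roughly the classic 80/20 split.
import numpy as np

rng = np.random.default_rng(0)
shape = 1.16                                      # Pareto index alpha
incomes = rng.pareto(shape, size=100_000) + 1.0   # shift so minimum is 1

incomes.sort()
top_fifth = incomes[int(0.8 * len(incomes)):]
share = top_fifth.sum() / incomes.sum()
print(f"top 20% hold {share:.0%} of the total")   # roughly ~80%
```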
The Social Networker

People connect globally by means of social networking tools. The social networkers of today hold accounts in several social networks such as Facebook, Twitter, MySpace, Studivz, Xing and StayFriends; they tweet from their iPhones, stream live video messages on Seesmic, inform their community via blog entries, and publish videos on YouTube. Yet even this is not enough: it takes ping.fm (which posts a single message to all their accounts) to manage their content overdose. Nor will they omit Tweetdeck, a desktop application that lets them quickly scan and select messages coming from several social media applications at the same time. Last but not least, the RSS reader keeps their blog and website updates
in a structured reading mode. Seeing the bigger picture of the so-called media hype that evolved through the networking practices of social media is not the point: "Getting there is not what you want; being there is what you want." The traditional definition of the "user" thus loses its hitherto determinative character of information consumption and application usage. A new species, the social networker, has come into being. He/she is a multitasking information producer and manager, a multimedia artist and a homepage designer, an actor and a director of self-made videos, an editor and an author of his/her own blog, a moderator and an administrator of a forum, to name only a few of these characteristics.

Classical media – newspapers, magazines, TV stations, radio, books, e-mail providers and telephone vendors, for example – are losing ground and audience. Their dominance in information, service and entertainment is about to crumble. Each new Internet start-up catches new users at exactly the place it wants them to be, giving them the appropriate tools with which they can best express their own creativity, further diminishing the number of users, buyers and subscribers of classical media. Information chunks hit the digital nerds via Twitter in almost real time, and it is the user who decides. Users select and publish their own information and channel it straight from other networkers' flows directly into their own communities. These forms of interaction require personal communication skills and the competence to judge information for its relevance and the added value of sharing it with others. Meanwhile, music videos gain greater popularity on YouTube than on TV channels. That is no surprise because, among other reasons, movies and series can be streamed or downloaded without annoying, interrupting TV ads. Various kinds of blogs provide a more comprehensive and multifaceted information spectrum than any other medium has done before. ICQ, Skype, Ventrilo and many other applications take over the job of classic telephony. Social networks are the new homeland: bazaars of the avatars, e-life cinemas, pivots of the networking generations and their "profile maps" displaying places, actions, contacts, news, their own creative work, preferences, ideas, recommendations and critiques.

But what are the social components in this? Perhaps one could attribute to social networks and their actors the unifying desideratum to share and communicate information and experiences with others, to participate in collaboratively arranged projects and initiatives, to support each other in solving a problem, or simply to gain fresh and inspiring input. Out of this, new communities of interest and practice emerge, new friends appear on the contact lists and, as a consequence, several new subnetworks come into being. A major activity of these communities is then developing the skills, procedures and attitudes needed for people to create jointly through their diversity. These democratic tools – the diversity and independence of opinions, the decentralisation and
aggregation of information and experiences – hold out the hope that the community's thinking can at least develop into some sort of collective social system.
The Social in Finding, Storing and Sharing Information

In my attempt to describe some of the prevalent techniques and tools of "Web n+1" technologies, I try at the same time to identify their main characteristics and how they can be approached as discernible techniques and strategies in communication culture. Social bookmarking has roots in the launch of online services in the late 1990's, which allowed users to keep private as well as public bookmarks. Over the years more features were added, such as folders for managing bookmarks. Some services and software also featured the automatic sorting of bookmarks into appropriate folders, browser buttons for saving bookmarks, and the ability to send bookmarks through email. In retrospect, Bookmarking + Social merged into an inseparable unit: individual preferences about certain topics of interest, based on search results on the Web, shared in a network where everyone can access each individual collection of bookmarks.

In this connection, I try to recall the pre-digital age to see whether a similar principle was attached to bookmarking. A comparison is somewhat objectionable, however, since any endeavour to establish a historically stringent trajectory between the analog and the digital fails at the initial conditions. A personal bookmark in one of your private books can be compared neither with librarian cataloguing and indexing nor with any kind of digital storage and sharing of information, since it cannot be shared with or distributed to others at the same time. In this respect the term 'bookmarking' seems slightly misleading, as is true of so many other metaphors that simply apply an analog terminology to a digital tool that in fact entails an entirely different action. A good example of the reminiscence of real-world metaphors is our computer desktop, containing the old office palette of the desk, folders, files, etc. As objects in real office environments are moved around and put into action (searching, collecting, etc.), the files and folders on your desktop follow a similar logic: they constitute a data structure stored in a database, whereas the user's interaction with the software is described by an algorithm.

Social bookmarking essentially lets you maintain a personal collection of links online, similar to the bookmarks or favourites in your browser, but also accessible to others on your own personal archive page. Before I dwell upon the relevant social media question of reciprocity in giving and taking information, it is worth bringing to mind the actual practices of social bookmarking on the Web. An ever-increasing number of people spend more and more time on the Web looking for information related to their areas of expertise. Information sources are
no longer scarce, since email, newsletters, subscriptions to RSS news feeds and the use of search engines help uncover resources that may be of value in private or professional contexts. Every now and then people use folders in their Web browser to organise bookmarks of online resources, but this practice has become inefficient. If a resource is relevant to several topic areas, the user has to save the bookmark in multiple folders, and at times he/she will discover that the essential bookmarks are on the home machine while working at the office. In another scenario the user presumes that the bookmarked site is on his machine, but finding one site among hundreds of bookmarks is more difficult than finding it again using a search engine. People who need to share bookmarks with colleagues or friends have to find the reference and email it. A much greater effect can be achieved by using, for example, del.icio.us to add bookmarks and “tag” them with a few relevant keywords. Here the social sharing approach becomes apparent, as the list can be made public so that colleagues and friends can easily be directed to it, or others can find the list of bookmarks through keywords. When a site is bookmarked, the social bookmarking software tells the user how many others have bookmarked the same site, and by clicking on that number one can see exactly who else bookmarked the site and when they found it. A further click shows the bookmark collections of others interested in your site. All this may help to facilitate group collection and aggregation of bookmarks, creating a Web of resources and connections that is not limited to individuals and their folders but represents the interests and judgments of a community of users. Given the sheer number of social bookmarking tools available on the Web,5 it is not only the major pragmatist who may miss the forest for the trees. What, then, is the added value, and why can these specific tools be considered an integral part of media literacy? With social bookmarking, users can generate a greater variety of differing perspectives on information and resources through informal organisational structures. In contrast to a single-person exchange of information, new communities of commonly shared interests continue to impact the ongoing evolution of folksonomies and common tags for resources. The widespread acceptance of tagging is a core part of cooperative classification and communication, which became popular on the Web around 2004 by means of social software applications entailing the practice and method of collaboratively creating and managing tags to annotate and categorise content. The advantage many users attribute to a folksonomy-based tool for research lies in the insights of other users, which help to find information not only related to the topic in question; for example, if you are looking for information about photography, you might find other users’ connections between photography and film scanners or stereoscopy, taking you in new, potentially useful directions. It is crucial here to see the wider picture, especially when comparing the classical, relatively static form of taxonomy with the dynamic, user-driven
form of tools that encourage users to return, because the cooperative form and the collections are in permanent flux. This principle of user involvement in classifying online resources by attributing tags to them is seen as controversial by the proponents of organised knowledge translated into classification or categorisation schemes. It is surely unwise to mix up scientifically controlled taxonomies with user-generated tags, although assigning a value to individual resources, resulting in a ranking system that functions as a collaborative filter, might be of considerable benefit for a cross-cultural and cross-discipline thesaurus, particularly in fluid/adaptive/emergent vocabulary fields. It is in the nature of user-generated information that there is no oversight as to how resources are organised or tagged, which can lead to inconsistent or simply poor use of tags. Another concern is the ambiguity of collaborative tagging systems as long as there is no certainty about a plausible correspondence between tags and a well-defined concept. As if synonyms, misspellings, incorrect encodings and compound words did not already cause enough confusion, pidgins,6 combinations of words from other languages absent any grammatical structure, make it even worse. Tagging can be turned into a key tool by including a process of tag selection, such as a checklist of questions applied to the object being tagged, in order to direct the tagger to various salient characteristics. Another idea that could be implemented is to introduce a semantic structure within tags. Currently, tags are generally defined as single words or compound words, which means that information can be lost during the tagging process. Single-word tags lose the information that would normally be encoded in the word order of a phrase. This is particularly visible in English, with the dissociation of adjectives from nouns. For example, when tagging a photo one might want to use tags to describe a red rose and a white lily. Once the single-word tags “red”, “rose”, “white” and “lily” are assimilated into the database, their association is lost: searching users no longer know which flower is red and which is white. However, the problem of adjective/noun dissociation is not equally relevant to every language. In some languages the issue is avoided or mitigated, as in those, such as Russian or German (“weisse Lilie”, “rote Rose”), that impose noun and adjective declension for case. With regard to compound words, private conventions are chosen by individuals for indicating relationships within an otherwise flat namespace, but these indications are applied for personal use, are not standard and cannot therefore be leveraged to any common advantage. What can be attributed to the majority of social media is that the technology and grade of interaction behind social bookmarking is not very complex, i.e. the threshold to participate is low for the ordinary user. As with everything that seems to be only a few clicks or steps away (“Start blogging in two easy steps!”), the seductive element of social media tools is obviously grounded in fast access and easy publishing. Social bookmarking, however, is based upon interaction between
different applications; i.e., the tagging of information is automatically extended to multimedia files and e-mail. The diffusion of social media tools advances aggregated information from various data feeds. Rather than propagating social media as the overarching terminology for various kinds of interaction, as by definition they are not necessarily social, I would like to propose another, broader concept, that of networked communication: the way humans and machines connect, communicate and collaborate with each other. This is not confined to human-to-human interaction modes but includes, for example, APIs (Application Programming Interfaces), which have allowed Web communities to create an open architecture for sharing content and data between communities and applications. In this way, content that is created in one place can be dynamically posted and/or updated in multiple locations on the Web. The shift away from formal taxonomies impacts how new user communities emerge and how tags are evaluated against criteria for usefulness such as self-expression, organising, learning, finding and decision support. Tagging information resources with keywords will gradually alter classification systems and database and information management. In networked communities where ideas float freely for the benefit of all, it may become less important to know and remember where information was found, and more important to be able to retrieve it succinctly, utilising a framework created by and shared with peers and colleagues. Again and again we have to come to terms with the implicit/explicit knowledge transfer dilemma, which seems to be engrained in human nature as an insurmountable barrier to sharing knowledge – whether it is the distribution of reference lists, bibliographies, papers or other resources among peers or students. There is still a long way to go.
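To make these mechanics concrete, the following minimal sketch models a folksonomy-style bookmark store in Python. It imitates no real service’s API; the names (BookmarkStore, who_else, find) and the data layout are invented for illustration, but the sketch shows both the “who else bookmarked this site” feature and how flat single-word tags lose the adjective-noun pairing described above.

from collections import defaultdict

class BookmarkStore:
    """Toy folksonomy store: users file URLs under freely chosen tags."""

    def __init__(self):
        self.by_tag = defaultdict(set)   # tag -> {(user, url), ...}
        self.by_url = defaultdict(set)   # url -> {user, ...}

    def bookmark(self, user, url, tags):
        self.by_url[url].add(user)
        for tag in tags:
            self.by_tag[tag].add((user, url))

    def who_else(self, url, user):
        # the "n others bookmarked this site" feature
        return self.by_url[url] - {user}

    def find(self, tag):
        return {url for _, url in self.by_tag[tag]}

store = BookmarkStore()
# Flat single-word tags: which flower is red is no longer recoverable.
store.bookmark("ann", "http://example.org/photo1", ["red", "rose", "white", "lily"])
# A compound tag keeps the adjective bound to its noun, at the cost of
# being a private convention that other users may not share.
store.bookmark("ben", "http://example.org/photo1", ["red-rose", "white-lily"])

print(store.who_else("http://example.org/photo1", "ann"))   # {'ben'}
print(store.find("red"))        # finds the photo, but red what?
print(store.find("red-rose"))   # unambiguous, but non-standard

The last two lines replay the rose/lily problem in miniature: the flat tag vocabulary maximises shared discoverability while discarding structure, and the compound tag preserves structure while fragmenting the vocabulary.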
Hacking Ethics and Civic Engagement

Hacking is not limited to computers. “Cultural Hacking – the Art of Strategic Action”,7 for example, deals with subversive efforts to escape the branding machine of the media and corporate retailers. Its strategies of subversion and symbolic exchange, in search of a revised Cultural Studies approach, take up practices from the Dadaist and Situationist movements. In my book “(IN)VISIBLE”,8 I argue that cultural allusions are reduced to stereotypical signifiers that provide the constitutive elements used by advertising to signify the impact of globalisation caused by neo-liberal economic practices. The global presence of a particular corporation serves as apparent proof that corporate practices are beneficial to all peoples. The amplification of capital as it flows across the globe at an accelerating pace, searching for higher rates of return, makes use of advertising to legitimise its power as it transforms socio-cultural environments.9 As a consequence, originally subversive works and ideas are themselves appropriated by mainstream media,10 and their creators are offered lucrative contracts in return for partaking
in ‘ironic’ promotional campaigns. Slavoj Žižek argues that the kind of distance opened up by culture jamming provides the possibility for ideology to operate: “…by attacking and distancing oneself from the sign-systems of capital, the subject creates a fantasy of transgression that ‘covers up’ his/her actual complicity with capitalism as an overarching system.”11 Without going too deeply into the details of hacking history, it is important to understand its origin and initial purposes. A “hack”, for example, has always been a kind of shortcut or modification – a way to bypass or rework the standard operation of an object or system. The term originated at MIT in 1960 with model train enthusiasts who hacked their train sets in order to modify how they worked. These and other early computer hackers were devoted programming enthusiasts, experts primarily interested in modifying programs to optimise them, customise them for specific applications, or just for the fun of learning how things worked. In many cases, the shortcuts and modifications produced by these hackers were even more elegant than the professional programs they replaced or circumvented. During the 1970s public awareness of hacking grew when a different kind of hacker appeared: the phreaks or phone hackers, part of emerging culture jamming activities such as hacktivism, which marked out a completely new direction of subversive strategies and techniques against corporate culture. It is thus important to shed light on the skills and attitudes of software hackers, and on the traditions of the shared culture that originated the term “hacker”, as Eric Raymond has put it so aptly in Hacking, Open Source, and Free Software.12 Still striking are his remarks on “Status in the Hacker Culture”, which in retrospect make plausible the core principles of social networking. As Raymond concludes, hackerdom runs on reputation: whether your solutions are truly good is something that only your technical peers or superiors are normally equipped to judge. More specifically, hackerdom is a gift culture: status is gained by giving away your time, your creativity, and the results of your skills.
The Pirate Manifesto,13 an attempt at a fundamental-rights-based platform for the Pirate Party movement14 in Sweden and Germany, puts forward key points on digital labour and its cultural, political and economic implications. I will pick up a few points from their core goals, which can be considered first steps towards raising awareness in the political establishment of the many issues of privacy, surveillance and infrastructure in a networked society. The following passages concentrate on how we can find better solutions to protect intellectual property and at the same time anchor, adapt and modify copyright laws, individual and common rights, cultural heritage and privacy according to changing professions and modes of production, labour and living in networked, parallel existing societies. Core issues are the protection of citizens’ rights, the will to free our culture, and the insight that patents and private monopolies are damaging to society. In this regard, the movie “RIP: A Remix Manifesto 2.0”, hosted by Open Source Cinema,15 is highly recommendable for broadening understanding. Each of the movie chapters, for example Copyright vs Copyleft, The Past tries to control the future, Back in the People’s Hands, The Revolution will be digitized, links well to the arguments brought in by the Pirate Party: copyrights were originally created to regulate the right of a creator to be recognised as the creator, and were later expanded to cover commercial copying of works as well, also limiting the natural rights of private citizens and non-profit organisations. What the Pirate Party criticises16 is that this shift of balance has prompted an unacceptable development insofar as economic and technological developments have pushed copyright laws far out of balance, in favour of unjust advantages for a few large market players at the expense of consumers, creators and society at large. Millions of classical songs, movies and books, for example, are stored in the vaults of huge media corporations, not wanted enough by their focus groups to republish but potentially too profitable to release. However, cultural heritage must be accessible to all; a fundamental concern is that ideas, knowledge and information are by nature non-exclusive and that their common value lies in their inherent ability to be shared and spread. The services and support that non-profit organisations like Creative Commons provide for scientific and academic communities are only a first step in probing the legal sharing, use, repurposing and remixing of cultural, educational and scientific content that is available to the public for free. Change in commercial copyright struggles with balancing conflicting commercial interests, and suggestions to reduce commercial copyright protection, i.e. the monopoly to create copies of a work for commercial purposes, to five years from the publication of the work are unlikely to be achieved in the near future. In the spirit of Open Source, Free Software and General Public Licenses, the new political movement is an important voice for generation “Web n+1”, trying to respond to the existing and new challenges digital life poses to individuals and society, and to find new answers to safeguard citizens’ rights,
their right to privacy and basic human rights. In this vein, civic engagement in public spaces as a tool, practice and technique needs to be rethought and reorganised with regard to synchronous forms of real and virtual communication by means of social media. Flashmobs17 are a good example of how quickly a large group of people can assemble in a public place, perform an action for a brief time, and then quickly disperse. What sympathetically distinguishes flashmobs from smartmobs18 is the fact that flashmobs do not necessarily have a purpose, although they may express an opinion or make a statement. The definition of smartmobs, in other words, emerged from smart mob technologies and their impact on communication and cooperation, in both a beneficial and a destructive manner: they were used by some of their earliest adopters to support democracy and by others to coordinate terrorist attacks. Flashmobs, in contrast, put the ephemeral event character into the foreground, which can be socially, culturally, politically or artistically motivated, using compound tools and methods for transmitting the instructions by email, SMS, forums, discussion groups or word of mouth. From the Flashmob Manifesto we can learn how generation “Web n+1” develops performative and activist skills and competences to convey their messages in public spaces, in playful appropriation and re-interpretation of historical happenings, which are a form of participatory new media art emphasising interaction between the performer and the audience. In breaking the imaginary “wall” between “performer” and “spectator”, happenings include everyone present in the making of the art, and there are no set rules, only vague guidelines that the performers follow. What happenings and flashmobs have in common are a) short duration; b) ad hoc arrangement; c) focus on mass consumption; d) mass spontaneous participation; e) novelty and creativity; f) freedom from formal restraints; g) location independence, to name only a few aspects. However, today’s mainly politically motivated flashmobs bring into play a set of rules and guidelines considered prerequisite for communicating and operating effectively and distinctively within a determined time frame and event purpose, for example: a) a flashmob must remain discreet; b) the mobbers do not communicate with one another during the flashmob; c) the gathering must happen at an exact time and the mob must not last more than 10 minutes; d) a single person can create a flashmob. Although the popularity of flash mobbing is short-lived and its style deliberately ephemeral, its popularisation is well documented by blogs and mainstream media, primarily because of the use of mobile communication technologies. In conclusion, the hacking ethos developed in the early open source movement has irrevocably changed consumer mass media culture into a participatory media culture, which has established novel forms of user engagement and interaction with the public domain both on- and offline. As remediation has turned into a signifier of fluid real- and virtual-world enactments, generation “Web n+1” continuously explores the permeable boundaries of technology-enhanced modes of production,
reception and perception in interdependent socio-cultural and political contexts.
Notes
1 Virilio, P. (1996), Fluchtgeschwindigkeit, p. 25. München: Hanser.
2 The term ‘proxemics’ was introduced by anthropologist Edward T. Hall in 1966. Proxemics is the study of set measurable distances between people as they interact.
3 http://archives.cbc.ca/arts_entertainment/media/topics/342-1834/ (Accessed 12.10.2009)
4 Chandler, A. (2005), Animating the Social: Mobile Image/Kit Galloway and Sherrie Rabinowitz, p. 153. In: Chandler, A., Neumark, N. (2005), At a Distance: Precursors to Art and Activism on the Internet. Cambridge, Massachusetts: The MIT Press.
5 http://www.searchenginejournal.com/125-social-bookmarking-sites-importance-of-user-generated-tags-votes-and-links/6066/ (Accessed 11.01.2010)
6 Pinker, S. (2000), The Language Instinct: How the Mind Creates Language. Harper Perennial Modern Classics.
7 Düllo, T., Liebl, F. (2005), Cultural Hacking: Die Kunst des strategischen Handelns. Vienna, New York: Springer.
8 Sonvilla-Weiss, S. (2008), (IN)VISIBLE: Learning to Act in the Metaverse. Vienna, New York: Springer.
9 Klein, N. (1999), No Logo: Taking Aim at the Brand Bullies. New York: Picador.
10 http://www.i-shop-therefore-i-am.dk/ (Accessed 11.01.2010)
11 Žižek, S. (1989), The Sublime Object of Ideology. London: Verso.
12 http://catb.org/~esr/faqs/hacker-howto.html#why_this (Accessed 10.12.2009)
13 http://www.wired.com/beyond_the_beyond/2009/10/and-yet-another-piratemanifesto/ (Accessed 10.12.2009)
14 http://www.piratpartiet.se/international/english (Accessed 10.12.2009)
15 http://www.opensourcecinema.org/project/rip2.0 (Accessed 10.12.2009)
16 http://docs.piratpartiet.se/Principles%203.2.pdf (Accessed 10.12.2009)
17 A Q&A with the anonymous founder of flash mobs: http://www.laweekly.com/2004-08-05/news/my-name-is-bill
18 Rheingold, H. (2002), Smart Mobs: The Next Social Revolution. Basic Books.
Playing (with) Educational Games – Integrated Game Design and Second Order Gaming
Wey-Han Tan
Introduction

This text gives a short overview of playing as an integrated activity of toying, game creation and game play, with liberating, reconstructive, reflective and innovative aspects. Games are seen as a class of media consisting of narrative and regulative elements, spanning a situating possibility space for expressive decisions and cognitive mapping by the players. These aspects may be usable for education and knowledge representation as well as for critical media reflection. Two questions lead to two key concepts: First, how do the internal boundaries and contexts of games, known as rules and narratives, influence the information made available to the player and its mode of acquisition? Beyond the design of the educational content there is a need for an Integrated Game Design approach, which includes the rule system and the narrative structure for an adequate situating of content. Second, how may specific modes of gaming raise the player’s awareness of the boundaries and contexts of (educational) games, and of media in general? Rendering defining elements visible, making them accessible and challenging them to be reconfigured demands a specific mode of design and playing I would like to call Second Order Gaming. This is presented in three approaches: metagaming, transmediality and unusability.
Educational Content, Games and Playing

Triggered by the growth of digital games over the last few decades and the demand for new educational approaches for the digital age and lifelong learning, games are receiving more attention as a medium for educational purposes (Squire 2001). Unfortunately, no medium is a mere passive container waiting to be filled with content; each brings its own unique limitations and possibilities, shaping the perception of its content and context. Shoehorning educational content into games without accounting for the particular properties of playing and gaming may forfeit this medium’s unique way of relating content to the learner both emotionally and cognitively, and may forego its innovative, creative and reflective potential.
Toying, Game Creation, Gaming: Playing as Three Modes of Action
Playing is about choice and the communication of choices. This feature sets games apart from classic media, where the balance of choice is usually tipped in favour of the content’s creator (Crawford 1982). But playing is also about getting rid of choice, defining limitations and refitting new ones (Bateson 2000a). This feature sets toys apart from any object, process or system whose defining limitations are socio-culturally encountered and have to be adopted individually, e.g. a book, reading, or script culture. Thus the verb ‘playing’ usually describes at least two distinct modes of perception and activity: the explorative, assimilative and re-interpretative handling of a toy (Piaget 1975; Sutton-Smith 1978; Frasca 2001a); and the choices made within a framework of rules and narratives of a game to reach a given goal (Costikyan 1994; Caillois 2001; Crawford 1982). Abstract regulative elements – rules – give a game jurisdiction, direction and manageability, while narrative elements – e.g. basic game metaphors, background stories and visual design – give a game context, continuity and signification. A toy is an object that is in certain respects regulatively and narratively still undefined, but suggests certain interpretations of its use as part of a game. Every object may be turned into a toy – it can be toyed with – before it can be recreated as a game – where gameplay is possible. A ball, a doll or a stick, for example, pass as archetypical toys, but “Kick it as far as you can”, “Play house with it” or “Use it like a sword” define them as part of a game, with a set of rules and a background narrative to comply with, at least as long as the game lasts. As Glasberg showed (Sutton-Smith 1978, 77), the interpretative handling of objects is not bound to their cultural usage, as long as a player is given leeway to ‘invent’ new usages. Toying strips objects of their common signification. To allow for a specific gameplay, a different set of significations has to be refitted in an act of creation, so that a game within this new set of defining limitations becomes possible. Thus a branch may be broken from a tree, playfully tossed and twirled, and finally used as a sword. However, if the player breaches this novel signification by aiming it at her fellow and saying “Bang!”, a new set of rules and narratives has to be communicated and agreed upon. Thus the defining properties of toying, game creation and gaming are the detachment, the creative reassignment and the acceptance of new significations of objects, processes and systems. The result is a simplified, virtual aspect of reality that is accompanied by freedom from consequences beyond its own scope, which in turn provides the safety required for a positive (re-)interpretation of failure and shunned behaviour, and fosters exploration, innovation and novelty (Sutton-Smith 1978) – in short: a game. Games themselves consist of two layers: a static regulative-narrative frame as a result of game creation, and the course of individual games performed within it, as a result of players playing the game.
Choices in a Self-Chosen Framework: Games as an Expressive Medium

Games are organised information in the form of rules and narrative elements. More importantly, however, they themselves organise a player’s emerging knowledge in the form of rule interpretation and communicative exchange via meaningful moves. If a given game represents a simplified version of a medium – including an artificial cultural background in the form of a background story or base metaphor (Squire 2005) – then a played game represents a unique expressive exchange within this medium: the rules may limit the choices available but cannot foretell which trail of decisions will be made by the player (Luhmann 1997). The range of the players’ creatable ‘expressions’ may vary from game type to game type: from linear quiz games, where the only expressive options are to proceed or to fail by giving true or false answers; to strategic board games like chess, where moves and countermoves interdependently form a branching tree; to networked community-based interpretative games such as Alternate Reality Games, which may open semantically, spatially and participatively into a landscape of interpretations. Thus games are not merely descriptions of forms, but also of the space where potential formations can take place and are challenged to happen. For education, games ideally deliver contextual boundaries and motivational incentives for meaningful decisions. The player is challenged to tackle virtual problems repeatedly with differing strategies, while failure, digression and deviance are allowed for – not as hindrances or something to be avoided, but as supportive of playing and learning (Bateson 2000b; Sutton-Smith 1978). Gameplay is a dynamic, recursive expression within a static medium, more related to improvised performing arts and digital simulations than to text, picture or film. As such, the potential of consequential and meaningful decisions of all kinds (Abt 1970) should be taken into account when a learning objective is to be based upon games as the medium and on playing as the limited action of rule-bound gaming.
Liberating virtuality and expressive choice are two defining aspects of games; related to these, however, is a third important aspect of playing, though it is not characteristic of all types of games. If playing has no consequences outside of the game, and a game with the same premises may take many courses, then the player is challenged to play more than once to truly enjoy, understand and master a game.
Repeatability: Good Games are Played Often but Are Never the Same

Possible formations within specific types of games reflect a basic dichotomy in the handling of in-game reality: on the one hand, rules may render a game’s states and moves discrete, absolute, recordable and replayable (e.g. chess). On the other hand, rules may be semantically open to interpretation by the players, becoming part of the formational process that turns the course of the game into a singular, unrepeatable event (e.g. children’s roleplaying games). Discrete states and discrete processing steps are educationally beneficial in a certain category of games, especially in digital games. Discreteness means that a game’s state in its entirety may be saved and later restored, and that the developing consequences caused by a decision may be accelerated, decelerated, stopped or rewound. Discrete state games thus provide for skill development in the form of dialogic environmental interaction, in a way that is hard to achieve in an analogue situation or with human actors as co-players. The same game situation may be tackled again and again with different strategies – through synchroporosity (Tan 2006), following multiple ways ‘at the same time’ – or from different viewpoints – multiperspectivity, taking different views on the same subject. It is therefore not only a single working strategy that may be searched for and finally found; the player/learner may be striving for a host of possible strategies and playing styles, virtually ‘at the same time’, mapping the possibility space of all potential actions and consequences. Repetition and variation thus provide a much deeper, intuitive insight into a complex problem field – into its accessibility, extent and texture (Spiro et al. 1991; Wright 2005; Lischka 2002; Jenkins and Squire 2003). Although good books, photos or movies also beg to be received and interpreted more than once, only in games is the interpretation itself externalised as significant moves by the player. This sequence of game moves is itself a shareable and interpretable expression for co-players and onlookers, a useful educational trait covered in the constructionist approach of Papert et al. (Papert 1994; Wilensky 1993). The media-rich environment of the Alternate Reality Game “World Without Oil” (Independent Lens 2007) is a good example of a game’s course being expressive, innovative, communicable and educational both for the players and for spectators. In educational terms this leads to the following conclusion: while classic media deliver structured information, games provide a structure for the experimental formation
51NS4DA)( 4D D( DF1D&D DD)&D(D( D,))1D&DNDT D ?GGK; 513U4<B&' >8 ( "?GGI;) ,1 , , B! &6&))6' & &B !16 , ( 3#& ) ' ,,) 1&1, ,8 ( " , (?GGI;3 513I4& '<#,, >61&81&?GGU;& !& ) ( ' 1,) <#&,> , ')B<) ( )'>B(!, ' D''VD!&D&D ' DDL D&DF' D D)) '&ND&D'&D1,D) D&D ) DR CD, D D DF' DVD'& 1D D&D'&D D, D D ' D' 'DBD VD, DF' VD D( &N
of structured information. How can structures provided by games support certain types of formations?
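Before turning to that question, the save-and-rewind affordance of discrete state games discussed above can be made concrete in a few lines of code. The toy sketch below, written in Python with invented rules and names rather than taken from any actual game or engine, shows how a fully serialisable state makes it mechanically trivial to snapshot a situation, try one strategy, rewind, and try another, comparing the resulting courses side by side.

import copy

class DiscreteGame:
    """A toy discrete state game; the rules are invented for illustration."""

    def __init__(self):
        self.state = {"turn": 0, "resources": 10, "score": 0}

    def move(self, spend):
        s = self.state
        s["turn"] += 1
        s["resources"] -= spend
        s["score"] += spend * (2 if s["turn"] % 2 else 1)  # arbitrary toy rule

    def save(self):
        return copy.deepcopy(self.state)    # the state in its entirety

    def load(self, snapshot):
        self.state = copy.deepcopy(snapshot)

game = DiscreteGame()
game.move(3)
branch_point = game.save()      # snapshot the situation

game.move(7)                    # strategy A: spend everything at once
score_a = game.state["score"]

game.load(branch_point)         # rewind to the very same situation
game.move(2)
game.move(5)                    # strategy B: spread the spending
score_b = game.state["score"]

print(score_a, score_b)         # two courses of one game, side by side

This is the synchroporous mapping of the possibility space in miniature: from one premise, several branches are explored and their consequences compared.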
Integrated Educational Game Design

If one agrees with the constructivist approach of situated cognition, i.e. that knowledge has the character of a meaningful tool and is inevitably connected to the situation of its acquisition and application (Brown et al. 1989), we have to take into account not only the overt content of an educational game, but also the regulative and narrative framework in which the player encounters it. Good educational game design may contextualise knowledge, extending the classic “what and how” with “why and when”. This may happen via the choice of a basic metaphor, a background story, the design of elements, a non-linear course of the game, etc. Since knowledge is deeply connected with its application, an integrated approach to game design should also consider the rule set – the game mechanics – which allows or even calls for specific behaviour by the players while suppressing or discouraging other behaviour. Rules define the boundaries of the players’ actions and give them direction and jurisdiction. They usually require unquestioned acceptance from the player in order to play a game (Caillois 2001). To become a better player, knowledge of the game rules has to become ingrained, procedural and automatic when deciding on actions: as with any language, a fluent native speaker is likely less aware of grammar and vocabulary than a first-year student of that language. Specific paradigms of teaching have repercussions on what is learned beyond the overt content, by supporting specific types of interaction with knowledge bases, peers and experts – comparable to aspects of the “hidden curriculum” encountered
in schools. For example, behaviouristic approaches try for an imprint of facts by repeated drill & practice of objective knowledge, while constructivist approaches may go for social interaction and collaboration towards a self-set goal in a field of expertise. In educational games, rules represent similar paradigmatic settings, guiding the playing style and thus the situating of attitudes, skills and knowledge achieved in-game. Some examples of rule dimensions:

Dimension | Extensions | Game examples | Topic examples
Mode of cooperation | Competitive – Collaborative | Player vs. player – team-based games | “Economic workings” – “Democratic values”
Temporal boundaries | Limited – Unlimited | Path-based boardgames – networked MMOGs | “Study planning” – “Lifelong learning”
Spatial boundaries | Limited – Unlimited | Area-based simulations – networked ARGs | “Urban development” – “Sustainable transportation”
Mode of jurisdiction | Algorithmic – Interpretative | Single-player digital games – analogue roleplaying games | “Medical differential diagnosis” – “Storytelling”
Information as resource | Depletable – Replenishable | Quiz games – networked ARGs | “Irregular verbs” – “Web-based information retrieval”
Complexity | Linear causality – Systemic feedback | Quiz games – systemic simulations | “Road safety for kids” – “Eco-systems”
For topics such as “How does our economy work?”, “How do I get safely to school?” or “How do I do research on the web?”, the choice of a rule set demanding, and thus fostering, a certain behaviour connected to the aimed-for skill is an important design decision. If the simplified game environment has to resonate in the actual field of application, then the required mechanics of perceiving, judging and acting should do so too. In the same vein, narrative elements such as the background story, basic metaphor, visualisation of game elements or a game’s obvious genre assignment may influence how the game is perceived by the player and what actions are obviously required. Narrative elements may be added to motivate and justify the player’s decisions in a game’s rule space. For example, competitive player vs. player behaviour is supported by a quiz show as narrative background, but this may run counter
to what the learner is supposed to learn when the topic is affirmative action and considerate behaviour on the job. Gaming can be seen as a composition of the players’ dynamic decisions and interpretations in the possibility space spanned by static regulative and narrative elements. But gaming is always preceded by the educational game designers’ decisions about which set of regulative and narrative elements should be used to support safe, meaningful, motivational and repeatable – and therefore effective – gameplay. Realistically, these design decisions may unintentionally preserve cultural bias embedded in the game’s rules and narratives, or may be wilfully used for advertisement, indoctrination or propaganda. Stereotyping may be easy to spot for educated players, e.g. when violence, ethnicity or gender are exploited. Other biases may be very difficult to notice, for example the predominance of growth, gain and teleological progress as desirable aspects in nearly all games. Alternative guiding principles such as homeostasis, cyclicity or aesthetics are encountered on only very few occasions.
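One way to read the rule dimensions tabulated above is as a checklist the designer fills in before any content is authored. The fragment below sketches that reading as explicit design data; the encoding, the class name and the example values are hypothetical assumptions made for illustration, not a notation proposed in the text.

from dataclasses import dataclass

@dataclass
class RuleSet:
    cooperation: str        # "competitive" or "collaborative"
    time_limited: bool      # limited vs. unlimited temporal boundaries
    space_limited: bool     # limited vs. unlimited spatial boundaries
    jurisdiction: str       # "algorithmic" or "interpretative"
    info_depletable: bool   # depletable vs. replenishable information
    systemic: bool          # linear causality vs. systemic feedback

# "How does our economy work?" plausibly calls for competition and
# systemic feedback; web research for replenishable information and
# interpretative jurisdiction.
economy_game = RuleSet("competitive", True, True, "algorithmic", True, True)
research_game = RuleSet("collaborative", False, False, "interpretative", False, False)
print(economy_game)

Making such decisions explicit, rather than leaving them implicit in the mechanics, is one way the cultural bias discussed above can at least be put on record and reviewed.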
Second Order Gaming

To raise awareness of games as a medium, as well as of their specific limitations, manipulative dangers and stereotypes, I suggest three approaches from the ‘outside’ of the game: metagaming, transmediality and unusability. While first order gaming aims for an integrated, self-contained and balanced game experience within a given framework of rules and narratives, second order gaming strives for the creative modification, transfer and subversion of these – and thus also of related cognitive and medial limitations and stereotypes. Comparable to a second order observer who deals with the conditions of observation, a second order gamer does not play within the given confines of a game but with its confines.
Metagaming: Pop Up the Hood

We were all once game designers, when we made up rules about how to play with a ball, or invented our own narratives that turned our bed into a pirate ship in a shark-infested sea. However, unlike with text, picture or film, the creative process of ‘professional’ game design seems hermetic: while the technical and creative means to write, photograph or film are easy to come by, the skills and means to analyse, create, modify and test rule systems, as well as the accompanying motivational narrative backgrounds, seem much harder to achieve. Taking into account that our environment, culture, economics etc. are systemic in nature and consist of processes shaped by human actions decided within rule sets (Bateson 2000b), motivated and made meaningful by narrative elements, a mere documentation of these systems by static media seems insufficient. It is time to create and subvert them, play with(in) their boundaries, and share the results. In my diploma thesis (Tan 2006) I identified several established modes of metagaming, i.e. the temporary change of a given game into a toy, with a modified, playable new game as a result. These modes of metagaming are not usually recognised as playing because they happen outside of the actual game. The mechanisms applied are similar, though, to the ones described above for the troika of toying – game creation – playing: with metagaming, a game’s frame of reference is temporarily or indefinitely transcended, modified and embraced anew. Among the modes to achieve this are quite mundane features, e.g. menu functions, cheats and walkthroughs; more complex approaches such as exploits and emergent gameplay (Juul 2002; Stöcker 2005; Kringiel 2005); as well as skinnings, modifications, extensions and conversions (Kücklich 2004). As exotic as many of these alterations may seem when first encountered, they represent the anarchistic spirit inherent in toying. On the other hand, many results – like the famous ‘rocketjump’ – have turned or will turn into tropes of genres and game forms, expected by the player community in subsequent game generations, thus closing the circle of innovation and conservation. Basic menu functions in discrete state games such as ‘save’, ‘load’ and ‘restart’ will alter a game’s linear course into a possibility space, creating branching paths for different (re)solutions of the game. This feature also expands the educationally interesting repeatability of games, supporting a finer, more adaptable mapping of the topic. Cheats and walkthroughs may alter the premise or even the medial nature of a game, e.g. turning “The Sims”, via unlimited funds, from a simulation of resource management into a building simulation; or turning a quiz-adventure like “Myst” into a linear visual storytelling experience. This may reveal aspects of the game
hidden intentionally by the game designers, but may also be used as a deliberate change of view in educational games, to contrast factual knowledge with interpretative knowledge and skills. For example, a ‘walkthroughed’ quiz-based adventure game may allow the ethical, ‘in principle undecidable’ decisions (Foerster 1995) to stand out. Exploits and emergent gameplay involve the altering of game features in ways unforeseen by the designers, by players expanding their possible actions beyond the overt rule set. For example, the ‘rocketjump’ allows for reaching heights – and locations – never intended to be accessed by the player’s avatar; and a first person shooter, initially to be played as a shoot-em-up, may be turned into a competitive ‘speed-run’, where getting from A to B as fast as possible – with a bodycount of zero – is the new winning condition. An exploiting player enters uncharted territory within a game, requiring new regulative jurisdiction, set by herself or by the community. Modifications, extensions and conversions still require some technical skills and go for a change in graphics, audio, world-structure, interface etc. The result may be as simple as the ability to attach a digital photo of the player as the face of the fighting avatar, or as intricate as “Counter-Strike”, a “Half-Life” modification. Metagaming gives players the chance to modify their gaming – or learning – experience by setting their own goals, tweaking rules, integrating external, personally meaningful material into the game’s mechanisms and narratives, and sharing the results with other players. To lower the threshold, these options should be made available by the designers, ideally challenging the player to use them, or at least not actively blocking such activity. While writing, photographing or filming may experiment with the boundaries of tropes, technique and technology of the respective medium, games, as stated above, have more in common with digital media in that their technical substrate may include all of the aforementioned media. Games are not bound by codality or modality: every thing and every way may be integrated into a game. Moreover, a game may also be the source for other medial representations.
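The observation that a cheat can change the medial nature of a game can itself be made concrete with a toy sketch, loosely modelled on the unlimited-funds example above. The code is invented for illustration and does not reflect the actual mechanics of “The Sims”: with finite funds the function below enacts resource management; with the cheat enabled, budgeting disappears and only free building remains.

def build(plan, funds, unlimited_funds=False):
    """Build what the plan allows; 'unlimited_funds' is the cheat."""
    built = []
    for item, cost in plan:
        if unlimited_funds or funds >= cost:
            built.append(item)
            if not unlimited_funds:
                funds -= cost            # the managed, scarce resource
    return built, funds

plan = [("kitchen", 60), ("pool", 80), ("studio", 40)]
print(build(plan, funds=100))                        # scarcity forces choices
print(build(plan, funds=100, unlimited_funds=True))  # pure building simulation

A single toggled parameter removes the winning condition and, with it, the genre: the same rule set yields two different media of play.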
Transmediality: When It Is Worth Retelling

A story that is deemed worth retelling is one that will be remembered, and vice versa. The wish to re-experience, retell and archive medial and real experiences is as old as mankind and has always been accompanied by the problem of medial transfer. Each medium has its own rules and elements, its own grammar and vocabulary, its own genres and tropes. Turning an experience into a diary entry, a novel into a movie or a comic into a game is problematic: there are always aspects which cannot be transferred because of the targeted medium’s defining limitations. On the other hand, the targeted medium may support a wished-for form that is impossible within the original medium’s limitations. This combination of limitations and opportunities may challenge a transfer,
51NOW4D1,1DD VD VD VD1 D1 D'ND D VD D(D, FDD )(&) & (&!& &3 513774#&R 'X,)C'&)' ) 6& 1 ( ) & D1,4D&D&D DFDD) D1&D !DDX,)DD&D,D, ,VD &) ! ) ) &, &&1&&&(&1,13 5137?411,633'&111)&'6 !& 1& '6 ! DDD'&1D D1F' 4DDD1 D1,D' !DD)D&D(& DLD, D D # / 3
depending on finding new technical extensions or new cross-media metaphors, and thus open up a space of possibilities that may transcend the original content and its jurisdiction. The actual educational effect can be seen in the experience of transfer and differentiation, of active movement in between medial worlds (Fromme and Meder 2001). If a medium, e.g. television, has been culturally established and individually mastered, its underlying properties may evade perception, even though these properties still influence each expression created (McLuhan 1964). If a medium is, conversely, very dynamic and shifting in its properties, e.g. digital networked media, emerging properties may be overlooked as white noise. Because experiences are medially transmitted and stored, it should be an educational goal to be aware both of the inherent, shaping limitations of media and of the possibility of creative extensions, transfers and ways out. Transmediality describes processes of transferring the content and context of one medium to another. The concept of ‘medium’ used here is not restricted to a technical medium like print, photo or film, but covers any means of expression bound to a given medial grammar and vocabulary (McLuhan 1964). Genres, tropes and stereotypes, for example, can be seen as conceptual media, where an artificial limitation on expressive range facilitates authorial creation and recipient re-creation of meaning. A shooting gallery game with sentient, suffering targets (Frasca 2003) or personal everyday actions revisited under a game’s dire premise of a global crisis (Independent Lens 2007) can be seen as experimental distortions of a medium, for aesthetic reasons or for the effect of irritating, challenging and/or educating the audience. Military violence and social crisis management may be the content to be reflected upon, but how this is done also redefines the respective medium in its formative possibility space. Though it is difficult to see in a time of the dominance of digital games, games are bound neither to a specific technical substrate nor to a specific receptive or expressive
medium. Gaming material can take the form of books, boards and cards; of words, sounds, gestures and bodies; of software, data and the Internet. Games may require, include or exclude vision, hearing, touch, gesture and mimicry; they may forfeit or rely on emotional response or cool reasoning. This incredible adaptability and potential for transmediality is shared with digital media, which may explain the mutual attraction between these two formative spaces. Thus a game created as a potential educational toy should facilitate transmedial play as well as digital networked games do: both are accepted as in principle polymorphic, open and constantly changing in their forms, and in fact can be characterised by this property. If transmedial transfer or metagaming alteration is intentionally and purposefully sabotaged by an educational game designer, the irritatingly dysfunctional result may nonetheless find effective use as an “unusable” game.
Unusability: You Do Not Want to Play it Again

Games demand from the player blind trust that they, as a medium, behave in a stable, foreseeable and conventional way. A game is usually accompanied, for example, by the exciting ambiguity of who may win in the end. A game that ‘cheats’ by subtly sabotaging this balance in favour of the game itself, of one player or of a group of players may turn gameplay into a frustrating experience. Therefore, given a game, the player expects it to be balanced, to be fun and to contain a coherent contextualisation; to be either culturally and traditionally tethered and proven, like chess, or, with contemporary games, created en bloc by a competent and benevolent game designer for the entertainment of the players.
The perceived defining properties of a medium evolve in lockstep with its perpetuation and establishment as a medium, effectively stabilising and solidifying it in its technical form, its genres and tropes. Avant-gardistic experiments, revealed audience manipulations or technological advances may challenge – or endanger – these properties. Such dissolutions can be seen, for example, in Orson Welles’ 1938 broadcast of “War of the Worlds” in a hitherto unfamiliar format resembling a newscast; in revisionist photo manipulations of undesired political personas; or in the anarchistic though benevolent hackers who show that physical media handling information are no match, in terms of manipulative potential, for information-based media handling information. Unusability as an educational approach strives for an understanding of medial limitations and preconditions by aiming to disrupt trust in them. This happens through game design decisions which, deliberately and unbeknownst to the user, turn a game unworkable, aporetic, unbalanced and disturbing where it should be intuitive to use, guiding, fair and entertaining. The critical review of a medium’s content, a mainstay of media-educative skills, is thus expanded to the medium itself: it carries a social, cultural, artistic and technological bias that is difficult to make visible unless unexpected cracks are showing, or unless a cracking noise in the joints can be heard – and felt – as Debray would put it (Debray 2004). On a level more connected to genres, tropes and content expectations, Frasca terms this the “Videogames of the oppressed” (Frasca 2001b), which can also be read as gaming with the oppressed: the invisible, unconscious, ingrained properties of a medium are brought to unsettling attention. Irritation and puzzlement can lead to new cognisance of trusted expectations and habits. Games that break with expectations concerning gaming as such
can enhance the understanding of games and media in general as intended, manipulated and manipulating creations. This can – and probably will – also lead to frustration, fear or aggression; thus the more radical approaches should be part of a greater educational concept, to help the player cope with the experience or to help them understand the rationale behind the approach. In Wiemken’s “Breaking the Rules” (Wiemken 1997), for example, an affective-cognitive wrap-up is an integral part of the socio-pedagogical approach, as it is in Shirts’s famous game of social stratification, “Starpower” (Shirts 1969). Others, like Costikyan’s “Violence” (Costikyan 1999), Frasca’s “September 12th” (Frasca 2003) or Wong’s proposal of “The Ultimate War Simulation Game” (Wong 2007), are labelled implicitly or explicitly as ironic statements, thus warning the player not to expect a working game.
Conclusion

Beyond using games as mere single-use containers for declarative knowledge, an integrated approach to educational gaming takes into account that regulative and narrative elements may span a space for the structuring of possible moves, without dictating an explicit true-false dichotomy. Games deliver self-contained, simplified media with explicit rules for formational processes. They challenge the player to map game-encoded knowledge via the varying meaningful paths laid out by repeated playing. Thus integrated educational game design aims for situated cognition in the form of explorative reconstruction of knowledge fields loosely encoded in the game’s content, narrative elements and rules, while utilising the analogue or digital medium’s specific strengths. This first order educational gaming and game design, which aims for a consistent learning experience, has to be complemented with a second order point of view – of design and of action – challenging or allowing critical reflection, creative modification and the sharing of results by the players. There are several reasons second order gaming may play a decisive role in future game-based learning approaches as well as in game design issues in general: It offers low-cost, sustainable game design by drawing on the potential of gamers to customise, alter, expand, communicate or reinterpret a game’s regulative and narrative elements, thus keeping up the game’s long-term appeal. For aesthetics education and critical media education, the experience, transgression or modification of medial boundaries may be difficult to achieve in classic media or closed educational games, while it is a defining property of Second Order Gaming. For practical media education, games share important traits with digital networked media, such as polymorphism, emerging properties and self-contained mediality; a game-toy approach may offer metaphors and motivation to experiment with these properties, both in the game context and in ‘serious’ digital applications. For the theory and practice of games and play in education (in German “Spielpädagogik”),
there are new definitions and applications waiting to expand the classic structural and functional approaches with unforeseen attributes and uses of gaming, especially when coupled with digital networked media.
Generally, toying – game creation – gameplay can be seen as a highly adaptive mode of cognition, which may render the resulting forms of games and play difficult – or even impossible – to define. An alternative to categorising educational games by their content may thus be to categorise them by how they treat medial limitations and requirements, and by the workarounds they call for, allow or challenge.
Works Cited

Abt, Clark C. 1970. Serious Games. New York, NY: Viking.
Bateson, Gregory. 2000a. A Theory of Play and Fantasy. In Steps to an Ecology of Mind, Gregory Bateson, 177-193. Chicago and London: The University of Chicago Press.
Bateson, Gregory. 2000b. The Logical Categories of Learning and Communication. In Steps to an Ecology of Mind, Gregory Bateson, 279-308. Chicago and London: The University of Chicago Press.
Brown, John S.; Allan Collins and Paul Duguid. 1989. Situated Cognition and the Culture of Learning. Educational Researcher v18 n1: 32-42.
Caillois, Roger. 2001. Man, Play and Games. Illinois: University of Illinois Press.
Costikyan, Greg. 1994. I Have No Words & I Must Design. Interactive Fantasy: Journal of Role-Playing and Story-Making Systems 2.
Costikyan, Greg. 1999. Violence: The Roleplaying Game of Egregious and Repulsive Bloodshed. Hogshead Publishing Ltd.
Crawford, Chris. 1982. The Art of Computer Game Design. Berkeley, CA: McGraw-Hill Osborne.
Debray, Régis. 2004. Für eine Mediologie. In Kursbuch Medienkultur: Die maßgeblichen Theorien von Brecht bis Baudrillard. Eds. Claus Pias et al., 67-75. Stuttgart: Deutsche Verlags-Anstalt GmbH.
Foerster, Heinz von. 1995. Ethics and Second-Order Cybernetics. In SEHR, v4, i2: Constructions of the Mind. http://www.stanford.edu/group/SHR/4-2/text/foerster.html. Accessed 22 June 2009.
Frasca, Gonzalo. 2001a. Rethinking Agency and Immersion: Videogames as a Means of Consciousness-raising. Los Angeles: Siggraph 2001 Conference.
Frasca, Gonzalo. 2001b. Videogames of the Oppressed: Videogames as a Means of Critical Thinking and Debate. Atlanta: Georgia Institute of Technology.
Frasca, Gonzalo. 2003. September 12th: A Toy World. Online Flash game. http://www.newsgaming.com/games/index12.htm. Accessed 22 June 2009.
Fromme, Johannes and Norbert Meder. 2001. Computerspiele und Bildung: Zur theoretischen Einführung. In Bildung und Computerspiele: Zum kreativen Umgang mit elektronischen Bildschirmspielen. Eds. Johannes Fromme and Norbert Meder, 11-28. Opladen: Leske + Budrich.
Glasersfeld, Ernst von. 2008. Konstruktion der Wirklichkeit und des Begriffs der Objektivität. In Einführung in den Konstruktivismus. Eds. Heinz Gumin and Heinrich Meier, 9-39. München: Piper.
Global Kids and Gamelab. 2006. Ayiti: The Cost of Life. Online Flash game. http://ayiti.newzcrew.org/ayitiunicef/. Accessed 22 June 2009.
Global Kids Inc. 2006. Ayiti: The Cost of Life: A Game-based Lesson Plan Addressing Poverty as an Obstacle to Education in Haiti. Unicef Website. http://www.unicef.org/voy/explore/rights/explore_3170.html. Accessed 22 June 2009.
Independent Lens. 2007. World Without Oil. Online Alternate Reality Game (closed). http://www.worldwithoutoil.org/metahome.htm. Accessed 22 June 2009.
Jenkins, Henry and Kurt Squire. 2003. Understanding Civilization (III). Computer Games Magazine, September 2003.
Juul, Jesper. 2002. The Open and the Closed: Games of Emergence and Games of Progression. In Computer Games and Digital Cultures Conference Proceedings. Ed. Frans Mäyrä, 323-329. Tampere: Tampere University Press.
Kringiel, Danny. 2005. Spielen gegen jede Regel: Wahnsinn mit Methode. GEE: Games of Entertainment and Education 10/2005.
Kücklich, Julian. 2004. Modding, Cheating und Skinning: Konfigurative Praktiken in Computer- und Videospielen. Dichtung Digital, February 2004. http://www.dichtung-digital.com/2004/2/Kuecklich-b/. Accessed 1 February 2006.
Lischka, Konrad. 2002. Eine Welt ist nicht genug: Computerspiele als Utopien der Utopie. Telepolis, 17 August 2002. http://www.telepolis.de/r4/artikel/12/12980/1.html. Accessed 22 June 2009.
Luhmann, Niklas. 1997. Die Gesellschaft der Gesellschaft. Frankfurt am Main: Suhrkamp.
Maxis. 1993. Simcity 2000. Redwood, CA: Electronic Arts.
McLuhan, Marshall. 1964. Understanding Media: The Extensions of Man. New York: McGraw-Hill.
Papert, Seymour. 1994. Revolution des Lernens: Kinder, Computer, Schule in einer digitalen Welt. Hannover: Verlag Heinz Heise.
Piaget, Jean. 1975. Nachahmung, Spiel und Traum: Die Entwicklung der Symbolfunktion beim Kinde. Stuttgart: Ernst Klett Verlag.
Rooster Teeth Productions. 2004-2009. Red vs. Blue. Roosterteeth video archive. http://rvb.roosterteeth.com/archive/?sid=rvb. Accessed 22 June 2009.
Shirts, R. Garry. 1969. Starpower. Del Mar, CA: Simulation Training Systems.
Spiro, Rand J. et al. 1991. Cognitive Flexibility, Constructivism, and Hypertext: Random Access Instruction for Advanced Knowledge Acquisition in Ill-Structured Domains. Educational Technology, May 1991: 24-33.
Squire, Kurt. 2005. Game-Based Learning: Present and Future State of the Field. Wisconsin: MASIE Center eLearning Consortium. http://cecs5580.pbwiki.com/f/10%20Game-Based_Learning.pdf. Accessed 22 June 2009.
Stöcker, Christian. 2005. Interview mit Gamedesigner Molyneux. Spiegel Online, 17 October 2005. http://www.spiegel.de/netzwelt/web/0,1518,379337,00.html. Accessed 22 June 2009.
Sutton-Smith, Brian. 1978. Die Dialektik des Spiels: Eine Theorie des Spielens, der Spiele und des Sports. Schorndorf: Karl Hoffmann.
Tan, Wey-Han. 2006. Konstruktivistisches Potenzial in Lernanwendungen mit spielerischen und narrativen Elementen. Hamburg: Faculty of Educational Science at the University of Hamburg.
Wiemken, Jens. 1997. Breaking the Rules: Zum kreativen Umgang mit Computerspielen in der außerschulischen Jugendarbeit. In Handbuch Medien – Computerspiele. Eds. Jürgen Fritz and Wolfgang Fehr. Bonn: Bundeszentrale für politische Bildung.
Wilensky, Uri. 1993. Abstract Meditations on the Concrete and Concrete Implications for Mathematics Education. In Constructionism. Eds. Idit Harel and Seymour Papert, 193-203. Norwood, NJ: Ablex Publishing Corporation.
Wong, David. 2007. The Ultimate War Simulation Game. Cracked.com Website. http://www.cracked.com/article_15660_ultimate-war-simulation-game.html. Accessed 22 June 2009.
Wright, Will. 2005. Time and Simulation. When 2.0 Workshop. Stanford, CA: Stanford University. http://news.cnet.com/1606-2_3-5998422.html. Accessed 22 June 2009.
238
Tepidity of the Majority and Participatory Creativity

Juha Varto in conversation with Tere Vadén
Juha Varto: Tere Vadén, you have written a book with Professor Juha Suoranta (Wikiworld). Your theme is the world of learning and its new infrastructure. You write a lot about the promises seen in participatory democracy that are, however, concealed by the aggression of commercial media. Is there really a place for "participating" in this day and age?

Tere Vadén: It seems to me that the possibilities of participating are real, and that they are having an effect. Just think of something like Wikipedia. For Scandinavians with a good tradition of libraries, Wikipedia is easily less impressive than it should be. But this really is the first free encyclopedia. Thousands of people contribute daily to a common epistemological project; this is bound to have effects similar to, or on the scale of, the "scientific revolution" some centuries back. I think the crucial thing is to notice that "participating" is not enough. Participation has to turn into full-blooded ownership of projects and processes, as in peer production, peer governance and peer property.

Juha Varto: Great mass movements in the 20th century show that participatory democracy more often than not means destruction for minorities, be they ethnic, cultural, political, or sexual. The masses are dangerous, even in education or training. Most of the training of the masses comes from the commercial media. Can an educational infrastructure have a positive impact, against all odds?

Tere Vadén: The picture is skewed, to be sure, but also mixed. As an analogy, Western law can, in principle, protect the rights of minorities, even though it mostly does not, and in spirit and essential function it is always geared in favour of the middle-aged white man. The same goes for educational infrastructure: it is systematically skewed, but not universally bad. So I think you have to look at the details. Who, exactly, supports or wants this or that educational reform?

Juha Varto: The new infrastructure accessible via the Internet, e.g. the wikiworld and other peer groups, may be open to everyone, but who is open to it? Libraries are open to everyone, but only a fraction of people really seek knowledge. Is the hegemony of knowledge building the most severe obstacle in modern society? Will everything change because of the new mediation via the Internet?

Tere Vadén: Certainly there are massive efforts underway to protect and amplify the hegemonies and hierarchies of knowledge – unfortunately the Finnish university system is on that road, too. This is precisely why the wikiworld is urgently needed, as an antidote and as a mole working under the walls of the gated communities of knowledge. I think the wildly influential phenomenon of piracy – which connects to the mole – shows both that we are dealing with more than a fraction and that the hegemonies of knowledge are not as sure-footed as they would like to be. The Internet and the digital are not nearly enough. We also need a fundamental change in what is seen as fruitful and worthwhile knowledge. As an example, I believe that embodied knowledge, skills, and oral traditions are going to be crucial in the near future. This is an area in which the Internet has precious little to offer.

Juha Varto: Virtual realities are not born from virtue, as we know, but both the virtual and virtue are a kind of mind-setting frame for utopias. Utopias were created, at least in earlier times, to mirror the ideas people had of worlds free of the obstacles they already knew to harass them in their everyday life. Do you find that virtually created utopias may act as a freedom we otherwise lack?

Tere Vadén: They may, but they shouldn't. That is a distraction we do not need.

Juha Varto: In your book you often seem to assume that people are creative without limit if the limits outside them are demolished. Isn't dullness the best characterisation of most of us? Do you really believe in limitless creativity? And most of all, what the heck are we doing with all that creativity?

Tere Vadén: Well, the honest answer would be that, yes, individuals are mostly dull, repetitive, and lost, but the collective soup or "aggregation" of these repetitions sometimes results in wonderful things – we are not talking of the masses here, obviously.

Juha Varto: Marshall McLuhan passed away some 30 years ago, but his ideas on mediation are quite current. We still struggle with the conditions of mediation: we ask whether the medium is even more a message than any message intended. Do you see a way out of such a Modernist attitude? Preferring the wikiworld to any other world also belongs to the same picture.

Tere Vadén: I really don't see a major difference between medium and mediated. Ultimately, both are elements that are experienced, that is, they influence experience, shape it, give it material, as it were. A mediated world is also a world, maybe a weak one, non-convincing and non-intense, but still a world. If a person lives in a "weak" world, we can hope that she will find something more satisfying, but we cannot by rights say that she does not live.

Juha Varto: If I try some name-dropping here: Heidegger, Habermas, and Gadamer still believed that communication aims at some consensus. Politically that was understandable, certainly so in the 1950s. But Derrida – and Deleuze perhaps – really asked whether in a world full of texts any consensus is possible. We may wish for a hermeneutics, but interpretations are free. When you look at the democratisation process ahead within the youth, the non-hierarchical belief in the viability of anything, do you see any fundamental problem on the horizon? Like an earthquake in the epistemological foundations?

Tere Vadén: The main epistemological consequence of the wikification of information is, as I see it, Nietzschean – therefore maybe Deleuzian as well. Put it this way: every day at least one hundred million people are using Wikipedia. At least some of them, hopefully quite many, know that the page can be edited and that it is not reliable in the sense that Encyclopedia Britannica is reliable. They use information that they know to be unreliable, human-made, editable by themselves at any moment. This is the Nietzschean epistemic moment, the step on ground that certainly is unsure, but can still be walked, joyously, collectively. There is a problem of consensus here: the deletionist tendency inside Wikipedia that wants to have only "important" and "no-original-research" articles and to delete everything else is turning Wikipedia into a dry copy of already existing publications, through a process that is essentially Habermasian. This, in itself, is a nice ideal and can be argued for forcefully. However, it takes away the more radical and more unique (digitally possible) Nietzschean potential. It seems to me that in practice this deletionist-consensus danger should be overcome by forking Wikipedia, by making different versions – politically, ideologically, socially, geographically motivated versions with a point of view. That is the next step toward democratisation.

Juha Varto: Tradition certainly is crumbling if hierarchies are not valued, and even the criteria of (any) truth are not agreed upon the way they were. Is this the revaluation of all values that was insisted upon by a certain thinker in the 1880s?

Tere Vadén: Absolutely. However, I don't think this leads to an "anything goes" type of relativity. As long as people are made of flesh, are born and die, anything does not go. This is easily seen in that the relativist-nihilist, for instance in the Dostoyevskian picture, thinks that if God is dead then anything is permitted, meaning that one can do whatever one wants. "What one wants" – that is not relative or nihilistic. If everything really were relative or "nihil" to the relativist or the nihilist, they could equally well go and do what *somebody else* wants them to do, for instance, go clean toilets in a slum somewhere. Of course, this is never the way in which the supposed relativist or nihilist interprets the situation: clearly, anything does not go. The revaluation of all values is something else. As I see it, it is the common creation of ways of life through a process that takes several generations, in interaction with their tradition and natural surroundings.

Juha Varto: In your earlier book, The Name of the Bear, you wrote on the role language has in human existence. If I understood it right, you didn't really see a difference between a name and an image, at least not where the effect of naming or showing is in question. Such an idea challenges the beliefs we have built upon, both in political democracy (as a faculty of premeditated choice) and in all-around education. Knowledge and imagination may not be so far apart?

Tere Vadén: Like the above, it is very hard for me to make a clear, hard-and-fast distinction between the two, because again both words and images are experienced, or are not at all. Sometimes I have called this view that punches holes in the supposed walls between, say, knowledge and imagination "experiential democracy" or the "democracy of experience", meaning that any kind of experience (be it observation of the physical world, emotion, thinking, logical reasoning, imagination) can, in principle, challenge and influence any other type of experience. Experience is a whole in which there is no absolutely certain way of stopping what is happening in one field from having an influence on the other fields. One acquires a taste for a certain type of food or a certain type of art, and suddenly the whole world has changed; one even walks in a different way. This is something that novelists and ordinary people understand very well, but that philosophers often like to forget. And yes, it does mean that our views of education and politics are very limited, one-eyed. Come to think of it, maybe it can be expressed like this: there is something like "general experience" that is always there as long as there is something, and then this general experience may, according to circumstances (physical, social, linguistic etc.), form an upper, less permanent layer of itself into different semi-separated forms, like language, imagination, knowledge. However, through the persistent layer of "general experience" these specific forms are always in touch with each other (always: linear physical time does not function here). Sometimes it is necessary to sharpen one of the specific tentacles, for instance in order to become a painter. Sometimes it is necessary to broaden the whole sphere, for instance in order to become human. So both specific and general education have their place. Moreover, strengthening the tentacle broadens the sphere, and broadening the sphere makes the tentacle more capable. This also means that the self-celebratory and cocky Western civilisation that excels with a tentacle or two – technological knowledge and engineering – can easily be seen to be inferior in terms of other tentacles, and especially in terms of general experience. The mistakes of Western political democracy and education are, precisely, the mistakes of emphasising the (two) tentacles over the whole and trying to make the tentacles as independent as possible.

Juha Varto: Even if Wikipedia and other projects of the kind seem to be conceptual, and they are texts in the Derridean sense, the image comes more and more into the foreground, in the free information flow, too. The image is no longer an ornament, nor an illustration, but a message with a force of its own. People may spend hours on Flickr or YouTube just in order to find a way to think something anew, to reorganise or revamp their thinking. Do you see any clear-cut change of significance in the role of the image in recent discourse on media?

Tere Vadén: Well, of course I have heard that there is this change, and I see it happening. But from the perspective that I'm interested in, these changes are like fashion. Sometimes people wear this, sometimes that, and both are interesting as such, as is the fact of the change. The questions I'm interested in, like locality and nature, have their qualities whether one is thinking in images or in words, so the pictorial turn is not so fundamental.

Juha Varto: The media, plural, are more often called "new media". The intertwining of textual, pictorial, illustrative, auditive and, say, provocative characteristics ascended to a new level when "mixed media" became really mixed, as in the early projects of Stelarc. Many assume that the sky is the only limit. Are there still separate domains in media as there were, i.e. art, education, entertainment, knowledge, after media became "new"?

Tere Vadén: I think that, for instance, both a more "immersive" type of education and an education that is clearly separated from entertainment have unique instrumental advantages. Both have a right to exist and should be used wisely, depending on the circumstances. So in my view the dissolution of education and entertainment through new media is both a good and a bad thing; mostly good, though, since at the moment we still have too much artificial separation. At some other moment more separation might be needed.

Juha Varto: You have also done research in logic and semantics. Formal value and intensional significance do not often meet, not even in the structures that provide for our understanding. If you look at the recent history of logic and its epistemological focus, are there developments that run parallel with the (r)evolution of mediation?

Tere Vadén: It seems to me that the interest in different types of game-theoretical approaches somewhat parallels the development of the "questioning of all truths" mentioned earlier. Also, very slowly, the questioning of the form/meaning boundary is seeping into semantics, for instance in the work on non-conceptual content. I think that the main influence of mediation on logic and semantics comes indirectly, from the fact that the sciences are less and less interested in the "handmaiden" status that formal philosophy has taken for itself. For over a century now, natural science has been much more adventurous and revolutionary than philosophy. This is because natural scientists have been able to align what they say with what they do very closely, while philosophers and humanists haven't found a way of walking the walk as well as talking the talk. Typically, the formal philosophers who come to the brink of the collapse of the form-meaning distinction devote their work to limiting and controlling the effects of the collapse, instead of contemplating the consequences.

Juha Varto: What about the catastrophe theory that in the 1970s was assumed to satisfy the needs of new knowledge building? I refer to René Thom…

Tere Vadén: I see this as a cousin of the game-theoretical approaches. Both try to find semi-radical and semi-non-traditional ways of dealing with the Abgrund. Significantly, both game theory and chaos/catastrophe theory have contributed most effectively to military technology. So they get to the Abgrund, all right! My hunch is that the next step in knowledge building will come from approaches that are connected to what the West will inevitably see as "mysticisms". It seems, for instance, that some forms of fundamentalist Islam have found a form of knowledge that is essentially resistant to both modern and post-modern corrosion. I think we make a big mistake if we think that good old-fashioned enlightenment or post-modern irony are going to "beat" fundamentalism, or that this fundamentalism is backwards, or retro. No, it is the next step, corresponding to the ideology of capitalism-with-X: capitalism-with-Russian-values, capitalism-with-Chinese-communism, and so on.

Juha Varto: What is the topic of your recent research? Is it purely academic?

Tere Vadén: I'm doing two things: on the one hand, academic research on the open source movement and Wikipedia, their practical forms and their epistemic and socio-political consequences; on the other hand, still trying to think about locality and language. The second leg is not academic in the sense that it is also experiential and does not need the University. In fact, it seems to me that we are in the middle of a historical rupture, where the West has already fallen but the sound has not yet been heard, and the green shoots of things to come are everywhere, unthought by the tradition of philosophy. This is also a unique moment for locality and language, comparable to the last centuries of (West-) Rome. A philosophical treasure trove is opening. Just take one example: fossil fuels are running out, necessitating a massive civilisational change, and there is not a single philosopher in the tradition who has said anything sensible on the topic.

Juha Varto: Is academic research still interesting? Perhaps art is tempting you? Do you find the academic tradition flexible enough to satisfy your needs in significative creativity?

Tere Vadén: To be perfectly honest: no, it isn't. It is a job. I'm hoping this feeling will pass, but we will see. The university system in Finland is being quite thoroughly domesticated and bureaucratised at the moment – at the precise moment when even the instrumental reasons for the development (economic efficiency, managerial hierarchy) are becoming obsolete. It is very depressing.
Glossary

Ajax (Asynchronous JavaScript and XML) – A Web development technique used to increase the speed, usability, and interactivity of a Web page.
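To make the mechanism concrete, here is a minimal TypeScript sketch of the classic Ajax pattern; the endpoint "/headlines" and the element id "news" are illustrative placeholders, not part of any real site:

    // Classic Ajax: request data in the background and update part of
    // the page without a full reload. "/headlines" is a hypothetical URL.
    function refreshHeadlines(): void {
      const xhr = new XMLHttpRequest();
      xhr.open("GET", "/headlines", true); // true = asynchronous
      xhr.onload = () => {
        const target = document.getElementById("news");
        if (xhr.status === 200 && target !== null) {
          target.innerHTML = xhr.responseText; // swap the new content in place
        }
      };
      xhr.send();
    }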
API (a techie term for application programming interface) allows two applications to talk to each other. For example, Flickr’s API might allow you to display photos from the site on your blog.
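As a sketch of the idea, the TypeScript fragment below asks a photo-sharing service for a user's latest pictures over HTTP; the endpoint and the response shape are invented for illustration and do not correspond to Flickr's actual API:

    // Two applications talking: this one requests structured data from
    // another over HTTP. The URL and the Photo shape are hypothetical.
    interface Photo {
      title: string;
      url: string;
    }

    async function latestPhotos(user: string): Promise<Photo[]> {
      const response = await fetch(
        "https://api.example.com/photos?user=" + encodeURIComponent(user)
      );
      return (await response.json()) as Photo[]; // ready to render on a blog
    }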
app Popularised in the general lexicon by the iPhone, an app is simply an application that performs a specific function on your computer or handheld device.
astroturfing is a fake grassroots campaign that seeks to create the impression of legitimate buzz or interest in a product, service or idea. Often this movement is motivated by a payment or gift to the writer of a post or comment or may be written under a pseudonym.
blog is an online journal that is updated on a regular basis with entries that appear in reverse chronological order. Blogs typically contain comments by other readers, links to other sites and permalinks.
campaign is a set of coordinated marketing messages, delivered at intervals, with a specific goal, such as raising funds for a cause or candidate or increasing sales of a product.
cause marketing is a business relationship in which a for-profit and a nonprofit form a partnership that results in increased business for the for-profit and a financial return for the nonprofit.
civic media is any form of communication that strengthens the social bonds within a community or creates a strong sense of civic engagement among its residents.
cloud computing refers to the growing phenomenon of users who can access their data from anywhere rather than being tied to a particular machine.
copyleft is the practice of using copyright law to remove restrictions on distributing copies and modified versions of a work for others and requiring that the same freedoms be preserved in modified versions.
Creative Commons is a not-for-profit organisation and licensing system that offers creators the ability to fine-tune their copyright, spelling out the ways in which others may use their works.
crowdsourcing refers to harnessing the skills and enthusiasm of those outside an organisation who are prepared to volunteer their time contributing content or skills and solving problems.
digital inclusion is an effort to help people who are not online gain access with affordable hardware, software, tech support/information and broadband Internet service.
digital story is a short, personal nonfiction narrative that is composed on a computer, often for publishing online or publishing to a DVD. Digital stories typically range from 2 to 5 minutes in length (though there are no strict rules) and can include music, art, photos, voiceover and video clips.
ebook is an electronic version of a traditional printed book that can be downloaded from the Internet and read on your computer or handheld device.
embedding is the act of adding code to a website so that a video or photo can be displayed while it is being hosted at another site. Many users now watch embedded YouTube videos or see Flickr photos on blogs rather than on the original site.
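A minimal sketch of what an embed amounts to: a snippet of markup that points back to the hosting site. The URL pattern below is invented for illustration:

    // Build the HTML snippet a blogger pastes into a post so that a
    // video hosted elsewhere is displayed in the page. Hypothetical URL.
    function embedSnippet(videoId: string): string {
      return '<iframe src="https://video.example.com/embed/' + videoId +
             '" width="480" height="270"></iframe>';
    }

    console.log(embedSnippet("abc123")); // paste the output into the post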
fair use is a doctrine in U.S. law that permits limited use of copyrighted material without obtaining the permission of the copyright holder, such as use for scholarship or review.
feed is a format that provides users with frequently updated content. Content distributors syndicate a Web feed, enabling users to subscribe to a site’s latest content.
flashmob is a group of individuals who gather and disperse with little notice for a specific purpose, organised through text messages, social media or viral emails.
folksonomy Users generate their own taxonomy to categorise and retrieve information on the Internet through the process of tagging. Ideally this allows for information sharing between users with a similar conceptual framework of terms.
geotagging is the process of adding location-based metadata to media such as photos, video or online maps. Geotagging can help users find a wide variety of businesses and services based on location.
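In practice, geotagging means attaching coordinates to a media item's record. A sketch with invented data shapes (the coordinates point roughly at central Helsinki), simpler than real standards such as Exif GPS tags:

    // Location metadata attached to a photo record.
    interface GeoTag {
      latitude: number;  // decimal degrees, north positive
      longitude: number; // decimal degrees, east positive
    }

    interface GeoPhoto {
      filename: string;
      geo?: GeoTag; // optional: not every photo is geotagged
    }

    const photo: GeoPhoto = {
      filename: "helsinki-harbour.jpg",
      geo: { latitude: 60.17, longitude: 24.94 },
    };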
GPL is short for GNU General Public License, often used with the release of open source software. An example of a copyleft license, it requires derived works to be made available under the same license.
GPS is shorthand for Global Positioning System, a global navigation satellite system. GPS-enabled devices – most commonly mobile handhelds or a car’s navigation system – enable precise pinpointing of the location of people, buildings and objects.
hashtag is a community-driven convention for adding additional context and metadata to your tweets. Similar to tags on Flickr, you add them in-line to your Twitter posts by prefixing a word with a hash symbol (or number sign).
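Because a hashtag is nothing more than an in-line text convention, extracting the tags from a post is a one-line pattern match, as this small TypeScript sketch shows:

    // Pull every #-prefixed word out of a post.
    function hashtags(post: string): string[] {
      return post.match(/#\w+/g) ?? []; // match() is null when no tags exist
    }

    console.log(hashtags("Slides from the seminar are up #mashup #epedagogy"));
    // -> ["#mashup", "#epedagogy"]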
lifestreaming is the practice of collecting a user's disjointed online presence in one central location or site. Lifestreaming services bring photos, videos, bookmarks, microblog posts and blog posts from a single user into one place using RSS.
mashup is the combination of data, functionality or other content from two or more sources into a new, single, integrated form. The term originated in the music scene. Now we are at the beginning of a process of specialisation into content creators and content providers. The term mashup implies easy and fast integration that will lead to higher productivity in the information society of the future.
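In the Web-programming sense, a mashup can be as simple as joining two independent feeds on a shared field. A sketch with two hypothetical JSON endpoints:

    // A miniature mashup: event venues from one service, weather from
    // another, combined into a single new view. Both URLs are invented.
    interface Venue { name: string; city: string; }
    interface Forecast { city: string; summary: string; }

    async function venuesWithWeather(): Promise<string[]> {
      const venues =
        (await (await fetch("https://events.example.com/venues")).json()) as Venue[];
      const forecasts =
        (await (await fetch("https://weather.example.com/today")).json()) as Forecast[];
      // Join the two sources on the shared "city" field.
      return venues.map((v) => {
        const f = forecasts.find((w) => w.city === v.city);
        return v.name + " (" + v.city + "): " + (f ? f.summary : "no forecast");
      });
    }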
metadata refers to information – including titles, descriptions, tags and captions – that describes a media item such as a video, photo or blog post. Metadata is typically structured in a standardised fashion using a metadata scheme of some sort, including metadata standards and metadata models. Tools such as controlled vocabularies, taxonomies, thesauri, data dictionaries and metadata registries can be used to apply further standardisation to the metadata.
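A sketch of metadata as a structured record; the field names form an invented scheme, far simpler than real standards such as Dublin Core:

    // A media item described by a small, consistent set of fields.
    interface MediaMetadata {
      title: string;
      description: string;
      tags: string[]; // free-form keywords
      created: Date;
    }

    const clip: MediaMetadata = {
      title: "Harbour timelapse",
      description: "An hour of ship traffic compressed to thirty seconds.",
      tags: ["helsinki", "timelapse", "harbour"],
      created: new Date("2009-06-22"),
    };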
microblogging is a form of multimedia blogging that allows users to send brief text updates or micromedia such as photos or audio clips and publish them, either to be viewed by anyone or by a restricted group which can be chosen by the user.
moblog is a blog published directly to the Web from a phone or other mobile device. Mobloggers may update their sites more frequently than other bloggers because they do not need to be at their computers to post.
net neutrality is the principle requiring Internet providers to act as common carriers and not discriminate among content or users – for example, by providing degraded service to rich-media sites, by reducing file-sharing services, by penalising customers who watch or download many videos or by blocking Internet applications and content from competitors.
nptech is shorthand for nonprofit technology. nptech encompasses a wide range of technologies that support the goals of nonprofit, NGO, grassroots and other cause organisations.
open media In its most common usage, open media refers to video, audio, text and other media that can be freely shared, often by using Creative Commons or GPL licenses. More narrowly, open media refers to content that is both shareable and created with a free format, such as Theora (video), Vorbis (audio, lossy), FLAC (audio, lossless), Speex (audio, voice), XSPF (playlists), SVG (vector image), PNG (raster image, lossless), OpenDocument (office), SMIL (media presentations) and others.
open platform refers to a software system that permits any device or application to connect to and operate on its network.
open source refers to software code that is free to build upon. But open source has taken on a broader meaning – such as open source journalism and open source politics – to refer to the practice of collaboration and free sharing of media and information to advance the public good.
open video refers to the movement to promote free expression and innovation in online video. With the release of HTML5, publishers will be able to publish video that can be viewed directly in Web browsers rather than through a proprietary player.
OpenID is a single sign-on system that allows Internet users to log on to many different sites using a single digital identity, eliminating the need for a different user name and password for each site.
permalink is the direct link to a blog entry. A blog contains multiple posts, and if you cite an entry you will want to link directly to that post.
personal media or user-created material refers to grassroots works such as video, audio and text. When the works are shared in a social space, the works are more commonly referred to as social media.
platform is the framework or content management system that runs software and presents content. WordPress, for example, is a service that serves as a platform for a community of blogs. In a larger context, the Internet is becoming a platform for applications and capabilities, using cloud computing.
podcast is a digital file (usually audio but sometimes video) made available for download to a portable device or personal computer for later playback.
podsafe is a term created in the podcasting community to refer to any work that allows the legal use of the work in podcasting, regardless of restrictions the same work might have in other realms, such as radio or television use.
public domain A work enters the public domain when it is donated by its creator or when its copyright expires. A work in the public domain can be freely used in any way, including commercial uses.
public media refers to any form of media that increases civic engagement and enhances the public good.
remix is an alternative version of a song, different from the original version. The name is also used for alterations of media other than songs (film, literature etc.).
RSS (Really Simple Syndication) – sometimes called Web feeds – is a Web standard for the delivery of content – blog entries, news stories, headlines, images, video – enabling readers to stay current with favourite publications or producers without having to browse from site to site.
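To show what a feed actually looks like on the wire, here is a sketch that assembles a stripped-down RSS 2.0 document from blog entries; a real feed would also escape the text and carry a channel link and description:

    // Build a minimal RSS 2.0 feed from a list of entries.
    interface Entry { title: string; link: string; }

    function rssFeed(channelTitle: string, entries: Entry[]): string {
      const items = entries
        .map((e) => "<item><title>" + e.title + "</title><link>" +
                    e.link + "</link></item>")
        .join("");
      return '<?xml version="1.0"?><rss version="2.0"><channel><title>' +
             channelTitle + "</title>" + items + "</channel></rss>";
    }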
screencast is a video that captures what takes place on a computer screen, usually accompanied by audio narration. A screencast is often created to explain how a website or piece of software works, but it can be any piece of explanatory video that strings together images or visual elements.
smart phone is a handheld device capable of advanced tasks beyond those of a standard mobile phone. Capabilities might include email, chat, taking photos or video or hundreds of other tasks.
social bookmarking is a method by which users locate, store, organise, share and manage bookmarks of Web pages without being tied to a particular machine. Users store lists of personally interesting Internet resources and usually make these lists publicly accessible.
social capital is a concept used in business, nonprofits and other arenas that refers to the good will and positive reputation that flows to a person through his or her relationships with others in social networks.
social enterprise is a social-mission-driven organisation that trades in goods or services for a social purpose.
social entrepreneurship is the practice of simultaneously pursuing both a financial and a social return on investment (the “double bottom line”). A social entrepreneur is someone who runs a social enterprise (sometimes called a social purpose business venture), pursuing both a financial and social return on investment.
social media are works of user-created video, audio, text or multimedia that are published and shared in a social environment, such as a blog, podcast, forum, wiki or video hosting site. More broadly, social media refers to any online technology that lets people publish, converse and share content online.
social networking is the act of socialising in an online community. A typical social network such as Facebook, LinkedIn, MySpace or Bebo allows you to create a profile, add friends, communicate with other members and add your own media.
social tools are software and platforms that enable participatory culture – for example, blogs, podcasts, forums, wikis and shared videos and presentations.
streaming media refers to video or audio that can be watched or listened to online but not stored permanently. Streamed audio is often called Webcasting.
sustainability In the nonprofit sector, sustainability is the ability to fund the future of a nonprofit through a combination of earned income, charitable contributions and public sector subsidies.
tag cloud is a visual representation of the popularity of the tags or descriptions that people are using on a blog or website. Popular tags are often shown in a large type and less popular tags in smaller type.
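The visual weighting is simple arithmetic: map each tag's frequency onto a font-size range. A sketch of one common linear scaling:

    // Scale tag counts linearly onto font sizes between minPx and maxPx.
    function tagCloudSizes(
      counts: Map<string, number>,
      minPx: number = 12,
      maxPx: number = 32
    ): Map<string, number> {
      const values = Array.from(counts.values());
      const lo = Math.min(...values);
      const hi = Math.max(...values);
      const sizes = new Map<string, number>();
      for (const [tag, n] of counts) {
        const t = hi === lo ? 1 : (n - lo) / (hi - lo); // popularity in 0..1
        sizes.set(tag, Math.round(minPx + t * (maxPx - minPx)));
      }
      return sizes;
    }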
tags are keywords added to a blog post, photo or video to help users find related topics or media, either through browsing on the site or as a term to make your entry more relevant to search engines.
tweet A post on Twitter, a real-time social messaging system. While all agree on the usage of tweet as a noun, people disagree on whether you “tweet” or “twitter” as a verb.
tweetup An organised or impromptu gathering of people who use Twitter. Users often include a hashtag, such as #tweetup or #sftweetup, when publicising a local tweetup.
UGC stands for user-generated content, an industry term that refers to all forms of user-created materials such as blog posts, reviews, podcasts, videos, comments and more.
unconference An unconference is a collaborative learning event organised and created for its participants by its participants. BarCamp is an example of a well-known unconference.
videoblog A videoblog, or vlog, is simply a blog that contains video entries. Some people call it video podcasting, vodcasting or vlogging.
viral The digital version of grassroots, ‘viral’ refers to the process of an article, video or podcast becoming popular by being passed from person to person or rising to the top of popularity lists on social media websites.
virtual worlds are online communities that often take the form of a computer-based simulated environment, through which users can interact with one another and use and create objects. The term today has become synonymous with interactive 3D virtual environments, where the users take the form of avatars, which are usually depicted as textual, two-dimensional, or three-dimensional graphical representations, although other forms are possible (auditory and touch sensations for example).
Web 2.0 refers to the second generation of the Web, which enables people with no specialised technical knowledge to create their own websites to self-publish, create and upload audio and video files, share photos and information and complete a variety of other tasks. It describes the Web as a community controlled interactive tool rather than a publishing medium.
webcast A broadcast that takes place over the Web and uses both audio and visual effects. For example, a web-based conference call that sends a presentation with charts and graphs to go alongside the speech.
widgets A widget is a small piece of transportable code, for example a calculator or a countdown to a movie’s release. Widgets can be placed on websites like a social networking profile, a custom home page or a blog.
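A sketch of the countdown example as transportable code: a small function any hosting page can call. The element id "release-counter" is an invented placeholder:

    // A tiny countdown widget: the hosting page supplies an element id
    // and a target date, and the widget renders the days remaining.
    function countdownWidget(targetId: string, releaseDate: Date): void {
      const el = document.getElementById(targetId);
      if (el === null) return; // host page lacks the placeholder element
      const msPerDay = 1000 * 60 * 60 * 24;
      const days = Math.ceil((releaseDate.getTime() - Date.now()) / msPerDay);
      el.textContent = days + " days to go";
    }

    countdownWidget("release-counter", new Date("2010-06-01"));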
wiki Community publishing tool or website that allows users to edit and control content. Wikis are collaborative projects that can be used to create extensive databases with the resource developed and expanded by its users.
About the Authors

Axel Bruns (DE/AU), PhD, is an Associate Professor in Media and Communication in the Creative Industries Faculty at Queensland University of Technology in Brisbane, Australia. He is the author of Blogs, Wikipedia, Second Life and Beyond: From Production to Produsage (2008; http://produsage.org/) and Gatewatching: Collaborative Online News Production (2005), and the editor of Uses of Blogs with Joanne Jacobs (2006; all released by Peter Lang, New York). He is a Chief Investigator in the ARC Centre of Excellence for Creative Industries and Innovation (http://cci.edu.au/). Bruns’s website, containing much of his work, is located at http://snurb.info/.

Brenda Castro (MX), MA, is a Graphic and Interaction Designer, currently working as a User Interface Designer for experimental projects at Nokia Research Center. Her experience includes designing graphical interfaces for distance learning, mainly for the National Autonomous University of Mexico (CATED-UNAM), as well as concepting and designing interactions and graphical interfaces for social networking and locative media, mainly for Nokia Research Center. She is currently conducting her doctoral studies, related to mobile learning, in ePedagogy Design at the School of Art and Design, Aalto University in Helsinki, formerly the University of Art and Design Helsinki.

Doris Gassert (CH), MA, studied English Philology and Media Studies at the University of Basel, with Law as a minor field of study. She is a PhD candidate within the postgraduate research project “Aesthetics of Intermediality. Play - Ritual - Performance” at the Institute for Media Studies /i/f/m, University of Basel. In her doctoral thesis she investigates the intermedial relationship of film and computer.

David Gauntlett (UK), PhD, is Professor of Media and Communications at the School of Media, Arts and Design, University of Westminster, UK. His teaching and research are in the area of media and identities, and the everyday creative use of digital media. He is the author of several books, including Moving Experiences (1995, 2005), Web Studies (2000, 2004), Media, Gender and Identity (2002, 2008), and Creative Explorations: New approaches to identities and audiences (2007), which was shortlisted for the Times Higher Young Academic Author of the Year Award. He produces the popular website about media and identities, Theory.org.uk.

Mizuko Ito (JP/USA), PhD, is a Japanese cultural anthropologist who is an Associate Researcher at the Humanities Research Institute at the University of California, Irvine. In addition, she is a Visiting Associate Professor at the Keio University Graduate School of Media and Governance. Her main professional interest is the use of media technology. Ito is known for her work exploring the ways in which digital media are changing relationships, identities, and communities. With Misa Matsuda and Daisuke Okabe, Ito edited Personal, Portable, Pedestrian: Mobile Phones in Japanese Life (MIT Press, 2005). In 2006, Ito received a MacArthur Foundation grant to “observe children’s interactions with digital media to get a sense of how they’re really using the technology.” This work has resulted in the founding of the “Digital Media and Learning Hub” (housed in the UCHRI) and the recent publication of two books: Hanging Out, Messing Around, and Geeking Out and Engineering Play: A Cultural History of Children’s Software.

Henry Jenkins (USA), PhD, is a media scholar and currently a Provost Professor of Communication, Journalism, and Cinematic Arts, a joint professorship at the USC Annenberg School for Communication and the USC School of Cinematic Arts. Previously, he was the Peter de Florez Professor of Humanities and Co-Director of the MIT Comparative Media Studies program with William Uricchio. He is also the author of several books, including Convergence Culture: Where Old and New Media Collide, Textual Poachers: Television Fans and Participatory Culture and What Made Pistachio Nuts?: Early Sound Comedy and the Vaudeville Aesthetic. Jenkins is the principal investigator for Project New Media Literacies (NML), a group which originated as part of the MacArthur Digital Media and Learning Initiative. He continues to be actively involved with the Convergence Culture Consortium, a faculty network which seeks to build bridges between academic researchers and the media industry in order to help inform the rethinking of consumer relations in an age of participatory culture. His personal blog can be found at http://www.henryjenkins.org.

Owen Kelly (UK), MA, is one of the first graduates in ePedagogy Design – Visual Knowledge Building; he currently teaches digital interactive media at Arcada University of Applied Sciences in Helsinki, Finland. He is the author of Community, Art and the State and Digital Creativity. He co-authored Another Standard: culture & democracy and The Creative Bits, and contributed to several other publications. His recent research projects include Marinetta Ombro, Arcada’s learning laboratory for online pedagogy and synthetic culture, and the development of the memi, a lifelong online learning space. He is a founding member of The League of Worlds, an annual conference dedicated to exploring the pedagogical implications of simulation and virtuality. His personal website can be found at www.owenkelly.net.

Joni Leimu (FI), MA, completed his MA studies in ePedagogy Design – Visual Knowledge Building with the MA thesis “Holoptimus Prime – harnessing metaverse in education” in 2009. For the past seven years, he has been employed by Hyria education college, teaching graphic design, web design and digital cultures. Earlier, he worked as a photographer, journalist, graphic designer and web designer.
Torsten Meyer (DE), PhD, is Professor of Art Education at the University of Cologne (Köln) in Germany. Previously he was the head of the MultiMedia-Studio at the Faculty of Education at the University of Hamburg and co-ordinator of the MA ePedagogy Design – Visual Knowledge Building in Hamburg. The main topic of Torsten Meyer’s teaching and research activities is the contention with the communicational, sociological, psychological and cultural effects, prospects, possibilities and realities of so-called “new media”, especially the Internet. He is the (co)editor of Kunst Pädagogik Forschung. Aktuelle Zugänge und Perspektiven (2009) and Bildung im Neuen Medium. Wissensformation und digitale Infrastruktur. Education Within a New Medium. Knowledge Formation and Digital Infrastructure (2008), and the author of Interfaces, Medien, Bildung. Paradigmen einer pädagogischen Medientheorie (2002). His personal website can be found at http://medialogy.de.

Eduardo Navas (USA), PhD, researches the crossover of art, culture, and media. His production includes art & media projects, critical texts, and curatorial projects. He has presented and lectured about his work and research in various places throughout the Americas and Europe. Navas collaborates with artists and institutions in various countries to organise events and develop new forms of publication. He has lectured on art and media theory at various colleges and universities in the United States, and his main research emphasis is on the history of Remix in order to understand the principles of remix culture. Selected texts and research projects are available on Remix Theory, http://remixtheory.net.

Christina Schwalbe (DE), MA, holds an engineering degree in media technology from the University of Applied Sciences in Hamburg, and she graduated as a student of ePedagogy Design – Visual Knowledge Building in 2007 with the master’s thesis “Networking in universities – how universities can accept the challenge and take an active, formative role.” Since December 2007 she has been employed as a scientific assistant in the project ePUSH at the University of Hamburg, and she has also been working as a coordinator in the eLearningBüro there. Her research interest is focused on teaching and learning in networked, digital structures and the correlation between communication structures and the development of media.

Stefan Sonvilla-Weiss (AT/FI), PhD, is Professor of eLearning in Visual Culture and head of the international MA programme ePedagogy Design – Visual Knowledge Building at Aalto University / School of Art and Design, formerly the University of Art and Design Helsinki. He coined the term Visual Knowledge Building, referring to “a visualisation process of interconnected models of distributed socio-cultural encoded data representations and simulations that are structured and contextualised by a learning community.” In his research he tries to find answers to how real and virtual space interactions can generate novel forms of communicative, creative and social practices in globally connected communities. He is the author of (IN)VISIBLE. Learning to Act in the Metaverse (Springer, 2008) and Virtual School – kunstnetzwerk.at (Peter Lang, 2003), and he has edited MASHUP CULTURES (Springer, 2010) and (e)Pedagogy Design – Visual Knowledge Building (Peter Lang, 2005).

Noora Sopula (FI), MA, is a teacher of media technology. She has always been interested in technology, and her first degree is a BSc in Telecommunications. Subsequently she completed another BSc, in media technology. After working several years in the field of technology, she started working as a teacher. While working as a teacher, she first conducted pedagogical studies and earned a teaching degree. In 2009 she completed her MA studies in ePedagogy Design – Visual Knowledge Building with the MA thesis “Pedagogical applications of touch-based user interfaces”.

Wey-Han Tan (DE), MA, graduated in 2006 in educational sciences (‘Diplompädagogik’) with minors in sociology and informatics and a focus on new media in education. In 2009 he earned a Master of Arts degree in ePedagogy Design – Visual Knowledge Building from the University of Art and Design Helsinki. Over the last few years he has given courses in the MultiMediaStudio of the University of Hamburg and at the University of Lüneburg, as well as having worked as a freelance Macromedia/Lingo programmer. Currently he is employed at the University of Hamburg as coordinator of the eLearningBüro of the faculty for educational sciences, psychology and movement sciences.

Tere Vadén (FI), PhD, is a philosopher working as a senior lecturer in the department of information studies and interactive media at the University of Tampere, Finland. His most recent publication is Wikiworld (Pluto Press, 2010, co-authored with Juha Suoranta), a book discussing the future of learning and educational media within the wider social and economic framework of contemporary capitalism.

Juha Varto (FI), PhD, is Professor of Research in Visual Arts and Education at Aalto University / School of Art and Design. His main interest is the methodology of artistic research as well as the epistemological approach to art practices. His latest monographs include Dance with the world, towards the ontology of singularity (in Finnish, 2007), The Art and Craft of Beauty (2008), and Basics of Artistic Research (2009).