Ambient Intelligence with Microsystems
Kieran Delaney
Ambient Intelligence with Microsystems: Augmented Materials and Smart Objects
Kieran Delaney Department of Electronic Engineering Cork Institute of Technology Bishopstown, Cork, Ireland
ISBN: 978-0-387-46263-9 DOI: 10.1007/978-0-387-46264-6
e-ISBN: 978-0-387-46264-6
Library of Congress Control Number: 2008928644

© 2008 Springer Science + Business Media, LLC
All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science + Business Media, LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.

Printed on acid-free paper

9 8 7 6 5 4 3 2 1

springer.com
Preface
1 Introduction
The future of information technology systems will be driven by concepts such as that of Ambient Intelligence (AmI). In this vision, Ambient Intelligence will surround us with proactive interfaces supported by massively and appropriately distributed computing and networking technology platforms. This provides a challenge for technology development that is likely to result in the vast integration of information systems into everyday objects, such as furniture, clothes, vehicles, roads and even materials like paint, wallpaper, etc. Thus, it is a vision that is fundamentally based upon deriving solutions enabled by current and future microsystems. The recent level of progress in the area of microsystems opens up numerous opportunities; however, the development of practical approaches to realise this potential is non-trivial. An effective methodology is to create some form of co-design process between hardware, software and user-design research that encapsulates the full requirements of the AmI vision and the physical capabilities and constraints of its component technologies. The approach has already led to significant developments, both in terms of theory and practical research; these include the Disappearing Computer initiative, Augmented Materials, Smart Matter, the eGrain and eCube programmes, as well as related initiatives in wireless sensor networking that have their origin in the concept of Smart Dust.
2 Scope
This book investigates the development of networkable smart objects for Ambient Intelligence (AmI) with specific emphasis upon the implementation of the microsystems and nanoscale devices required to achieve effective smart systems. In this context, it seeks to investigate the challenges and potential solutions that will ensure the technology platforms are created to be capable of being seamlessly
integrated into everyday objects. In particular, this includes the requirements and possibilities for integrated computation and MEMS sensors, embedded microelectronic sub-systems, including the System-in-Package (SiP) and Multi-Chip Module (MCM), as well as novel assembly techniques for autonomous MEMS sensors. However, in order to do this effectively, many aspects of the creation of hierarchical systems must be investigated; thus, a series of chapters is also included here to provide an insight into this. These cover conceptual topics designed to create common multi-disciplinary visions, such as AmI, Pervasive Computing, Smart Dust, etc. The framework for part of this discussion on vision statements will be the concept of Augmented Materials; these are materials with fully embedded distributed information systems, designed to measure all relevant physical properties and provide a full knowledge representation of the material; in effect, the material would “know” itself and its current status. It is a concept that seeks to harness the steps used to physically fabricate and assemble smart objects as a natural programming language for these ‘materials’. This book also includes chapters describing technology platforms that are specifically important to the creation of smart objects (and indeed AmI itself), including sensor subsystem development (for example, using toolkits), wireless networking technologies and systems-level software. Numerous challenges, when viewed through a ‘whole-systems’ perspective, cross many of these ‘layers’ of technology and thus require solutions that are optimized through some form of co-design process. Two topics have been selected from among these problem statements and are discussed in more detail, namely the well-heralded issue of energy management and scavenging and the more elusive, though no less important, issue of robustness and reliability. The challenge of co-design itself is also addressed in this context.
To be successful in realising such methodologies requires more than just a systemic technological solution. The nature of AmI and Smart Environments is such that multiple forms of augmented (tangible) artifact will need to function together. Importantly, there is the question of what to build: in other words, what user need is being served and is it meaningful? There is also the issue of how to build it. Broader interaction between industry and academia is certainly a challenge here, particularly given that researching networkable smart systems is going to require multiple academic disciplines. So, what are the approaches that can help companies to bridge this gap more easily? How can they provide added value to both industry and academia? These issues are dealt with directly in two dedicated chapters. Finally, there is the practical issue of creating Smart Systems. Finding solutions to the challenges of building networkable smart objects is best researched by prototyping them; often, a process in which multiple prototypes are built offers the greatest insight. Three approaches to investigating, building and demonstrating prototypes are presented. The first approach uses existing devices and systems to build new experiences, elements of a ‘responsive environment’ that are crystallized through creating and demonstrating tangible systems. The second investigates how an augmented material might be built into a new object through a case study about
a ‘smart table’; both ‘top-down’ and ‘bottom-up’ approaches are applied. The third approach discusses a case study following the design and implementation of a monitoring system derived from specific user requirements. The realization of networkable smart objects, and their integration into a larger AmI landscape, is a significant undertaking. The search for effective solutions is a hugely multi-disciplinary exercise, as rewarding as it is challenging. If it is undertaken purely on technological terms then, while numerous interesting ‘gadgets’ may emerge, the solutions are not likely to have a lasting impact: gadgets are consumable. If, however, we invest in a process where all of the important disciplines (technological, social, industrial) work together, completing the hard process of genuine collaboration, then the impact may well be huge.
Contents

Part I The Concepts: Pervasive Computing and Unobtrusive Technologies

1 An Overview of Pervasive Computing Systems .................................... 3
  Juan Ye, Simon Dobson, and Paddy Nixon

2 Augmenting Materials to Build Cooperating Objects ............................ 19
  Kieran Delaney, Simon Dobson

Part II Device Technologies: Microsystems, Micro Sensors and Emerging Silicon Technologies

3 Overview of Component Level Devices .......................................... 49
  Erik Jung

4 Silicon Technologies for Microsystems, Microsensors and Nanoscale Devices ... 81
  Thomas Healy

Part III Hardware Sub-Systems Technologies: Hybrid Technology Platforms, Integrated Systems

5 Distributed, Embedded Sensor and Actuator Platforms ......................... 105
  John Barton, Erik Jung

6 Embedded Microelectronic Subsystems ......................................... 131
  John Barton

Part IV Networking Technologies: Wireless Networking and Wireless Sensor Networks

7 Embedded Wireless Networking: Principles, Protocols, and Standards ......... 157
  Dirk Pesch, Susan Rea, and Andreas Timm-Giel

Part V Systems Technologies: Context, Smart Behaviour and Interactivity

8 Context in Pervasive Environments ........................................... 187
  Donna Griffin, Dirk Pesch

9 Achieving Co-Operation and Developing Smart Behavior in Collections of Context-Aware Artifacts ... 205
  Christos Goumopoulos, Achilles Kameas

Part VI System-Level Challenges: Technology Limits and Ambient Intelligence

10 Power Management, Energy Conversion and Energy Scavenging for Smart Systems ... 241
   Terence O’Donnell, Wensi Wang

11 Challenges for Hardware Reliability in Networked Embedded Systems ......... 267
   John Barrett

Part VII System Co-Design: Co-Design Processes for Pervasive Systems

12 Co-Design: From Electronic Substrates to Smart Objects ..................... 285
   Kieran Delaney, Jian Liang

13 Co-Design for Context Awareness in Pervasive Systems ....................... 297
   Simon Dobson

Part VIII User-Centered Systems: From Concept to Reality in Practical Steps

14 User-Centred Design and Development of Future Smart Systems: Opportunities and Challenges ... 311
   Justin Knecht

15 Embedded Systems Research and Innovation Programmes for Industry .......... 323
   Kieran Delaney

Part IX Applied Systems: Building Smart Systems in the Real World

16 Sensor Architectures for Interactive Environments .......................... 345
   Joseph A. Paradiso

17 Building Networkable Smart and Cooperating Objects ......................... 363
   Kieran Delaney, Ken Murray, and Jian Liang

18 Dedicated Networking Solutions for a Container Tracking System ............. 387
   Daniel Rogoz, Fergus O’Reilly

Conclusion ..................................................................... 409

Index .......................................................................... 411
Contributors
John Barrett Department of Electronic Engineering, Centre for Adaptive Wireless Systems, Smart Systems Integration Group, Cork Institute of Technology, Rossa Avenue, Bishopstown, Cork, Ireland

John Barton Tyndall National Institute, Lee Maltings, Prospect Row, Cork, Ireland

Kieran Delaney Centre for Adaptive Wireless Systems, Cork Institute of Technology, Rossa Avenue, Bishopstown, Cork, Ireland

Simon Dobson Systems Research Group, School of Computer Science and Informatics, UCD Dublin, Belfield, Dublin 4, Ireland

Christos Goumopoulos Distributed Ambient Information Systems Group, Computer Technology Institute, Patras, Hellas

Donna Griffin Centre for Adaptive Wireless Systems, Cork Institute of Technology, Rossa Avenue, Bishopstown, Cork, Ireland

Thomas Healy Tyndall National Institute, Cork, Ireland

Erik Jung Fraunhofer IZM, Gustav-Meyer-Allee 25, 13355 Berlin, Germany

Achilles Kameas Distributed Ambient Information Systems Group, Computer Technology Institute, Patras, Hellas

Justin Knecht Centre for Design Innovation, Sligo, Ireland
Jian Liang Centre for Adaptive Wireless Systems, Cork Institute of Technology, Rossa Avenue, Bishopstown, Cork, Ireland

Ken Murray Centre for Adaptive Wireless Systems, Cork Institute of Technology, Rossa Avenue, Bishopstown, Cork, Ireland

Paddy Nixon Systems Research Group, School of Computer Science and Informatics, UCD Dublin, Belfield, Dublin 4, Ireland

Terence O’Donnell Tyndall National Institute, University College Cork, Cork, Ireland

Fergus O’Reilly Technologies for Embedded Computing (TEC) Centre, Cork Institute of Technology, Rossa Avenue, Bishopstown, Cork, Ireland

Joseph A. Paradiso Responsive Environments Group at the MIT Media Laboratory, 20 Ames Street, Cambridge, MA, USA

Dirk Pesch Centre for Adaptive Wireless Systems, Cork Institute of Technology, Rossa Avenue, Bishopstown, Cork, Ireland

Susan Rea Centre for Adaptive Wireless Systems, Cork Institute of Technology, Rossa Avenue, Bishopstown, Cork, Ireland

Daniel Rogoz Technologies for Embedded Computing (TEC) Centre, Cork Institute of Technology, Rossa Avenue, Bishopstown, Cork, Ireland

Andreas Timm-Giel TZI/iKOM/ComNets, University of Bremen, Bremen, Germany

Wensi Wang Tyndall National Institute, University College Cork, Cork, Ireland

Juan Ye Systems Research Group, School of Computer Science and Informatics, UCD Dublin, Belfield, Dublin 4, Ireland
Part I
The Concepts: Pervasive Computing and Unobtrusive Technologies
1.1 Summary
Innovation, particularly when relating to Information and Communication Technologies (ICT), is guided by technology roadmaps and, increasingly, by vision statements. With multi-disciplinary research becoming the norm, a common framework, often built around the user as participant in a form of scenario, is required to even begin the process of ‘co-innovation’. In an increasing number of areas, this is being derived from vision statements that capture the imagination of both the research community and society at large. In this part, we discuss a selection of visions and concepts. Realising the vision of Pervasive Computing, in part or in total, is certainly an overarching aim for many research initiatives. The concept, which envisions services that respond directly to their user and environment, with greatly reduced explicit human guidance, is one of the most influential of the past twenty years. A second vision statement, that of Augmented Materials, has evolved from other established concepts, such as Smart Dust, Smart Matter and the Disappearing Computer, to address a key requirement in the creation of Pervasive Computing systems: the introduction of unobtrusive technologies.
1.2 Relevance to Microsystems
Simply put, a vision like Pervasive Computing, or a concept like Augmented Materials, creates the conditions for determining what will be required in the future for hardware and software technologies. It provides a framework for determining what is currently possible – and perhaps even required – with existing solutions. It also points the way to new innovation; targets for new forms of hardware, including microsystems with new geometries, materials and functions, will emerge from the drivers created by these visions. In effect, these opportunities are only limited by the imagination. However, this major opportunity for ‘technology push’ is balanced by a growing analysis of user needs. The emphasis placed upon this is in fact a litmus test of whether these visions are being correctly applied. Finding and understanding the real need remains a key goal; in this context, the difference is that, since these visions largely do not yet exist in reality, an appropriate amount of exploration must take place.
1.3 Recommended References
There are numerous publications that would support a deeper understanding of these concepts and the driving forces behind them. In addition to the references provided in each chapter, the following publications should provide further insight to the interested reader:

1. Mark Weiser, “The Computer for the Twenty-First Century,” Scientific American, pp. 94-104, September 1991.
2. Adam Greenfield, Everyware: The Dawning Age of Ubiquitous Computing, New Riders Publishing, 2006.
3. Emile Aarts and José Encarnação, True Visions: The Emergence of Ambient Intelligence, Springer, 2006.
4. Simon Dobson, Kieran Delaney, Kafil Mahmood Razeeb and Sergey Tsvetkov, “A Co-Designed Hardware/Software Architecture for Augmented Materials,” in Proceedings of the 2nd International Workshop on Mobility Aware Technologies and Applications, Thomas Magedanz, Ahmed Karmouch, Samuel Pierre and Iakovos Venieris (eds.), Volume 3744 of LNCS, Montréal, Canada, 2005.
Chapter 1
An Overview of Pervasive Computing Systems Juan Ye, Simon Dobson, and Paddy Nixon
Abstract Pervasive computing aims to create services that respond directly to their user and environment, with greatly reduced explicit human guidance. The possibility of integrating IT services directly into users’ lives and activities is very attractive, opening up new application areas. But how has the field developed? What have been the most influential ideas and projects? What research questions remain open? What are the barriers to real-world deployment? In this chapter we briefly survey the development of pervasive computing and outline the significant challenges still being addressed.

Keywords Pervasive computing, Ubiquitous computing, Location, Adaptation, Behaviour, Context, Situation
1 Introduction of Pervasive Computing
The history of computing is peppered with paradigm shifts in how the relationship between humans and computers is perceived. After mainframe computing, minicomputing and personal computing, a fourth wave is now taking place: pervasive (or ubiquitous) computing, proposed by Mark Weiser in his seminal 1991 paper. Weiser describes a vision of pervasive computing that still inspires more than 15 years later:

The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it. [1]
The essence of Weiser’s vision was the creation of environments saturated with computing capability and wireless communications, whose services were gracefully integrated with human user action [2]. Computing thus becomes pervasive; available always and everywhere.
Systems Research Group, School of Computer Science and Informatics, UCD Dublin, Belfield, Dublin 4, Ireland
K. Delaney, Ambient Intelligence with Microsystems, © Springer 2008
We could distinguish (as some authors do) between ubiquitous computing that is provided by a continuous networked infrastructure of accessible devices, and pervasive computing that focuses on providing seamless and cognitively integrated services to users – however, this distinction is becoming increasingly unnecessary in the era of WiFi and Bluetooth networks and we shall focus almost exclusively on service provision.

In pervasive systems, people rely on the electronic creation, storage, and transmission of personal, financial, and other confidential information. This in turn demands the highest security for these transactions, and requires access to time-sensitive data – all regardless of the physical location. Devices like personal digital assistants (PDAs), “smart” mobile phones, ultra-mobile laptops and office PCs, and even home entertainment systems are expected to work together in one seamlessly-integrated system. In addition, a pervasive computing environment assumes a number of invisible sensing/computational entities that collect information about the users and the environment. With the help of these entities, devices can deliver customised services to users in a contextual manner when they are interacting and exchanging information with the environment [3].

Simply put, pervasive computing is a post-desktop model of human-computer interaction where computation is embedded in everyday objects that gather information from users and their surrounding environments and accordingly provide customised services [4, 5]. Pervasive computing aims to empower people to accomplish an increasing number of personal and professional transactions using new classes of intelligent and portable appliances, devices, or artifacts with embedded microprocessors that allow them to employ intelligent networks and gain direct, simple, and secure access to both the relevant information and services.
It gives people access to information stored on powerful networks, allowing them to easily take action anywhere and at any time. In principle, to be effective, pervasive computing must simplify life by combining open standards-based applications with everyday activities. It must remove the complexity of new technologies, enable us to be more efficient in our work and leave us more leisure time; delivered thus, pervasive computing will become part of everyday life. Achieving this in practice will prove to be a challenge.
2 Representative Examples of Pervasive Computing Applications
Pervasive computing is maturing from its origins as an academic research topic to a commercial reality. It has many potential applications, from the intelligent office and the smart home to healthcare, gaming and leisure systems and public transportation. Three specific application domains are outlined here: healthcare, public transportation, and the smart home.

Pervasive healthcare is an emerging research discipline, focusing on the development and application of pervasive/ubiquitous computing technology for healthcare and life wellness [6, 7]. Pervasive computing technologies introduce new diagnostic and monitoring methods that directly contribute to improvements in therapy and medical treatment [8]. These methods involve sensors and monitoring devices, such as blood pressure cuffs and glucose meters, which can collect and disseminate information to healthcare providers. They can support a better understanding of facets of a patient’s daily life, allowing therapies to be adapted appropriately to the individual. One scenario would be a hospital where a patient is constantly monitored, and the findings are linked to a diagnostic process. Thus, it could be possible to advise the hospital canteen to prepare special food for this particular patient and to adapt the patient’s specific medication according to his current health condition. Pervasive computing technologies can also improve the procedure of medical treatment (an example is given in Fig. 1.1). In emergency care, they can accelerate access to medical records at the emergency site or seek urgent help from multiple experts virtually. In the surgical field, they can collect and process an ever-increasing range of telemetric data from instruments used in an operating room and augment human ability to detect patterns that could require immediate action [9].

Pervasive computing technologies are also entering our everyday life as embedded systems in transportation [11, 12]. A number of applications have emerged. In tourist guides, a pervasive computing system can provide personalised services (like locating a specific type of restaurant or planning a daytrip) for visitors based on their location and preferences. In traffic control, a system can be immediately informed of incidents of congestion or the occurrence of accidents and notify all approaching drivers. In route planning, a system can suggest the most convenient routes for users based on the current traffic conditions and the transportation modes being used.
At a public transportation hub, a system can provide high value-added services to improve customer convenience [13] (Fig. 1.2).
Fig. 1.1 An example scenario of pervasive computing technologies in Healthcare [10]
Fig. 1.2 A “smart station vision” scenario of providing on-demand information services for customers from departure place to destination [13]
The introduction of pervasive computing into transportation is facilitated by a range of technologies, particularly networks and positioning systems.

Pervasive computing technologies are also becoming essential components in the home environment. A house can be set up to act as an intelligent agent: perceiving the state of the home environment through installed sensors and acting through device controllers. The goal is to maximise the comfort and security of its inhabitants and minimise operating cost. For example, applications in a smart home can improve energy efficiency by automatically adjusting heating, cooling or lighting levels according to the condition of the inhabitants (for example, location or body temperature). They can also provide reminders of shopping orders according to the usage of groceries and schedule the entertainment system (for example, playing music or a movie, or switching on a TV) according to the inhabitants’ hobbies and habits. In these cases, pervasive computing technologies are applied to identify, automate and predict the activity patterns of inhabitants from synthetic and real collected data [14, 15].
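The sense-decide-act loop of such a home agent can be illustrated with a minimal sketch. This is purely an assumption-laden illustration, not code from any of the projects discussed here; the `HomeState` fields, thresholds, and command strings are all hypothetical.

```python
# Hypothetical sketch of a "house as intelligent agent" loop:
# sensors report state, simple rules drive actuator commands.
from dataclasses import dataclass

@dataclass
class HomeState:
    occupied: bool       # from, e.g., a presence sensor
    room_temp_c: float   # from a temperature sensor
    lights_on: bool      # current actuator state

def decide_actions(state: HomeState) -> list[str]:
    """Map the sensed home state to actuator commands."""
    actions = []
    if not state.occupied:
        # Minimise operating cost when nobody is home.
        if state.lights_on:
            actions.append("lights:off")
        actions.append("heating:setback")
    else:
        # Maximise comfort for the inhabitants.
        if state.room_temp_c < 19.0:
            actions.append("heating:on")
        elif state.room_temp_c > 24.0:
            actions.append("cooling:on")
    return actions

print(decide_actions(HomeState(occupied=False, room_temp_c=21.0, lights_on=True)))
# → ['lights:off', 'heating:setback']
```

Real deployments replace such hand-written rules with activity patterns learned from the synthetic and real data mentioned above, but the perceive-decide-act structure is the same.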
3 The History and Issues of Pervasive Computing
Pervasive computing represents a major evolutionary step in a line of work dating back to the mid-1970s. Two distinct earlier steps are distributed systems and mobile computing [16]. Fig. 1.3 shows how research problems in pervasive computing relate to those in distributed systems and mobile computing.

The advent of networking enabled independent personal computers to evolve into distributed systems. The mechanisms for linking remote resources provided a means of integrating distributed information into a single structure and distributing computing capabilities. The network has pioneered the creation of a ubiquitous information and communication infrastructure, and thus it is a potential starting point for pervasive computing [17]. A similar evolution is driving distributed systems to become pervasive by introducing seamless access to remote information resources and communication with fault tolerance, high availability, and security.

Mobile computing emerged from the integration of cellular technology and the network. Short-range wireless and wide-area wireless (or wired communication) then boosted the development of mobile computing. Both the size and price of mobile devices (for example, laptops or mobile phones) are falling every day and could eventually support pervasive computing with inch-scale computing devices readily available to users in any human environment. In mobile computing, the research problems overlapping with pervasive computing include mobile networking, mobile information access, adaptive applications, energy-aware systems and location sensitivity.

While it is possible to get caught up in the “pervasive-ness” part of this new technology, it is also important to realise how much such systems rely on existing information bases and infrastructures. In transportation, for example, services such as Google Maps provide much of the raw information needed to create the value-added, location-based service. Pervasive systems are therefore only part of a larger information infrastructure.
It is necessary to appreciate both how small a part of the overall system may need to be pervasive and how large the impact of providing seamless integration of services in everyday life can be [16].

Fig. 1.3 Taxonomy of computer systems research problems in pervasive computing [16]. (The taxonomy nests three sets of problems. Distributed systems: remote communication; fault tolerance; high availability; remote information access; distributed security. Mobile computing adds: mobile networking; mobile information access; adaptive applications; energy-aware systems; location sensitivity. Pervasive computing adds: smart spaces; invisibility; localised scalability; uneven conditioning.)

This brings us to several research issues. The first issue is the effective use of smart spaces. A smart space is a work or living space with embedded computers, information appliances, and multi-modal sensors that allow people to work and live efficiently (together or individually) with an unprecedented access to information and support from local computers [18]. Examples of suitable sites for smart spaces include a business meeting room, a medical consultation meeting room, a training and education facility, a house, a classroom, and a crisis management command center. A smart space should adapt to changes in its environment, recognising different users and providing personalised services for them.

The second research issue is invisibility, which was described by Weiser as follows: “there is more information available at our fingertips during a walk in the woods than in any computer system, yet people find a walk among trees relaxing and computers frustrating. Machines that fit the human environment instead of forcing humans to enter theirs will make using a computer as refreshing as taking a walk in the woods”. Streitz and Nixon summarised two forms of invisibility [2]. Physical invisibility refers to the miniaturisation of computing devices and their embedding within and throughout the individual and the environment; for example in clothes, glasses, pens, cups, or even the human body itself. Cognitive invisibility refers to the ability to use the system’s services in a manner that is free from distraction. A pervasive computing environment should interact with users at almost a subconscious level if it is to continuously meet the expectations of users; it should rarely present them with surprises. (This also approximates the minimal user distraction described by Satyanarayanan [16].)

The third research issue is localised scalability. Scalability is a critical problem in pervasive computing, since the intensity of interactions between devices will increase in these environments as more and more users are involved. The density of these interactions must be decreased by reducing distant interactions that are of little relevance to current applications.

The fourth research issue is masking heterogeneity. The rate of penetration of pervasive computing technology into the infrastructure will vary considerably. To make pervasive computing technology invisible requires reductions in the amount of variation across different technologies, infrastructures, and environments.
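The localised-scalability idea can be made concrete with a toy sketch: a device processes interactions only from peers within a relevance radius, damping the density of distant, low-relevance interactions. This is an illustrative assumption of one possible policy (distance as the relevance measure), not a mechanism prescribed by the chapter; the names and coordinates are invented.

```python
# Hypothetical illustration of localised scalability: filter peer devices
# by physical distance so that distant, low-relevance peers are ignored.
import math

def nearby_peers(me, peers, radius_m=10.0):
    """Keep only peers whose distance from `me` is within radius_m metres."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return [name for name, pos in peers.items() if dist(me, pos) <= radius_m]

peers = {"lamp": (1.0, 2.0), "car": (250.0, 40.0), "thermostat": (4.0, 3.0)}
print(nearby_peers((0.0, 0.0), peers))
# → ['lamp', 'thermostat']  (the distant "car" is filtered out)
```

A real system would combine distance with application-level relevance, but the principle of suppressing far-away interactions is the same.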
4 Significant Projects
Pervasive computing projects have been advanced both in academia and industry. Some of the most influential projects include Aura at Carnegie Mellon University [19], Oxygen at MIT [20], Gaia at UIUC [21], Sentient Computing at AT&T Laboratories in Cambridge [22], the Disappearing Computer initiative from the EU Fifth Framework Programme [23], the TRIL Center [6], GUIDE at Lancaster University [24], Cooltown at Hewlett-Packard [25], and EasyLiving at Microsoft [26]. Some of these projects will be described here to provide a sense of the breadth of research taking place in this topic.

The Aura project at CMU aimed to design, implement, deploy, and evaluate a large-scale computing system that demonstrates the concept of a “personal information aura”, which spans wearable, handheld, desktop and infrastructural computers [19]. In Aura, each mobile user was provided with an invisible halo of computing and information services that persisted regardless of location. The goal was to maximise available user resources and to minimise distraction and drains on user attention. To meet this goal, many individual research issues evolved within the Aura project, from the work on hardware and network layers through the operating system and middleware to the user interface and applications.

The Oxygen project depicted computation as human-centered and freely available everywhere, like the oxygen in the air we breathe [20]. Oxygen enabled pervasive, human-centered computing through a combination of specific user and system technologies. The project focused on the following technologies: device, network, software, perceptual, and user technologies.

The Disappearing Computer initiative sought to design information artefacts based on new software and hardware architectures that were integrated into everyday objects, to coordinate these information artefacts to act together and to investigate new approaches that ensure user experience is consistent and engaging in an environment
J. Ye et al.
filled with such information artefacts [23]. This initiative included GLOSS [27], e-Gadgets [28], Smart-its [29], and other projects. A typical example was the GLOSS project (GLObal Smart Spaces), which aimed to provide information technology that respected social norms – allowing established ways of interaction to be generated or saved as required [27]. The project provided a theoretical framework and a technological infrastructure to support emerging functionality paradigms for user interactions. The goal was to make computing cognitively and physically disappear. The TRIL Center is a coordinated group of research projects addressing the physical, cognitive and social consequences of aging, recognising the increase in the aging population globally. The center’s objective is to assist older people around the world to live longer from wherever they call home, while minimising their dependence on others and improving routine interactions with healthcare systems. It entails multi-disciplinary research on pervasive technologies to support older people living independently [6]. The Cooltown project at HP aimed to provide an infrastructure for nomadic computing; that is, nomadic users are provided with particular services that are integrated within the entities of the everyday physical world through which they go about their everyday lives [25]. This project focused on extending web technology, wireless networks and portable devices to bridge the virtual link between mobile users, physical entities, and electronic services. The Microsoft EasyLiving project developed a prototype architecture and technologies for building intelligent environments [26]. This project supported research addressing middleware, geometric world modeling, perception, and service description.
The key features included computer vision for person tracking and visual user interaction, the combination of multiple sensor modalities, the use of a geometric model of the world to provide context, and the automatic or semi-automatic calibration of sensors and model building. Fine-grained events, adaptation of user interfaces, device-independent communication and data protocols, and extensibility were also addressed.
5
Open Research Issues
Pervasive computing offers a framework for new and exciting research across the spectrum of computer science. New research themes cover basic technology and infrastructure issues, interactions in which computers are invisible, and pressing issues of privacy and security [3, 30].
5.1
Hardware Components
Hardware devices are expected to be cheaper, smaller, lighter, and to have longer battery life without compromising their computing and communications capabilities. Their cost and size should make it possible to augment everyday objects with built-in computing devices (for example, the prototype in Fig. 1.4). These everyday
1 An Overview of Pervasive Computing Systems
Fig. 1.4 The Mediacup is an ordinary coffee cup with sensors, processing and communication embedded in the base [31].
objects can then potentially gather information (including light, temperature, audio, humidity, and location) from their environment, transmit it, and take actions based upon it. These devices should generally be low-power in order to free them from the constraints of existing or dedicated wired power supplies. Specialised circuit designs may permit operation over a much wider range of voltages or enable power savings using other optimisation techniques. Chalmers et al. suggested that it may be possible to use solar cells, fuel cells, heat converters, or motion converters to harvest energy [30]. Other resource constraints can also be overcome. Satyanarayanan described cyber-foraging as a potentially effective way to dynamically augment the computing resources of a wireless mobile computer by exploiting local wired hardware infrastructure [16].
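To make the power constraint concrete, a back-of-the-envelope estimate shows why duty cycling (sleeping between readings) dominates battery life for an embedded node. The figures below are illustrative assumptions, not measurements from any device discussed in this chapter.

```python
def battery_life_days(capacity_mah, active_ma, sleep_ma, duty_cycle):
    """Estimate battery life for a node that is active a given fraction of time."""
    avg_ma = duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma
    return capacity_mah / avg_ma / 24  # mAh / mA = hours; / 24 = days

# Hypothetical node: 20 mA when sensing/transmitting, 10 uA asleep, 1000 mAh cell.
always_on = battery_life_days(1000, active_ma=20, sleep_ma=0.01, duty_cycle=1.0)
duty_cycled = battery_life_days(1000, active_ma=20, sleep_ma=0.01, duty_cycle=0.01)

print(round(always_on, 1))   # about 2 days
print(round(duty_cycled))    # roughly 200 days
```

The two-orders-of-magnitude gap explains why specialised low-power circuit design and energy harvesting, as noted above, are central hardware research issues.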
5.2
Software Engineering
In pervasive computing systems, the number of users and devices will greatly increase, as will the degrees of interaction between them. A tremendous number of applications are distributed and installed separately for each device class, processor family, and operating system. As the number of devices grows, these applications will become unmanageable. Pervasive computing must find ways to mask heterogeneity since, in the implementation of pervasive computing environments, it is hard
to achieve uniformity and compatibility. The challenges encompass a new level of component interoperability and extensibility and new dependability guarantees, including adaptation to changing environments, tolerance of routine failures, and security despite a shrunken basis of trust [32]. From a systems perspective, infrastructures deployed in a pervasive computing system should be long-lived and robust. These infrastructures include sensors and devices, hardware for input and output interaction, software for manipulating and controlling interaction devices, and communication structures from small to large scale. These infrastructures must be able to perform in situ upgrades and updates, and the interactions within them should be fluent. This can be enabled by developing appropriate programming primitives. This new programming model will deal with sensor communication, the semantics of the system (for example, knowledge, data, and software for applications), the corresponding implementations, and so forth [3].
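One common way to mask heterogeneity, in the spirit of the programming-model discussion above, is to write applications against abstract capabilities rather than concrete device classes. The sketch below uses names and calibration values of our own invention; it is not drawn from any cited system.

```python
from abc import ABC, abstractmethod

class TemperatureSource(ABC):
    """Capability interface: any device that can report a temperature."""
    @abstractmethod
    def read_celsius(self) -> float: ...

class MoteAdapter(TemperatureSource):
    """Adapter for a hypothetical sensor mote reporting raw ADC counts."""
    def __init__(self, raw_reading: int):
        self.raw = raw_reading
    def read_celsius(self) -> float:
        return self.raw * 0.25 - 40.0  # invented calibration for illustration

class WeatherServiceAdapter(TemperatureSource):
    """Adapter for a hypothetical networked service reporting degrees directly."""
    def __init__(self, celsius: float):
        self.value = celsius
    def read_celsius(self) -> float:
        return self.value

def average_temperature(sources):
    """Application code sees only the capability, never the device class."""
    readings = [s.read_celsius() for s in sources]
    return sum(readings) / len(readings)

print(average_temperature([MoteAdapter(240), WeatherServiceAdapter(21.0)]))  # 20.5
```

Adding a new device class then means writing one adapter, not changing every application.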
5.3
Context-awareness
Perception or context-awareness is an intrinsic characteristic of intelligent environments. Context can be any information about a user, including environmental parameters such as location, physiological states (like body temperature and heart rate), an emotional state, personal history, daily activity patterns, or even intentions and desires. All of this context is acquired from various kinds of sensors, which are distributed in a pervasive computing environment. Compared to traditional data in a database, context has much richer and more flexible structures and, thus, it is much more dynamic and error-prone. This requires a new data management model to represent context in a sharable and reusable manner and to resolve uncertainty by merging multiple conflicting sensor data streams. Such a model must also handle huge amounts of real-time data and provide storage mechanisms for both fresh and outdated context. The research in modeling context has developed from the simplest key-value pattern [33] to object-oriented models [34], logical models [35], graphical models [36], and ontology models [37]. After analysing the typical context models in pervasive computing, Strang [38] and Ye [39] regarded ontologies as the most promising technique to model and reason about context. In terms of software, the error-prone nature of context and contextual reasoning alters the ways in which we must think about decision and action. Any decision may be made incorrectly due to errors in input data, and we cannot blame poor performance on poor input data quality: we must instead construct models that accommodate uncertainty and error across the software system, and allow low-impact recovery.
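As a minimal illustration of merging conflicting sensor streams, the sketch below fuses several reports of one context attribute by confidence-weighted voting. This is one simple fusion rule among many, and the sensors and confidence values are invented for the example.

```python
from collections import defaultdict

def fuse_context(readings):
    """readings: list of (value, confidence) pairs for one context attribute.
    Returns the value with the highest total confidence and a normalised score."""
    scores = defaultdict(float)
    for value, confidence in readings:
        scores[value] += confidence
    best = max(scores, key=scores.get)
    return best, scores[best] / sum(scores.values())

# Three hypothetical sensors disagree about a user's location:
location_reports = [
    ("kitchen", 0.6),   # RFID reader
    ("kitchen", 0.5),   # motion sensor
    ("hallway", 0.3),   # stale location estimate
]
value, score = fuse_context(location_reports)
print(value, round(score, 2))   # kitchen 0.79
```

A real system would also age out stale readings and propagate the residual uncertainty to whatever decision consumes the fused value, in line with the low-impact-recovery principle above.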
5.4
Interaction
By interaction, we mean the way that a user interacts with an environment, with other people and with computers. As pervasive computing environments become
increasingly part of our everyday lives, people will start interacting with these environments more intensively. The way that people interact with each other is enriched with a hybrid mix of communication technologies and interaction devices, including multi-media and multi-modal technologies [30]. Interactive elements in an environment will range from small-scale embedded or wearable devices that focus on the individual to large-scale installations that focus on the general public. Each interactive element may bring about significant overhead and complexity in the users’ interaction, particularly if it has a different mode of interaction from other devices or fits poorly with users’ everyday activities. It has long been the objective of interface design to remove physical interfaces as a barrier between the user and the work s/he wishes to accomplish via the computer. Input devices like the keyboard, mouse and display monitor have been commercial standards for nearly fifteen years [40]. This type of physical interface is anything but transparent to the user and it violates the vision of pervasiveness without intrusion. As the vision becomes fulfilled and computational services are spread throughout an environment, advances are needed to provide alternative interaction techniques. Put another way, the essential quality of pervasive interfaces is that they be scrutable, in that they support the construction of predictive and explanatory mental models by users [41]. Proactivity and transparency should be balanced during the interaction. A user’s need for, and tolerance of, proactivity is likely to be closely related to his/her level of expertise during a task and to his/her familiarity with the environment. To strike the balance between proactivity and transparency, a system should be able to infer these factors by observing user behaviour and context.
We have to explore a range of new technologies that support interaction with, and through, diverse new devices and sensing technologies. These include gesture-based approaches that exploit movement relative to surfaces and artifacts, haptic approaches that exploit the physical manipulation of artifacts, as well as speech-based interfaces. We should also treat pervasive computing as part of the language and culture and open up powerful associations with other disciplines that handle activity, space and structure [30].
5.5
Security, Privacy and Trust
With the growth of the internet, security has become an important research topic, including the issues of authority, reliability, confidentiality, trustworthiness, and so on. More specifically, the security issue involves the cryptographic techniques used to secure the communication channels and required data, the assessment of the risk of bad things happening in an environment or specific situation, and the development of safeguards and countermeasures to mitigate these risks [5]. Security is a much more severe issue in pervasive computing, since pervasive computing is hosted in a much larger network that involves a huge number of different types of computing devices. These devices can be “invisible” or anonymous (that is, of unknown origin). They can also join or leave any network in an ad hoc manner. These factors greatly complicate security in pervasive systems.
Privacy is the claim of individuals, groups, or institutions to determine for themselves when, how and to what extent information is communicated to others [42]. The challenge is to determine how to control and manage users’ privacy, a problem that already exists in distributed and mobile computing. To provide personalised behaviour for users, a pervasive computing system needs to perceive all kinds of user context, including tracking user movement, monitoring user activities and exploring user profiles (like habits or interests) from browsed web pages. This massive amount of user information is collected in an invisible way and can potentially be inappropriately presented or misused. In this context, privacy control is not only about setting rules and enforcing them, but also about managing and controlling privacy adaptively according to changes in the degree of disclosure of personal information or user mobility [5]. In a pervasive computing environment, mobile entities benefit from the ability to interact and collaborate in an ad-hoc manner with other entities and services within the environment. Such ad-hoc interaction means that entities will face unforeseen circumstances ranging from unexpected interactions to disconnected operations, often with incomplete information about other entities and the environment [5]. A mechanism of trust is required to control the amount of information or resources that can be revealed in an interaction. Risk analyses evaluate the expected benefit that would motivate users to participate in these interactions. Trust management is needed to reason about the trustworthiness of potential users and to make autonomous decisions on who can be trusted and to what degree.
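One widely used way to ground such trust management is a beta-reputation estimate, in which trust in an entity is computed from counts of its positive and negative past interactions. The sketch below is illustrative only; the decision threshold is an arbitrary assumption, not a value from any cited system.

```python
def beta_trust(positive, negative):
    """Expected trustworthiness under a Beta(positive + 1, negative + 1) prior."""
    return (positive + 1) / (positive + negative + 2)

def may_interact(positive, negative, threshold=0.7):
    """Autonomous decision: interact only if estimated trust clears the threshold."""
    return beta_trust(positive, negative) >= threshold

print(beta_trust(0, 0))    # 0.5 -- unknown entity, neutral prior
print(may_interact(8, 1))  # True -- mostly good history (trust ~0.82)
print(may_interact(1, 3))  # False -- mostly bad history (trust ~0.33)
```

The neutral prior for unknown entities matches the ad-hoc setting described above, where decisions must often be made with incomplete information about the other party.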
6
Changing Perspective Through Augmented Materials
From the perspective of this book, of course, embedding hardware components into everyday artifacts, evolving from approaches such as those shown in Fig. 1.4, is an exciting prospect. This will have a significant impact on the design of hardware, since such devices must integrate into materials that would not usually be considered as substrates for integrated devices; furthermore, they must withstand treatment (such as going through a dishwasher cycle!) not normally inflicted on computing devices. From a software perspective, combining pervasive computing with truly embedded devices emphasizes many of the issues raised in this chapter. In particular, such systems have limited interface bandwidth, possibly coupled with a rich variety of sensors. They must therefore rely substantially both on local inference and on connections to the wider world to access non-local information. At a systems level, perhaps the greatest challenge is in the deployment, self-management, self-organisation, self-optimisation and self-healing of networks of embedded systems: the self-* properties identified within autonomic systems. Such properties apply to computing capabilities [43], but perhaps more significantly they also apply to the communications capabilities [44] of systems that must manage themselves with minimal human direction in very dynamic environments. Such self-reliance exacerbates the need for end-to-end management of uncertainty and
so magnifies the need for different programming approaches. Dobson and Nixon have argued [45] for models that embrace explicit modeling of context and the environment, which may then be used to derive communications behaviour and evolve it in a principled way over time. Other approaches, based on inherently self-stabilising algorithms, similarly promise to exploit, rather than conflict with, dynamic interactions and changing goals, although the realization of all these techniques remains elusive.
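As a toy illustration of self-stabilising behaviour, the sketch below has each node of a small network repeatedly adopt the largest identifier seen among its neighbours, converging to agreement from an arbitrary (even corrupted) initial state. The topology and values are invented, and a full self-stabilising leader election must also handle cases this sketch ignores, such as a previously agreed maximum that has vanished.

```python
def stabilise(neighbours, state, rounds):
    """neighbours: node -> list of adjacent nodes; state: node -> leader guess.
    Each round, every node keeps the maximum of its own and its neighbours' guesses."""
    for _ in range(rounds):
        state = {n: max([state[n]] + [state[m] for m in neighbours[n]])
                 for n in neighbours}
    return state

# A four-node ring; diameter 2, so two rounds suffice for agreement.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
corrupted = {0: 7, 1: 0, 2: 3, 3: 1}        # arbitrary faulty initial guesses
print(stabilise(ring, corrupted, rounds=2))  # every node converges to 7
```

The appeal for the embedded networks discussed above is that no initialisation step is required: correct global behaviour re-emerges from purely local rules after any transient fault.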
7
Conclusions
Pervasive computing systems offer the potential to deploy computing power into new areas of life not necessarily addressed by traditional approaches. It is important to note that many of these areas simultaneously address issues of wellness, social inclusion, disability support and other facets of major significance to society. The challenges remain daunting, however, at the hardware, software and systems levels. Pervasive systems must offer seamlessly integrated services in a dynamic environment, with little explicit direction and in the face of uncertain sensing and reasoning, and must do so over protracted periods without management intervention. Existing research has generated existence proofs that applications can be constructed in the face of these challenges, but it remains to be demonstrated whether more complex systems can be deployed. To address these problems, we need to broaden our discourse in certain areas and revisit long-standing design assumptions in others. Interfaces must be considered widely, and firmly from the perspective of user modelling and model formation. Traditional programming language structures and design methods do not obviously provide the correct abstractions within which to develop pervasive applications. Correct behaviour must be maintained even in the presence of known-to-be-faulty input data, where it may be more appropriate to refuse to act rather than act incorrectly (or it may not, depending entirely on the application). We are confident that the existing research strands will be broadened and deepened as these challenges are answered. Acknowledgements This work is partially supported by Science Foundation Ireland under the projects “Towards a semantics of pervasive computing” [Grant No. 05/RFP/CMS0062], “Secure and predictable pervasive computing” [Grant No. 04/RPI/1544], and “LERO: the Irish Software Engineering Research Centre” [Grant No. 03/CE2/I303-1].
References 1. M. Weiser. “The Computer for the 21st Century”. Scientific American, pp. 94–104. September 1991. 2. N. Streitz and P. Nixon. “The Disappearing Computer”. Communications of the ACM, 48(3), pp. 33–35. March 2005. 3. P. Nixon and N. Streitz. EU-NSF joint advanced research workshop: “The Disappearing Computer. Workshop Report and Recommendation”. http://www.ercim.org/EU-NSF/index.html. April 2004.
4. M. Jonsson. “Context shadow: An infrastructure for context aware computing”. Proceedings of the Workshop on Artificial Intelligence in Mobile Systems (AIMS), in conjunction with ECAI 2002, Lyon, France. 2002. 5. P. Nixon, W. Wagealla, C. English, and S. Terzis. “Privacy, Security, and Trust Issues in Smart Environments”. Chapter in Smart Environments: Technology, Protocols and Applications, pp. 220–240. Wiley, October 2004. 6. The INTEL Technology Research Center for Independent Living. http://www.trilcentre.org. 7. L. Coyle, S. Neely, G. Stevenson, M. Sullivan, S. Dobson and P. Nixon. “Sensor fusion-based middleware for smart homes”. International Journal of Assistive Robotics and Mechatronics, 8(2), pp. 53–60. 2007. 8. J. Bohn, F. Gartner and H. Vogt. “Dependability Issues of Pervasive Computing in a Healthcare Environment”. Proceedings of the First International Conference on Security in Pervasive Computing, Boppard, Germany, pp. 53–70. 2003. 9. G. Borriello, V. Stanford, C. Narayanaswami, and W. Menning. “Pervasive Computing in Healthcare”. Proceedings of the International Conference on Pervasive Computing, pp. 17–19. 2007. 10. K. Adamer, D. Bannach, T. Klug, P. Lukowicz, M.L. Sbodio, M. Tresman, A. Zinnen, and T. Ziegert. “Developing a Wearable Assistant for Hospital Ward Rounds: An Experience Report”. Proceedings of the International Conference for Industry and Academia on Internet of Things. 2008. 11. K. Farkas, J. Heidemann and L. Iftode. “Intelligent Transportation and Pervasive Computing”. IEEE Pervasive Computing, 5(4), pp. 18–19. October 2006. 12. R. Cunningham and V. Cahill. “System support for smart cars: requirements and research directions”. Proceedings of the 9th ACM SIGOPS European Workshop: Beyond the PC: New Challenges for the Operating System, pp. 159–164. 2000. 13. The JR-EAST Japan Railway Company Research & Development. The Smart Station Vision Project. http://www.jreast.co.jp/e/development/theme/station/station08.html 14.
D. J. Cook, M. Youngblood, E. O. Heierman, III and K. Gopalratnam. “MavHome: An Agent-Based Smart Home”. Proceedings of the First IEEE International Conference on Pervasive Computing and Communications, pp. 521–524. 2003. 15. B. Logan, J. Healey, M. Philipose, E. M. Tapia, S. S. Intille. “A Long-Term Evaluation of Sensing Modalities for Activity Recognition”. Proc. 9th International Conference on Ubiquitous Computing (Ubicomp 2007), pp. 483–500. 2007. 16. M. Satyanarayanan. “Pervasive Computing: Vision and Challenges”. IEEE Personal Communications, 8(4), pp. 10–17. August 2001. 17. D. Saha, A. Mukherjee. “Pervasive Computing: A Paradigm for the 21st Century”. Computer, 36(3), pp. 25–33. March 2003. 18. V. Stanford, J. Garofolo, O. Galibert, M. Michel, C. Laprun. “The NIST Smart Space and Meeting Room Projects: Signals, Acquisition, Annotation and Metrics”. Proc. ICASSP 2003, special session on smart meeting rooms, vol. 4, pp. IV-736–9. April 6–10, 2003. 19. J. P. Sousa and D. Garlan. “Aura: an Architectural Framework for User Mobility in Ubiquitous Computing Environments”. Software Architecture: System Design, Development, and Maintenance. Proc. 3rd Working IEEE/IFIP Conference on Software Architecture, Jan Bosch, Morven Gentleman, Christine Hofmeister, Juha Kuusela (Eds), Kluwer Academic Publishers, pp. 29–43. August 25–31, 2002. 20. R. Weisman. Oxygen burst. The Boston Globe, June 21, 2004. 21. M. Román, C. K. Hess, R. Cerqueira, A. Ranganathan, R. H. Campbell, and K. Nahrstedt. “Gaia: A Middleware Infrastructure to Enable Active Spaces”. IEEE Pervasive Computing, pp. 74–83. Oct–Dec 2002. 22. Cambridge. Sentient Computing. http://www.cl.cam.ac.uk/research/dtg/research/wiki/SentientComputing. 23. The Disappearing Computer Initiative. http://www.disappearing-computer.net.
24. K. Cheverst, N. Davies, K. Mitchell, A. Friday and C. Efstratiou. “Developing a Context-aware Electronic Tourist Guide: Some Issues and Experiences”. Proceedings of CHI 2000, pp. 17–24, the Netherlands. April 2000. 25. J. Barton, T. Kindberg. “The challenges and opportunities of integrating the physical world and networked systems”. Technical report HPL-2001-18, HP Labs. 2001. 26. The EasyLiving Project. http://research.microsoft.com/easyliving/. 27. J. Coutaz, J. Crowley, S. Dobson and D. Garlan. “Context is key”. Communications of the ACM, 48(3), pp. 49–53. March 2005. 28. The e-Gadgets Project. http://www.extrovert-gadgets.net/. 29. The Smart-Its Project. http://www.smart-its.org/. 30. D. Chalmers, M. Chalmers, J. Crowcroft, M. Kwiatkowska, R. Milner, E. O’Neill, T. Rodden, V. Sassone, and M. Sloman. “Ubiquitous Computing: Experience, Design and Science”. Technical report, UK Grand Challenges Exercise. February 2006. 31. H. Gellersen, A. Schmidt, M. Beigl. “Multi-Sensor Context-Awareness in Mobile Devices and Smart Artefacts”. Mobile Networks and Applications, 7(5), pp. 341–351. 2002. 32. T. Kindberg and A. Fox. “System Software for Ubiquitous Computing”. IEEE Pervasive Computing, 1(1), pp. 70–81. January 2002. 33. A. K. Dey. “Understanding and using context”. Personal and Ubiquitous Computing, 5(1), pp. 4–7. 2001. 34. A. Schmidt, M. Beigl, and H. W. Gellersen. “There is more to Context than Location”. Computers and Graphics, 23(6), pp. 893–901. 1999. 35. C. Ghidini and F. Giunchiglia. “Local Models Semantics, or Contextual Reasoning = Locality + Compatibility”. Artificial Intelligence, 127(2), pp. 221–259. 2001. 36. K. Henricksen, J. Indulska, and A. Rakotonirainy. “Modeling context information in pervasive computing systems”. Proceedings of the First International Conference on Pervasive Computing, pp. 167–180. Springer-Verlag, London, UK. 2002. 37. H. Chen, T. Finin, and A. Joshi. “An Ontology for Context-Aware Pervasive Computing Environments”.
Special Issue on Ontologies for Distributed Systems, Knowledge Engineering Review, 18(3), pp. 197–207. May 2004. 38. T. Strang and C. Linnhoff-Popien. “A context modeling survey”. Proceedings of the Workshop on Advanced Context Modelling, Reasoning and Management, Nottingham, England. September 2004. 39. J. Ye, L. Coyle, S. Dobson and P. Nixon. “Ontology-based Models in Pervasive Computing Systems”. Knowledge Engineering Review, 22(4), pp. 315–347. 2007. 40. G. D. Abowd. “Software Engineering Issues for Ubiquitous Computing”. Proc. 21st International Conference on Software Engineering, pp. 75–84. 1999. 41. M. Czarkowski and J. Kay. “Challenges of scrutable adaptation”. Proc. 11th International Conference on Artificial Intelligence in Education, pp. 404–407. IOS Press. 2003. 42. A. F. Westin. “Privacy and Freedom”. Bodley Head, 1970. 43. J. Kephart and D. Chess. “The vision of autonomic computing”. IEEE Computer, 36(1), pp. 41–52. January 2003. 44. S. Dobson, S. Denazis, A. Fernández, D. Gaïti, E. Gelenbe, F. Massacci, P. Nixon, F. Saffre, M. Schmidt and F. Zambonelli. “A survey of autonomic communications”. ACM Transactions on Autonomous and Adaptive Systems, 1(2), pp. 223–259. December 2006. 45. S. Dobson and P. Nixon. “More principled design of pervasive computing systems”. In Rémi Bastide and Jörg Roth, eds, Human Computer Interaction and Interactive Systems. LNCS 3425, Springer Verlag. 2004.
Chapter 2
Augmenting Materials to Build Cooperating Objects Kieran Delaney1, Simon Dobson2
Abstract The goal of pervasive computing systems and ambient intelligence (AmI) provides a driver to technology development that is likely to result in a vast integration of information systems into everyday objects. The current techniques for implementing such integration processes view the development of the system and object elements as very much separate; there is a significant inference load placed upon the systems to accommodate and augment the established affordances of the target object(s). This does not conflict with the ultimate vision of AmI, but it does limit the ability of systems platforms to migrate quickly and effectively across numerous varieties of object (in effect, creating a bespoke technology solution for a particular object). To begin the process of addressing this challenge, this chapter describes the proposed development of augmented materials. These are materials with fully embedded distributed information systems, designed to measure all relevant properties and provide a full knowledge representation of the material; in effect, the material would “know” itself and its current status. The basic premise is not new; many systems techniques have proposed and implemented versions of this idea. Advances in materials technology, system miniaturisation, and context-aware software have been harnessed to begin to prove the possibility of integrating systems directly into the fabric of artefacts (e.g. smart paper, etc). Where the augmented materials approach would differ from current approaches is in its focus on integrating networks of elements into materials and employing the actual material and object fabrication processes to programme them. Keywords Augmented Materials, Smart Objects, Micro-Electro-Mechanical Systems (MEMS), Sensors, Embedded Systems, Wireless Sensor Networks, Smart Dust, Ubiquitous Computing, Ambient Intelligence
1 Centre for Adaptive Wireless Systems, Cork Institute of Technology, Rossa Avenue, Bishopstown, Cork, Ireland 2 Systems Research Group, School of Computer Science and Informatics, UCD Dublin, Belfield, Dublin 4, Ireland
K. Delaney, Ambient Intelligence with Microsystems, © Springer 2008
1
Introduction
The future of information technology systems is being driven by visions like ubiquitous, or pervasive, computing [1, 2] and ambient intelligence, or AmI [3]. In the AmI vision, which may be simplistically viewed as a user-oriented evolution of pervasive computing, we surround ourselves with proactive, contextually effective and “always-on” interfaces that are supported by globally distributed computing and networking technology platforms. Such concepts are in many ways so broad that they have helped to prompt other new approaches, also represented through vision statements; in simple terms, these are software-centric (e.g. autonomic computing [4], proactive computing [5], etc) and hardware- or object-centric (Smart Dust [6], Smart Matter [7], Internet of Things [8], Disappearing Computer [9]). Logically, the goal of AmI provides a driver to technology development that will result in the close integration of information systems with what we consider to be everyday objects; research to build information systems into many different objects, such as furniture [10], clothes [11], textiles [12], vehicles [13], aircraft [14], roads [15] and even materials such as paint [16] and paper [17, 18], is already well underway. This research is in many ways about providing required bespoke solutions to specific application domains. Concepts such as the Internet of Things [8] seek to create methods to build active networks of these objects, which could potentially provide a foundation for realizing AmI. This encapsulates long-term research challenges framed in the context of hardware systems innovation by “hard problems” [19] that include reaching targets for size and weight, energy and the user interface. Thus, determining general approaches to creating these networks of embedded objects (and systems) becomes integrated with the methods employed to fabricate the objects themselves.
It is now becoming commonly understood that solving such problems requires effective points of convergence for relevant research disciplines. In fact, although creating and sustaining coherent multidisciplinary initiatives is typically difficult, they can bring significant direct success and build greater frameworks to facilitate new discoveries (for example, miniature RFID tags, created for applications such as asset tracking, are now being used to study the behaviour of wasps [20]) and lower barriers to broader implementation; new international collaborative programmes are driving a sharp reduction in the complexity associated with fabricating many types of objects, at least in prototype form [21, 22]. In Europe, a focal point of this type of multidisciplinary research has been the “Disappearing Computer” [9], a programme consisting of 17 collaborating projects [23]. The goal of this programme was to explore how people’s lives can be supported and enhanced through developing new methods for embedding computation in everyday objects, or artefacts, through investigating new styles of functionality that can be engineered, and through studying how useful collaborative behaviour can be developed in such interacting artefacts. A subsequent Disappearing Computer programme, entitled PALCOM [24], has focused further upon user-centric methods, developing the concept of palpable computing as an approach to “make technologies
a lot easier to understand, use and construct on-the-fly” [25]. Related initiatives have been replicated globally [26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 7], and are increasingly driving the research agenda. For example, European research projects on cooperating objects, launched in 2005 [36], have helped to seed significant new EU initiatives on networked embedded and control systems [37] and the new European Technology Platforms programmes, EPOSS, on Smart Systems Integration [38] and ARTEMIS, on embedded systems [39]. One of the central challenges in developing the effective and scalable immersive technologies necessary to realise AmI is implementing a methodology that genuinely integrates the fabrication of “smart”, cooperating objects on a physical level with their creation on a digital level. The research programmes driving Moore’s Law can support this by providing improved performance coupled with scope for greater miniaturisation. However, this alone will not be effective in realising seamless integration. New avenues of research in wireless sensor networks [40, 41, 42, 43, 44] can underpin the development of novel sensor node formats and help in the derivation of the heterogeneous architectures required to build infrastructures for effective cooperating objects. These architectures will need to support adaptive, reliable behaviour for sustained periods, particularly where the objects are deemed to be of high value. Thus, a level of autonomous capability is required. A comprehensive solution to the goal of creating unobtrusive integrated services in everyday environments requires a fully realised co-design process enabling specific types of distributed platform to be developed and physically immersed in everyday objects. The nature of these platforms requires both self-managing and long-life embedded systems behaviour to be successfully integrated into this system model. 
Thus, realising these systems to a certain extent represents a convergence of grand challenges. One such conceptual system is “Augmented Materials” [45], a challenge to develop “self-aware” materials to be used to compose augmented, or smart, objects.
2
The Augmented Materials Concept
We propose that physically embedded networks of distributed sensors and actuators can be systemically programmed to augment the behaviour of synthetic materials. We further propose that the implementation of typical material processing techniques can provide a natural programming construct (or language) for the creation and assembly of functionally effective augmented (or smart) objects and ‘intelligent’ everyday artefacts from these materials. The idea is that the materials are infused with systems capability that allows a digital representation to be developed at a selected formation stage (e.g. curing) and maintained thereafter as an ability to report on status; status here means the ability to effectively represent all non-negligible energy transitions taking place within, and at the interfaces of, the material. An effective implementation of this capability would yield a situation where any
K. Delaney, S. Dobson
subsequent materials processing sequences would behave as programming (or actuation) steps. In this context, we propose to develop new techniques for embedding proactive systems into everyday artefacts. The basic intention is to find a successful way to embed micro- and nano-sensing, processing and communications elements into a physical substrate. Through this approach, the material provides a physical means of “matching” to an environment, and a route to sensing it, and the physical processes acting on the object translate into interface actions or semantic cues to the embedded software.
2.1
The First Vision Statement
The following vision statement was originally created as an attempt to capture what might be required in order to fully realise the ‘invisible computing’ concept proposed by Weiser. Its first iteration acknowledged the nature of the assembly of electronics, but did not immediately address true feasibility in the context of current microelectronics devices and systems. In the early stages we chose to defer to Moore’s Law [46] as the means through which form and function would ultimately merge. Of course, that assumption, and the vision in itself, provides no easy insight into what adaptation of this concept may be practically realised (now and in the future). This was a process that commenced subsequently and is still underway; in fact, the greatest value of this first ‘story’ was in generating a common picture across relevant disciplines and enabling its feasibility to begin to be tested in the first place. The multidisciplinary interface generated has turned out to be one that is deep and growing. The concept of creating an augmented material is analogous to mixing additional component elements into an established material composite in order to affect a particular physical attribute (e.g. adding nanoscale elements to a ceramic in order to increase tensile strength). In augmented materials, the nodes, or elements, will be deployed into synthetic material through a typical mixing process, designed to distribute them randomly, but uniformly, within the material. Once the elements are uniformly distributed, or mixed, a process of self-organisation can take place. This involves the digital creation of networks of elements and the definition of the elements’ functions based upon relative location in the material and the most appropriate physical parameters for these elements to monitor. For example, a specific process to implement the network in augmented materials might follow this conceptual outline (see Fig. 2.1):
● System elements are introduced while the material is being made and the fabrication process forms part of the system programming sequence.
● The physical distribution of the system is primarily correlated to its nature and secondarily to its shape; the pattern of how each maps to the other is an important part of the system’s physical implementation model.
● In implementation mode, the system will operate on an interrupt-based characteristic, focused upon all energy transitions in the material; as a result the system node distribution will not necessarily be uniform.
2 Augmenting Materials to Build Cooperating Objects
Fig. 2.1 The first vision of augmented materials (panels: deploy autonomous sensors (< 5 mm³ modules); embed I-Seeds systems into materials; enable scaled distributed systems research)
● Most likely, the network will be systemically compartmentalized to specific parameters.
In the case of certain materials, where the formation process includes a liquid or viscous fluid stage, the elements could be designed and developed to be capable of limited 3-D motion. This would allow the physical self-organisation of the elements from uniformly random distributions to 3-D distributions that would optimise the elements’ capacity and performance in effectively measuring the physical parameters of the material. These distributions would become localised network groups that specialize in the measurement of specific physical stimuli for the entire material structure. This would contribute to the optimization of the element and network resources for the augmented material in question. The implementation of local network groups enables the initiation of a heterogeneous network from which the digital representation of the material could be composed, stored and communicated to an internet-level user-interface. This representation would be the first stage in implementing responsive capability; actuators are next distributed in the material in a manner correlated to its sensing architecture to create the capability for a controlled physical response to stimuli (for example automatically adjusting shape in response to a mechanical stress) or to a change in context (for example, a change in user-defined circumstances).
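As an illustration of this self-organisation step, the following sketch groups uniformly mixed elements into stimulus-specific network groups. The element records, field names and sensing sets are assumptions for illustration only; spatial sub-clustering within each group (the 3-D re-distribution described above) is omitted for brevity.

```python
from collections import defaultdict

# Hypothetical element records: a position in the material plus the set of
# stimuli each element is capable of measuring (all values are illustrative).
ELEMENTS = [
    {"id": 0, "pos": (0.1, 0.2, 0.0), "can_sense": {"strain", "heat"}},
    {"id": 1, "pos": (0.1, 0.3, 0.1), "can_sense": {"strain", "pressure"}},
    {"id": 2, "pos": (0.8, 0.9, 0.2), "can_sense": {"heat", "light"}},
    {"id": 3, "pos": (0.7, 0.8, 0.1), "can_sense": {"heat"}},
]

def form_local_groups(elements):
    """Assign each element to one network group per stimulus it can sense."""
    groups = defaultdict(set)
    for element in elements:
        for stimulus in element["can_sense"]:
            groups[stimulus].add(element["id"])
    return dict(groups)

groups = form_local_groups(ELEMENTS)
print(sorted(groups["heat"]))  # the 'heat' group spans elements 0, 2 and 3
```

Each resulting group is a candidate localised network that specialises in one physical stimulus for the whole material.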
Fig. 2.2 The variation of local network groups could depend upon the nature of the material in which these groups are embedded
This heterogeneous network is by definition strongly correlated to the physical behaviour of the material itself. One could expect that element location, function and the structure of the local network groups would vary significantly depending upon parameters such as the rigidity of the material (see Fig. 2.2). This would require system specifications, such as the resolution of the sensory devices, to also vary according to these parameters, with a resultant impact upon the computational, power and memory requirements of the elements themselves. The nature of the material would also impact upon the physical size of the elements and upon their design; as a result, new design tools may need to become increasingly complex.

The successful implementation of this concept of augmented material requires reliable communications between the local network groups to develop a full digital representation of the material’s status (i.e. ‘self-knowledge’). This global material network should be capable of adjusting its structure and stored knowledge according to events that affect the material. An example is when the material is cut to a new shape (or shapes); under these conditions the network should alter to a new structure (or structures) correlated to the new shape(s). Further, for the augmented materials concept to be relevant, the structure and data management actions of the embedded network should also adapt to the process of combining materials together to create smart objects. In this case, a networking action analogous to that of physically bonding two materials together should take place (see Fig. 2.3). This “digital bonding” should link the two material networks and extend the local network groupings across material boundaries to accommodate elements with common or similar tasks, and possibly alter the network structure (and/or element behaviour) based upon any relevant constraints developed by the fact that the materials are now physically bonded (e.g.
when a flexible material is bonded to a rigid material the resulting combination is rigid).

Fig. 2.3 The goal of an augmented material includes the successful implementation of “digital bonding”, or linking material networks in an object in a manner that is correlated to the actual physical infrastructure

Through the disciplines of physics and chemistry, developing a new material (or improving the performance of existing materials) is a long-term process that may take over ten years. Thus, fully realising augmented materials, where the physical and digital are so closely integrated, represents a significant and very long-term challenge in itself. However, the framework for implementing augmented materials in the future can be investigated now using current research and technology platforms; methodologies should be created to guide effective implementation of augmented materials as a practically integrated, physically heterogeneous infrastructure. In short, it is possible to create versions of augmented materials that roadmap the approach to this concept; in fact, practical examples exist that could be seen as direct building blocks [47, 48]. More specifically, numerous significant challenges in miniaturisation, sensing, networking and material integration will need to be addressed before the full concept becomes possible; however, roadmaps to achieve this are in progress [49]. As previously noted, the vision describes developments that have many parallels with work ongoing in wireless sensor networking. There remains the question of how existing technology platforms (i.e. internet-level systems, laptops, mobile phones, PDAs, etc.) may be merged effectively with this approach, both to create gateway capability for the integrated “material” networks and to provide seamless services between augmented objects and those objects that will not be readily accessible to these materials (e.g. wooden furniture, etc.).
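The “digital bonding” action described above can be sketched as a merge of two material networks. The data structures and the single constraint rule shown (flexible plus rigid yields rigid, taken from the example in the text) are illustrative assumptions, not a prescribed implementation.

```python
# Sketch of "digital bonding": when two augmented materials are physically
# bonded, their element networks are linked and local network groups with
# common tasks are extended across the material boundary.

def digital_bond(net_a, net_b):
    """Each network: {'groups': {stimulus: set(node_ids)}, 'rigid': bool}."""
    merged_groups = {}
    for stimulus in set(net_a["groups"]) | set(net_b["groups"]):
        merged_groups[stimulus] = (net_a["groups"].get(stimulus, set())
                                   | net_b["groups"].get(stimulus, set()))
    # Example physical constraint from the text: flexible + rigid -> rigid.
    return {"groups": merged_groups,
            "rigid": net_a["rigid"] or net_b["rigid"]}

flexible = {"groups": {"strain": {"a1", "a2"}}, "rigid": False}
rigid = {"groups": {"strain": {"b1"}, "heat": {"b2"}}, "rigid": True}
bonded = digital_bond(flexible, rigid)

print(bonded["rigid"])                      # the combination behaves as rigid
print(sorted(bonded["groups"]["strain"]))   # strain group spans both materials
```

The same merge would run in reverse when a material is cut: the network would partition its groups to match the new shapes.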
2.2
Evolving the Vision
The process of creating objects through combining materials will be part of the systems programming (or configuration) sequence. Digital representations of physical interfaces will be required as the physical interfaces are created. The representations will also be based on material behavioral parameters, this time associated with
the interactive effects between each material. The systems for each individual material will remain effectively autonomous, but will, through triggers from the assembly sequence, encompass the effects of the combination process. In some cases, where the effects on each material system are small, this will create an interface layer, which will behave rather like a new digital representation (a pseudo-material). In other cases, the impact will be large enough to fully alter each material’s behaviour, creating a coalescence of the two systems into a digital composite, which represents the status of both materials. The affordances of the object will be mapped through “material-system” to “material-system” association processes. This will enable an abstract representation that can be subsequently developed as a framework through the implementation (and evolution) of context-aware, dynamic and user-driven systems. The full efficacy of the augmented materials model will be tested by its capabilities in permitting the context layer to evolve well beyond predictable behaviour when the user is manipulating groups of such materially augmented objects. In this respect a direct evolution from current ‘context-awareness’ research is required.
2.2.1
Computation and Composition
An intelligent material must in many ways climb two learning curves simultaneously: the physical curve of miniaturisation, integration and subsystems assembly of components in small packages; and the informational curve of controlling how sensor information can be fused and used to drive higher-level processes. Both of these areas hinge on a strong notion of composition, and this is the unifying theme of the approach (see Fig. 2.4). At its lowest level, computation in a smart material consists of providing a suitable programming interface for use on the individual components. A good example
Fig. 2.4 Local sensing is aggregated and provides global representations of smart materials that can ‘cluster’ to create object level behaviour (stages: individual module local sensing and processing; smart material global representation; cluster of materials sensitive to external semantics)
would be to provide software abstractions for the various sensors, actuators and communication sub-systems. A single component might (for example) sense the tensile stresses in its vicinity and make these available through an inductive communications medium. There is a substantial challenge in providing a usable programming model for such a constrained device, but it is a challenge common to most embedded systems.

A smart material consists of a large number of components scattered through a substrate [50]. Each component has a local view of the material, reflecting only part of the material’s overall environment. The next stage of information integration is to synthesize the local capabilities of the individual components. Extending the example above, the material-level challenge is to fuse the local stresses into a view of the material’s current deformation under load. The material level can determine global information not available at the component level. The challenge is to allow this global information to be built up and maintained in a distributed and scalable manner, accounting for delays and failures at the component level. Using a distributed representation minimizes the consequences of local failures, improves parallelism and allows the use of cheaper individual components; it also introduces all of the standard distributed systems issues of concurrency and coupled failure modes, albeit on a smaller scale than is usually considered.

The final stage of integration is inter-material, the purpose of which is to provide the “external semantics” of the materials and their behaviour in the world. A good example is where the materials of two smart objects are brought close together.
At the component level this might manifest itself as a new ‘wireless’ communication channel between two previously separated component populations: this can be interpreted at the material level as proximity, possibly computing which material has moved to bring the two together, and their orientations. At the inter-material level this proximity might be interpreted as (for example) placing a smart book onto a smart table, which has semantic implications in terms of information transfer [51].

It is easy to see that there are challenges in composition across these stages. Components must be coordinated into a communications structure, and must provide their information and computation in a distributed and fail-soft architecture. Materials must be able to handle a range of information depending on the sensors available, and draw common inferences from different underlying evidence [52]. Clusters of these materials must compose to provide intelligible behaviour consistent with the expected properties of the artefacts they embody.

Most programming environments for ubiquitous computing make heavy demands on both power and computation – a good example is the Context Toolkit [53]. While the lessons of such systems are vital, the techniques used are inappropriate to augmented materials. Other approaches such as swarm intelligence [54] do not appear to be able (at least in their current state) to capture phenomena with high semantic content – although they offer insights into lower-level issues such as communication and discovery. ‘Amorphous’ computing paradigms, while tackling lower-level issues than are appropriate for smart materials, offer insights into resource discovery and ad hoc routing in compositional systems [55].
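The component-level programming interface described earlier, a single element sensing tensile stress and publishing it over its communications medium, might be sketched as follows. The class name, the canned reading, the units and the simulated channel are all illustrative assumptions, not part of the original design.

```python
# A minimal sketch of a component-level software abstraction: one embedded
# element exposes local sensing through a narrow interface and publishes
# changes over its (here simulated) inductive communications medium.

class SensorComponent:
    def __init__(self, comp_id, channel):
        self.comp_id = comp_id
        self.channel = channel        # stand-in for the shared medium
        self.last_reading = None

    def sense_tensile_stress(self):
        # A real element would read a strain transducer here; a canned
        # value is returned purely for illustration.
        return 12.5  # MPa (illustrative units)

    def step(self):
        reading = self.sense_tensile_stress()
        if reading != self.last_reading:   # publish on change only
            self.channel.append((self.comp_id, "stress", reading))
            self.last_reading = reading

channel = []
component = SensorComponent("n0", channel)
component.step()
component.step()        # no change, so no second message is published
print(channel)          # [('n0', 'stress', 12.5)]
```

Publishing only on change keeps the constrained device’s communication budget small, in the spirit of the interrupt-based behaviour described for the embedded elements.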
Event-based systems [56] are widely used in wide-scale distributed systems, where they provide loose coupling between distributed components to help tolerate failures and variable latencies. As mentioned above, we regard a ‘smart’ material as a “widely distributed system writ small”, in the sense that the properties observed are more similar to those of wide-area systems than traditional embedded systems. This means that techniques used on wide-area systems (for example location-sensitive event infrastructures) can be usefully re-applied. Event-based systems have known problems as programming environments, however, especially in representing complex algorithms involving shared knowledge. A compromise is to use events to maintain a distributed model of context, which is then queried and accessed as a unit. Shared models have been used extensively, for example as a component in n-tier enterprise architectures [57]. We may adapt these architectures to provide lightweight distributed representation and querying in the style of distributed blackboard systems.

Inter-material composition sits comfortably in the domain of context-aware systems, in which the major issues are in task modeling [58] and knowledge representation. Materials need a clear model of “what they do” at the highest level that relates closely to “what they sense” at the lower levels. In many ways the current research can be likened to the challenges of larger-scale compositional environments, for example [59], in the sense of combining low- and high-level information cues and utilising dynamic populations of resources. This means that the approach can both build on and influence work in the wider community.
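The compromise described above, events maintaining a shared context model that is then queried as a unit, can be sketched in a few lines. The event schema and class names are illustrative assumptions in the style of a distributed blackboard.

```python
# Events update a shared context model; consumers query the model as a
# unit rather than handling individual events, decoupling them from event
# timing and ordering.

class ContextModel:
    def __init__(self):
        self.state = {}

    def on_event(self, source, key, value):
        # Each event overwrites the (source, key) entry in the model.
        self.state[(source, key)] = value

    def query(self, key):
        # An aggregated view across all sources for one context key.
        return {src: v for (src, k), v in self.state.items() if k == key}

model = ContextModel()
model.on_event("node3", "temperature", 21.0)
model.on_event("node7", "temperature", 23.5)
model.on_event("node3", "temperature", 21.5)   # a later event overwrites

print(model.query("temperature"))  # {'node3': 21.5, 'node7': 23.5}
```

A querying component never sees individual events, only the current model, which is the loose coupling that makes the approach tolerant of delays and failures.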
2.3
A Systems Description for Augmented Materials
2.3.1
The Local Systems Architecture
The development of the augmented materials network will be based upon defined local and global systems architectures. The local systems architecture will be represented by small sets of nodes designed to measure physical parameters at specific locations in the material. The systems description could be determined by the development of two element categories – sensing elements and aggregating elements – which are evenly distributed through the substrate. These two classes gossip, but in different ways:
● Sensor elements gossip with nearby aggregating elements by sending changes in their local states, which are then aggregated to provide a summary of the state of the local area
● Aggregate elements gossip with other aggregators, but exchange management information about which aggregate is summarizing a locale
The global systems architecture will collate and represent this local data at a material level.
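The two gossip styles above can be simulated minimally as follows. The class, the averaging summary and the locale directory are illustrative assumptions; a real implementation would of course be distributed across physical nodes.

```python
# Two gossip styles: sensor elements push state changes to a nearby
# aggregator, which summarises its locale; aggregators exchange management
# information about which aggregator covers which locale.

class Aggregator:
    def __init__(self, agg_id, locale):
        self.agg_id = agg_id
        self.locale = locale
        self.readings = {}
        self.directory = {locale: agg_id}   # management information

    def receive(self, sensor_id, value):
        # Sensor-to-aggregator gossip: a changed local state arrives.
        self.readings[sensor_id] = value

    def summary(self):
        # Summary of the local area (here a simple mean of readings).
        values = list(self.readings.values())
        return sum(values) / len(values) if values else None

    def gossip_with(self, other):
        # Aggregator-to-aggregator gossip: merge locale directories.
        merged = {**self.directory, **other.directory}
        self.directory = other.directory = merged

a1 = Aggregator("agg1", "locale-A")
a2 = Aggregator("agg2", "locale-B")
a1.receive("s1", 10.0)      # sensor elements gossip state changes
a1.receive("s2", 14.0)
a1.gossip_with(a2)

print(a1.summary())         # 12.0, the summary of locale-A
print(a2.directory)         # both locales are now known to agg2
```

The directory merge is what lets the global systems architecture later collate local summaries into a material-level representation.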
2.3.2
The High-level Systems Description
An artefact may be constructed from an augmented material at four distinct levels (see Fig. 2.5). At the physical level, the material exhibits certain structural properties such as stiffness, ductility, conductivity and so forth, which condition the physical applications for which it can be used. At the element level, each processing ‘node’ in the material functions as an independent component capable of local sensing, local processing and communications with nearby elements. At the material level, the individual elements co-ordinate their actions to present a global behaviour; at this level the local sensing information is typically integrated into a global view of the material. At the artefact level, the material can “understand” its function and relationships with other augmented materials in its vicinity. A good example here might be offered by building materials, where compositions of individual elements with embedded sensing and actuation could be used to significantly improve the capabilities of “adaptive” architecture [60] by combining physical properties sensed at the materials level (temperature, wind induced stresses) with artefact-level goals (heat retention, stability).

A further categorisation of the construction of artefacts from augmented materials is required for practical reasons; many viable cooperating objects will be composed from materials that are not directly conducive to this physical integration process or to the concept. Thus, an approach to integrating such forms into the augmented material construct is required. An artefact that is composed from a single augmented material, or from a number of shapes formed from that single material, is described as being intrinsically augmented. An artefact that is composed from a number of differing (but fully developed) augmented materials is described as being compound augmented.
An artefact that combines an augmented material with a physically connected computational or sensory capability in the form of a dedicated module is described as a hybrid augmented artefact. An artefact that applies the augmented material system
Fig. 2.5 A description of (a) the hierarchical levels within the augmented materials system (physical, element, material, artefact) and (b) a categorisation of artefacts (intrinsic, compound, hybrid and pseudo-augmented) made in full or in part from an augmented materials process
and networking approach through computational or sensory modules physically distributed (and bonded) to one or more of its non-augmented material layers is known as a pseudo-augmented artefact (see Fig. 2.5).
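For illustration only, the four systems levels and the four artefact categories above could be captured as simple enumerations; the encoding is an assumption, not part of the original text.

```python
from enum import Enum

class SystemLevel(Enum):
    PHYSICAL = 1   # structural properties: stiffness, ductility, ...
    ELEMENT = 2    # independent sensing/processing/communicating nodes
    MATERIAL = 3   # elements co-ordinate into a global view
    ARTEFACT = 4   # the material "understands" its function and relations

class ArtefactCategory(Enum):
    INTRINSIC = "single augmented material, or shapes formed from one"
    COMPOUND = "several differing augmented materials"
    HYBRID = "augmented material plus a dedicated attached module"
    PSEUDO = "modules distributed on non-augmented material layers"

print([level.name for level in SystemLevel])
print(ArtefactCategory.HYBRID.value)
```

Such a classification could, for instance, let a toolkit select an integration strategy per artefact category.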
3
Previous Research
3.1
Top-Down and Bottom-up Methodologies
Numerous research domains are associated with increasing the functional capabilities of material systems. These domains may be viewed as taking ‘top-down’ or ‘bottom-up’ approaches. In the domain of material research, dominated by a ‘bottom-up’ perspective, the influence of biological systems is having its impact; a particular example is that of self-repairing polymeric composites [61]. In this case, a healing capability is imparted through “incorporation of material phases that undergo self-generation in response to damage”. A related research activity is that of self-regulating materials [62]. These can be created by using magnetostrictive particles as “tags” in a host composite material; their interrogation and response indicates the location of damage sites. These techniques are clearly relevant to augmented materials and exhibit a form of autonomic behaviour that would have clear value when integrated into larger intelligent systems. Logically, as materials demonstrate this type of increasing versatility, an infrastructure must be created to merge digital and physical behaviour and to harness this potential.

The concept of immersing the computer more fully into the fabric of our daily lives, as represented in the Disappearing Computer programme [9], is central to achieving a genuine representation of AmI. Smart systems development in such programmes is typically dominated by a ‘top-down’ or systems-oriented perspective. This particular programme applied a multi-project approach to developing solutions for diffusing information technology into everyday artefacts; a focus on physical integration was provided by a small selection of projects, such as FiCom [63], and a focus on systems by the majority of projects, including Smart-ITs [64], Extrovert Gadgets [65] and GLOSS [66]. Although dominated by a ‘top-down’ perspective, specific strands of research took a more targeted approach, seeking to reinvent hardware systems in more radical forms.
The FiCom, or Fibre Computing, project investigated new forms of silicon substrate to provide literally flexible platforms that could be more effectively integrated into many kinds of objects. Other research in creating novel silicon substrates has also been undertaken separately, including the implementation of spherical silicon circuits and transducers [67]. In these cases, the inherent potential of silicon is being investigated, though not necessarily yielding results that directly accelerate the realisation of AmI. Research in this area has uncovered intriguing possibilities for using silicon in application domains such as the development of smart bandages. Perhaps more effective, though no less inventive, is the increased
investigation, and use, of thin silicon, specifically for smart card technologies and high-density 3-D integration [68, 69, 70]. Numerous approaches have proved successful and are a focus of system-in-a-package solutions for electronics applications [71, 72, 73]. The progress of silicon technologies under Moore’s Law [46] has also underpinned the implementation of system-on-a-chip solutions. These silicon-oriented approaches to interconnection and packaging offer significant potential to develop new interfaces between materials and electronics systems and, as such, represent a key investigative medium through which the ‘top-down’ or ‘bottom-up’ approaches may ultimately merge.
3.2
Research on Hybrid Systems Integration
Physical integration techniques, related to the system-in-a-package platform, are under investigation within the field of microelectronics itself. The topic of integral passive components [74] is active; embedding passive components into package and board substrates has the potential to minimize the assembly overhead for low-value passive components and thus reduce overall costs. In many respects, it also reflects the challenges inherent in creating any integrated technology platform; managing material behaviour invariably requires trade-offs in the design of fabrication and assembly techniques in order to create a balance between functionality and physical integrity [75]. The system-in-a-package platform has provided a driver for numerous fabrication and assembly techniques, including flex technologies [76], 3-D multichip systems [77], flip-chip technologies for circuits and sensors [78] and others; these support implementation of embedded systems and, ultimately, AmI. Some of these techniques have progressed to, or influenced, other vibrant research areas, such as wireless sensor networking, where there is a driver, influenced by the vision of smart dust [6], for the miniaturisation of sensor nodes – current approaches vary but include using innovative thin silicon fabrication and 3-D assembly techniques [79]. These techniques are important to the establishment of miniaturised nodes that can be composed as embedded elements for an augmented material. An inherent part of the development of augmented materials is distributed, embedded sensing and, ultimately, actuation. Previous research does provide certain insights into approaches that may be suited to the distributed nature of an augmented material system. For example, an investigation on compliant systems [80] has provided a mathematical framework for distributed actuation and sensing within a compliant active structure.
The method, which synthesizes optimal structural topology and placement of actuators and sensors, was applied to a shape-morphing aircraft wing demonstration with three controlled output nodes. Other studies, focused within the domain of electronic packaging, investigate sensor devices that could be adapted to monitor material behaviour [81]. They also highlight the negative impact of embedding electronics in polymeric materials [82] and the necessity for care in the design of both the sensor/aggregator element substrates and in the integration process itself.
3.3
Wireless Sensor Networks
Many of the more practical wireless sensor network (WSN) initiatives describe physically large and heterogeneous systems based upon specific drivers, such as the EU water-framework directive [83]. Other areas provide clear potential for significant markets. The progress of RFID technology is particularly interesting in this regard. The emergence of cost-effective tag production technologies [84, 85] has opened select exploitation routes and avenues for innovation that relate closely to the immersive concepts of the Disappearing Computer (e.g. tag readers embedded in shelves progressing to a “smart shelf”), expressed as an “internet of things” [8]. The nature of WSN research, and its numerous challenges, has necessitated the development of a toolkit approach [86, 87] for supporting investigative programmes. This approach is not only useful in sensor networking, but a requirement in studying the architectural requirements for the effective, autonomous operation of distributed embedded systems; for this very reason, toolkits were developed as part of the Disappearing Computer programme in projects such as “Smart-ITs” [63] and “Extrovert Gadgets” [64]. Specific toolkits [88] can be evolved in the Augmented Materials programme to implement practical investigations of local sensor (node- or element-level) and global (network-level) material behaviour. Autonomous sensor platforms, including inertial sensor systems [89], wearable sensors [90] and environmental sensors [91], currently exist in a wireless sensor node form factor; these are suited to providing a foundation for investigative studies on the architectures of the distributed, embedded elements. The ability to enable the control of certain aspects of the behaviour of autonomous systems is particularly important.
Emerging subsystems, such as modular robots [92, 93], self-sensing sensors and actuators [94, 95] and reconfigurable wireless sensor nodes [96] are very relevant to this approach and can be integrated with the toolkits to develop the simplest feasible sensing and computational elements. Current research into the development of chemical sensor arrays [97] can also provide insight into the challenges and opportunities for employing distributed sensing techniques using suitable element architectures.
3.4
Related R&D Concepts
3.4.1
Smart Floors, Smart Matter and Digital Clay
Once concepts and grand challenges become established in some manner, they tend to engender new iterations; thus, either directly or otherwise, they are constantly evolving. From Weiser’s vision of ‘invisible computing’ and the growth of ubiquitous/pervasive computing through ambient intelligence, to more recent ideas, for example ‘Everyware’ [98], there is a constant flux around creating a deeper understanding of the future technologies that should exist in a knowledge society.
An underlying theme in these concepts, that of unobtrusive and intuitive interaction, has provided a driver for hardware- or object-oriented concepts, such as physical computing [99], haptic computing [100], sentient computing [101] and tangible bits [102]. In this context, ‘objects’ of particular importance in our everyday environment have become the focus of augmentation research. One of many possible examples is the smart floor. An avenue of recent research in this domain has yielded the ‘magic carpet’ [103], which comprises a grid of rugged piezoelectric wires hidden under a 6 × 10 foot carpet, coupled with Doppler radars to measure the upper body motion of users. The ‘Litefoot’ system [104] is a 1.76 meter square, 10 centimeter high slab, filled with a matrix of optical proximity sensors. The ‘smart floor’ [105] used load cells, steel plates, and data acquisition hardware to gather ground reaction force (GRF) profiles and non-invasively identify users to an accuracy of over 90%. A pressure-sensitive floor system [106] was developed as a high-resolution pressure sensing floor prototype, with a sensor density of one sensor per square centimeter, designed to support multimodal sensing; the design integrated closely with video, audio and motion-based sensing technologies. This is indicative of the benefits of creating systems that support interoperability, as is highlighted in research on networked embedded systems and cooperating objects. In fact, individual objects typically provide incomplete, or narrowly defined, services. Thus, objects should access broadened capabilities through cooperating; as systems that contain sensors, controllers and actuators, they should communicate with each other and be able to achieve a common goal autonomously.
This is inherent in the underlying platforms required for ubiquitous computing, and to some extent it has been extrapolated through the concept of the ‘internet of things’ [8], where the principles that created the internet are being employed to investigate how networks of everyday objects can reach an equivalent level of scale, computing power and, of course, effectiveness. Networking and distributed computation can also be built into individual ‘objects’ to address aspects of their performance. The Z-tiles project [107] developed another form of smart floor by building a self-organising network of nodes, connected together to form a modular, flexible, pixelated, pressure-sensing surface. This project is particularly interesting in relation to the concept of augmented materials because it utilizes a distributed networking approach that offers performance and scalability. In particular, as individual Z-tiles provide building blocks for both the physical floor space and for the underlying sensor and computational network, it is much closer to an instantiation of aspects of the augmented material concept than many other integrated sensing techniques. A number of other research areas also correlate with aspects of this concept. The emergence of nanotechnology and its potential, as crystallized in visions like ‘smart dust’ [6], has prompted a number of concepts based upon the merger of matter with electronics. Starting with the identification and use of materials with ‘smart’ properties, such as shape-memory alloys, and continuing with the evolution of ‘intelligent materials’ as a topic of study, the concept of ‘Smart Matter’, introduced by PARC in the early 1990s [7], became a focal point, giving specific focus to an evolution of the above approaches. The concept, which “consists of many sensors, computers and actuators embedded within materials”, targeted MEMS specifically, and linked itself to nanotechnology, distributed management techniques and ultimately to distributed control, proposing a multihierarchy [108] as a control organization supporting system stability. This initiative remained active until around 2000 and then devolved into research activities on wireless sensor networks, MEMS technologies and robotics. The activities on robotics evolved the concept into a form of “digital clay” [109], formed from stripped-down modular robots – the use of the term ‘clay’ conveys the intention that the modular robots have no active coupling or motion features and any adjustments in assembly must be made by the user.
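The self-organising behaviour attributed to Z-tiles above can be illustrated with a simplified sketch (a hypothetical model, not the project’s actual firmware): each tile knows only which tile is plugged into each of its edges, yet a consistent coordinate map for the whole surface can be derived by flooding outward from one arbitrary origin tile.

```python
from collections import deque

# Simplified sketch of Z-tiles-style self-organisation (hypothetical model):
# each tile knows only its directly connected neighbours, yet the surface
# as a whole derives a consistent coordinate map.

OFFSETS = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}

def self_organise(links, origin):
    """links: {tile_id: {edge: neighbour_id}}; returns {tile_id: (x, y)}
    by breadth-first flooding from an arbitrary origin tile at (0, 0)."""
    coords = {origin: (0, 0)}
    queue = deque([origin])
    while queue:
        tile = queue.popleft()
        x, y = coords[tile]
        for edge, neighbour in links.get(tile, {}).items():
            if neighbour not in coords:
                dx, dy = OFFSETS[edge]
                coords[neighbour] = (x + dx, y + dy)
                queue.append(neighbour)
    return coords

# A 2 x 2 patch of tiles: A-B form the bottom row, C-D sit above them.
links = {
    "A": {"east": "B", "north": "C"},
    "B": {"west": "A", "north": "D"},
    "C": {"south": "A", "east": "D"},
    "D": {"south": "B", "west": "C"},
}
coords = self_organise(links, "A")
```

The same flood-fill idea is what lets such a surface remain modular: adding or removing a tile only changes local link tables, and the map can simply be recomputed.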
3.4.2
Programmable Matter, Claytronics and Paintable Computing
Amorphous computing [54] focuses upon investigating ‘system-architectural, algorithmic, and technological foundations for exploiting programmable materials’, where ‘atoms’ are based upon an IC with an on-board microprocessor, program memory and a wireless transceiver, miniaturised to the size of a small match head and powered parasitically. The term programmable matter as outlined here refers to one interpretation: that of a collection of “millimeter-scale” units that integrate computing, sensing, actuation and locomotion mechanisms. It has also been used to describe methods for “exploiting the physics of materials for computation” [110], which resulted in the creation of the field programmable matter array; liquid crystals are cited as a potential example of this type of matter [111]. In the context of augmented materials, there is a clear overlap when one considers programmable matter to include materials that incorporate large numbers of distributed programmable elements that react to each other and to the environment surrounding the material. Fully realised, it evolves into a material whose properties can change on demand, enabling the material to programme itself into any form. Here, the programmable matter in question would be based upon artificial atoms, of which the quantum dot is the most cited example, and, hypothetically, would be composed of structures such as ‘Wellstone’ – a nanoscale thread covered with quantum dots [112]. The focus of these research topics is very much in the domain of nanotechnology. However, specific concepts that build upon programmable matter can be related to the creation of augmented materials, particularly in providing routes to novel, highly miniaturised elements and in providing scope for analysis of system architectures that cross micro-nano-scale boundaries while maintaining connectivity with the established heterogeneous infrastructure.
One such concept is that of claytronics [113], which explores methods to reproduce moving physical objects. A similar concept, known as ‘utility fog’ (i.e. polymorphic smart materials [114]) has also been described. Claytronics is based upon the idea of dynamic physical rendering, where programmable matter is used to mimic a physical artifact’s original shape, movement, visual appearance, sound and tactile qualities. The programmable element in this case is the claytronic atom, or catom; this is a mobile, reconfigurable computational unit that has no moving parts,
but is capable of communicating with, and sticking to, other catoms. According to the concept, power would typically be externally sourced through a table, or similarly suitable support artifact. The core of the concept is in creating convincing physical moving 3-D replicas of people or objects, including tangible and convincing representations of attendees at virtual meetings; a case study on a 3-D fax machine using claytronics is described in [115]. One suggestion derived from this case study was the use of catoms of different sizes, where a skeleton of the object, or entity in question, is created using larger modules and smaller modules then selectively latch onto this skeleton to complete the ‘copy’. This suggests a form of heterogeneity that can be harnessed by other approaches to optimise performance; in this context, the table structure and larger catoms could be formed as load-bearing, sensor-aware augmented materials that act to ensure the completeness of the rendering process, while the miniature catoms fill in the detail of the ‘copy’. A second concept is paintable computing [116]. This is described as ‘an agglomerate of numerous, finely dispersed, ultra-miniaturized computing particles; each positioned randomly, running asynchronously and communicating locally’. In some ways, this is close to the description of the formative processes for pure augmented materials, as both approaches describe elements, or particles, that are dispersed randomly and are capable of local communication. The physical test-bed developed as part of the paintable computing investigation is also of interest: the Pushpin computing wireless sensor network platform [117]. This is a multihop wireless sensor network of 100 nodes built onto a tabletop of one square meter in area. The nodes have the form factor of a pushpin and can be inserted into a large, specially constructed power plane.
The pushpins, which are easily moved across the power plane, use IR transceivers to communicate locally and have a modular, stacked architecture that permits high levels of reconfigurability; this essentially creates a 2-D sensor layer suitable for the study of numerous distributed ad-hoc sensing applications. The ability of the system to determine relative location, as is required with paintable computing and augmented materials, makes this a highly flexible emulation tool. Further work was performed by the same researchers [118] on rich sensory systems (i.e. electronic skin) through the development of a sphere tiled with a multimodal sensor/actuator network, known as a TRIBBLE (Tactile Reactive Interface Built by Linked Elements). A similar approach to Pushpin Computing was adopted within the Pin&Play project [119]. The nodes are attached to a physical medium in the same manner as the pushpins; in this project, the board was built using multiple mesh layers to provide a medium for both data and power, permitting the network to be developed in this way. Implementing network connectivity using surfaces is an approach that is also employed in the ‘Networked Surfaces’ concept [120]. Objects are augmented with specific conducting paths which, when the objects are physically placed on the surface, enable connection through a handshaking protocol; the protocol assigns functions such as data or power transmission to the various viable conducting paths and thus creates the network. Both of these concepts provide insight into enabling methodologies for networking in augmented materials at a prototype level – the challenge in this context is to evolve the approach from 2-D surfaces to 3-D embedded elements.
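The role-assignment idea behind such a handshake can be sketched loosely as follows. This is an illustration of the general principle only, not the published Networked Surfaces protocol: when an object is placed down, the surface probes each contact the object happens to make and assigns a required function to each viable conducting path.

```python
# Loose illustration of a surface-handshake principle (not the published
# Networked Surfaces protocol): probe each contact an object makes and
# assign required functions (power, ground, data) to the viable paths.

REQUIRED_ROLES = ["power", "ground", "data"]

def handshake(contact_quality, threshold=0.5):
    """contact_quality: {pad_id: measured link quality in [0, 1]}.
    Returns {role: pad_id} if enough viable paths exist, else None."""
    viable = sorted((pad for pad, q in contact_quality.items() if q >= threshold),
                    key=lambda pad: -contact_quality[pad])
    if len(viable) < len(REQUIRED_ROLES):
        return None                      # object not connectable as placed
    # The best contacts are given the most critical roles first.
    return dict(zip(REQUIRED_ROLES, viable))

# An object placed on the surface makes four contacts of varying quality;
# the weak contact (pad2) is simply left unused.
assignment = handshake({"pad1": 0.9, "pad2": 0.2, "pad3": 0.7, "pad4": 0.6})
```

Because roles are negotiated per placement rather than fixed by geometry, the object can be put down in any position or orientation that yields enough viable contacts.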
3.4.3
Cooperating Objects and Spimes
The evolution and use of smart objects and appliances, such as those employed for ‘networked surfaces’, has now progressed to gadget-level operability; the feasibility of integrating sensing, actuation and computation into objects has been amply demonstrated. Further, the potential of smart objects has been demonstrated to the extent that effectively networking these objects together (i.e. cooperating objects) is now the primary research challenge. A full conceptual construct, called the Spime, has been developed to investigate what this might ultimately become. The term Spime has been proposed to describe an object that can be tracked through space and time throughout its lifetime [121]. Specifically, it should be possible to track the entire existence of an object, from before it was made, through its manufacture, its ownership history and its physical location, until its eventual obsolescence and the re-use/recycling process for new objects. It requires, at least, the convergence of six emerging technologies:
1. A small, inexpensive means of remotely and uniquely identifying objects over short ranges, for example RFID technology
2. A mechanism to precisely locate something on Earth (e.g. GPS)
3. A way to mine large amounts of data, similar to internet search engines
4. Tools to virtually construct nearly any kind of object, similar to computer-aided design
5. Processes to rapid-prototype virtual objects into real ones, such as three-dimensional printers
6. Effective object life-cycles: ‘cradle-to-cradle’ life-spans for objects and cheap, effective recycling
The Spime offers an appropriate framework for the type of object that should be constructed from augmented materials; thus, these technical requirements must be supported from within the fabric of an augmented material if a genuine level of ‘self-awareness’ is to be developed in the material composite.
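In software terms, the Spime’s defining property – trackability through space and time – reduces to an append-only event history keyed by a unique identity. A minimal sketch follows; the class and field names are illustrative only, and the identity string merely follows the general shape of an RFID-derived EPC identifier.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Minimal sketch of a Spime-style lifecycle record (illustrative names):
# a unique identity plus an append-only history of
# (timestamp, location, event) entries from design through recycling.

@dataclass
class SpimeRecord:
    uid: str                 # unique identity, e.g. RFID/EPC-derived
    history: List[Tuple[float, Tuple[float, float], str]] = field(default_factory=list)

    def log(self, timestamp, location, event):
        """Append one tracked event; nothing is ever overwritten."""
        self.history.append((timestamp, location, event))

    def trace(self):
        """Return the object's life story in time order."""
        return sorted(self.history)

chair = SpimeRecord(uid="urn:epc:id:sgtin:0614141.812345.6789")
chair.log(1.0, (51.90, -8.47), "designed")      # virtual construction (CAD)
chair.log(2.0, (51.90, -8.47), "fabricated")    # rapid prototyping
chair.log(3.0, (51.88, -8.50), "sold")          # ownership history
chair.log(4.0, (51.88, -8.50), "recycled")      # cradle-to-cradle end of life
```

Each of the six technologies in the list above maps onto part of this record: identification supplies `uid`, location sensing supplies the coordinates, and data mining operates over collections of such histories.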
However, for reasons of cost and fabrication, it is impractical to disperse ‘heavy’ electronics subsystems, such as GPS trackers, within augmented materials. Thus, in seeking an optimization of the full electronic system it will be necessary to develop a heterogeneous format, which embeds individual GPS-like subsystems at the object-level and which employs augmented materials, if constructed effectively, as a distributed sensor/actuator foundation in the realization of Spime-like behaviour. This follows the description of the hybrid augmented artifact in section 2.3.2.
4
Practical Augmented Materials

4.1
Miniaturised Sensing Modules, or Elements
A core part of the successful development of an augmented material is to build suitable networkable sensor modules. These modules would be capable of (a) localised
sensing of relevant physical parameters, (b) local management of the data collected from sensors, (c) self-management in terms of performance, lifetime and long-term data integrity and (d) communication with “nearest neighbor” modules, or local aggregators, to manage the data effectively. Most, if not all, of these targeted capabilities are the subject of current research in wireless sensor networking programmes throughout the world (for example, within the node-building programme at Tyndall National Institute [122], which is addressing the challenge of implementing miniaturised, 3-D and planar wireless sensor nodes). The programme operates across a number of node sizes, but focuses mostly on an architecture and assembly process for nodes with typical mean dimensions of 10mm and 5mm. In line with the goal of ‘smart dust’, the research programme also seeks to push the boundaries of microelectronics by building a very highly miniaturised node, of the order of 1mm in dimension; novel thin-silicon and thin flexible circuit assembly techniques must be employed to achieve this. The priority in developing modules with a size at, or below, 5mm is in providing adequate miniaturisation techniques for the purpose of transposing effective elements of the system’s functionality. In Fig. 2.6, a conceptual schematic (developed to support the original augmented materials concept) is shown to illustrate how an augmented material sensor element might be fabricated as a 5mm module. The hardware miniaturisation process applies numerous enabling technologies for high-density integration, where the packaging material is largely removed and the targeted form factor of the modular units is a stack of ICs.
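The targeted capabilities (a)-(d) can be summarised in a behavioural sketch. The interfaces below are hypothetical, not any real node’s firmware: each element samples locally, reduces its own data, watches its energy budget, and pushes only compact summaries to nearest neighbours or a local aggregator.

```python
# Behavioural sketch of element capabilities (a)-(d); the interfaces are
# hypothetical and not any real node's firmware.

class SensorElement:
    def __init__(self, element_id, energy=100.0):
        self.element_id = element_id
        self.energy = energy            # (c) self-management: lifetime budget
        self.readings = []              # (b) local management of sensed data
        self.neighbours = []            # (d) nearest-neighbour links only

    def sense(self, value):
        """(a) localised sensing of a physical parameter."""
        self.readings.append(value)
        self.energy -= 0.1              # each sample costs energy

    def summarise(self):
        """(b) reduce raw samples to a compact local summary (the mean)."""
        return sum(self.readings) / len(self.readings) if self.readings else None

    def communicate(self):
        """(d) push the summary to nearest neighbours, if energy permits."""
        if self.energy <= 0:            # (c) defer when the budget is spent
            return 0
        summary = self.summarise()
        for n in self.neighbours:
            n.receive(self.element_id, summary)
        return len(self.neighbours)

    def receive(self, sender_id, summary):
        self.readings.append(summary)   # aggregate neighbour summaries locally

a, b = SensorElement("e1"), SensorElement("e2")
a.neighbours.append(b)
for v in (1.0, 2.0, 3.0):
    a.sense(v)
delivered = a.communicate()             # e2 now holds e1's local mean
```

The key design point mirrored here is that raw data never leaves the element; only summaries cross the (energy-expensive) communication boundary.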
One methodology, upon which the above module concept is based, is the use of thin flex and thinned silicon, assembled by flipping a number of silicon ICs and bonding them (face-down) to the flex; the element package ‘stack’ is created by folding the flex (as shown in Fig. 2.7), with chip stacking at a thickness of ~50 microns and flex folding at 3~5 microns per layer. A fully functioning module could include bare die versions of commercially available IC microprocessors, wireless chipsets, and micro-sensors; however, it is more likely that specific ASIC designs will have to be made to achieve optimised functions, particularly for network-level performance.

Fig. 2.6 Conceptual version of a flex and 3-D silicon assembly for a miniaturised sensor element (e.g. 5mm modules), showing the three-dimensional interconnect of thin ICs using flex folding

Fig. 2.7 Practical prototyping process for building 5mm modules [123]

Fig. 2.8 An early conceptual drawing of a generalized, highly miniature sensor element, of the order of 1mm in size. Research on the fabrication and assembly requirements for this version has shown that, while prototype versions could be built, significant challenges may exist in translating to an effective volume of production

The implementation of 1mm modules, the “intelligent seeds”, will require significant levels of innovation in hardware platforms (see the conceptual outline, created in the early stages of the vision development process, in Fig. 2.8). In this regard, the development of novel substrates, represented in the form of silicon fibres (of the order of 50 microns wide by 1 micron thick), is extremely important, and issues such as substrate processing and handling become central to effective fabrication and assembly. Techniques with high potential for success here include self-assembly processes, which bridge the gap between micro- and nano-scale assembly.
4.2
Networkable Embedded Sensing Elements
The practical implementation of augmented materials in the medium term requires the physical integration of networkable sensor elements (into materials) to be adapted to the current capabilities of hardware interconnection and packaging technology platforms; effective use of these platforms should permit the augmented materials concept to be proved successfully even if the elements ultimately require further miniaturisation. A current approach is based upon embedding the miniaturised sensor elements in a mold of the material, creating pre-forms (see Fig. 2.9) that in isolation behave as autonomous sensing/computation elements and, when physically bonded together (e.g. through a heat step), form the augmented material construct as a two-dimensional layer. This bonding process should enable networking (and “digital bonding”), creating local-global data management and communications infrastructure and providing a viable augmented material behaviour for further study. The implementation of a full network of sensor elements integrated within a material (to provide a full state description of that material) brings challenges in complexity, and ultimately, cost. The use of relatively simple sets of physical sensors to monitor material behaviour offers a potentially promising approach to managing these challenges. However, utilising these types of sensors may require significant additional computational power and memory to resolve and manage the data. It is important to investigate the most appropriate optimisation of an augmented material network, ensuring the highest level of simplicity, by evaluating heterogeneous networks of sensing and computing elements; toolkits for wireless sensor networks will be employed as they offer the greatest flexibility in completing the analyses. The development of viable control systems behaviour using augmented materials is ultimately an enabling feature that would maximise the value of this technology platform. 
In principle, this would include embedded networks of actuators in augmented materials that are controlled by the network of sensor elements; full realization of this aspect of the system is a significant challenge.
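The pre-form bonding step described above, with its accompanying “digital bonding”, can be sketched abstractly as follows. This is a hypothetical model, not a description of the actual process: each pre-form carries its own autonomous element network, and the physical heat-step bond triggers a merge of the two networks into one shared infrastructure spanning the combined material.

```python
# Abstract sketch of "digital bonding" (hypothetical model): each pre-form
# carries an autonomous network of embedded element IDs; a physical bond
# merges the two networks so data management spans the combined material.

class PreForm:
    def __init__(self, name, elements):
        self.name = name
        self.elements = set(elements)    # element IDs embedded in the mold
        self.links = set()               # element-to-element communication links

    def bond(self, other, interface_pairs):
        """Physical heat-step bond: union the element sets and add
        communication links across the new material interface."""
        merged = PreForm(self.name + "+" + other.name,
                         self.elements | other.elements)
        merged.links = self.links | other.links | set(interface_pairs)
        return merged

left = PreForm("left", ["L1", "L2"])
right = PreForm("right", ["R1", "R2"])
# Elements L2 and R1 sit at the bonded edge and can now reach each other,
# joining the two previously isolated networks into one.
slab = left.bond(right, [("L2", "R1")])
```

The essential behaviour the sketch captures is that the physical joining operation and the network-joining operation are one and the same event, which is what makes object fabrication usable as a programming method.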
Fig. 2.9 The first-stage implementation of augmented materials, using nodes integrated into material pre-forms that can be physically bonded together using a heat step
5
Conclusion
Creating smart spaces is the focus of much research attention, not least because it forms a core part of realising Ambient Intelligence in the future. Ambient Intelligence describes “the convergence of ubiquitous computing, ubiquitous communication, and interfaces adapting to the user. Humans will be surrounded all the time wherever they are by unobtrusive, interconnected intelligent objects”. These objects (e.g. furniture, DIY tools, office equipment, etc.) will be infused with sensory and computational capability to create an information society characterised by high user-friendliness and individualized support for human interaction. A number of conceptual frameworks exist to enable this, including augmented materials: materials that can describe – even ‘know’ – their own status and can be used to build smart objects, using the object fabrication process as a programming method. In practical terms this will become a heterogeneous system, linking networks of sensors that are physically embedded in objects with internet-level information management systems that enable collections of smart objects to collaborate to provide proactive services to the user. The full realization of augmented materials is a significant challenge on a number of levels. Key research issues in this regard will include (but are not limited to): investigating effective processing and assembly technologies for 3-D integration of computational platforms and sensor subsystems into the appropriate element sizes; creating a packaging technique for the element suited to the parameters of the target material to be augmented; and researching power supply, management and optimisation issues based upon the availability of constrained power sources (e.g. rechargeable, portable batteries) supplying numerous elements, with a requirement for distributed power management at the material level.
From a material- and network-level perspective, the following issues will be important: investigating the computational requirements of individual elements, specifically processor requirements for networking and local sensor data management, as well as associated memory capacity needs; and investigating the most appropriate means of network communication, analysing the effectiveness of combining both wired and wireless communications formats for appropriate augmented materials assembly formats. It will also be particularly important, as part of a multi-disciplinary effort, to relate progress in this domain to that in other overlapping areas, including researching potential system analogies with wireless sensor networks and smart, or cooperating, object infrastructures.
References
1. M. Weiser, “The Computer for the 21st Century”, Scientific American, pp. 94–104, September 1991
2. G. D. Abowd and E. D. Mynatt, “Charting past, present, and future research in ubiquitous computing”, ACM Transactions on Computer-Human Interaction (TOCHI), Volume 7, Issue 1 (March 2000), Special issue on human-computer interaction in the new millennium, Part 1, pp. 29–58, ISSN 1073-0516
3. IST Advisory Group (ISTAG), Scenarios for Ambient Intelligence in 2010: http://www.cordis.lu/ist/istag-reports.htm
4. J. Kephart and D. Chess, “The vision of autonomic computing”, IEEE Computer 36(1), pp. 41–52, January 2003
5. R. Want, T. Pering and D. Tennenhouse, “Comparing autonomic and proactive computing”, IBM Systems Journal, Volume 42, Issue 1 (January 2003), pp. 129–135, ISSN 0018-8670
6. B. Warneke, M. Last, B. Leibowitz and K. S. J. Pister, “Smart Dust: Communicating with a Cubic-Millimeter Computer”, IEEE Computer 34(1), pp. 43–51, January 2001
7. http://www2.parc.com/spl/projects/smart-matter/
8. N. Gershenfeld, R. Krikorian and D. Cohen, “The Internet of Things”, Scientific American Magazine, October 2004
9. The Disappearing Computer initiative: http://www.disappearing-computer.net/
10. O. Omojola, E. Rehmi Post, M. D. Hancher, Y. Maguire, R. Pappu, B. Schoner, P. R. Russo, R. Fletcher and N. Gershenfeld, “An installation of interactive furniture”, IBM Systems Journal, Volume 39, Issue 3–4 (July 2000), pp. 861–879, ISSN 0018-8670
11. S. Park and S. Jayaraman, “Enhancing the quality of life through wearable technology”, IEEE Engineering in Medicine and Biology Magazine, May–June 2003, Volume 22, Issue 3, pp. 41–48
12. S. Jung, C. Lauterbach, M. Strasser and W. Weber, “Enabling technologies for disappearing electronics in smart textiles”, IEEE International Solid-State Circuits Conference (ISSCC) 2003, Digest of Technical Papers, vol. 1, pp. 386–387
13. EU Project CVIS – Cooperative Vehicle-Infrastructure Systems Project: www.cvisproject.org (IST-2004-027293)
14. M. Abdulrahim, H. Garcia and R. Lind, “Flight Characteristics of Shaping the Membrane Wing of a Micro Air Vehicle”, Journal of Aircraft, Vol. 41, No. 1, January–February 2005, pp. 131–137
15. The Virginia Smart Road: http://www.vtti.vt.edu/virginiasmartroad.html
16. http://technology.newscientist.com/channel/tech/dn13592-intelligent-paint-turns-roads-pink-in-icy-conditions.html
17. Y. Chen, J. Au, P. Kazlas, A. Ritenour, H. Gates and M. McCreary, “Electronic paper: Flexible active-matrix electronic ink display”, Nature 423, 136 (2003)
18. F. Eder, H. Klauk, M. Halik, U. Zschieschang, G. Schmid and C. Dehm, “Organic electronics on paper”, Appl. Phys. Lett., Vol. 84, No. 14, 5 April 2004, pp. 2673–2675
19. R. Want, T. Pering, G. Borriello and K. I. Farkas, “Disappearing hardware – ubiquitous computing”, IEEE Pervasive Computing, Vol. 1, No. 1, pp. 36–47, Jan.–Mar. 2002
20. S. Sumner, E. Lucas, J. Barker and N. Isaac, “Radio-Tagging Technology Reveals Extreme Nest-Drifting Behavior in a Eusocial Insect”, Current Biology, Volume 17, Issue 2, 23 January 2007, pp. 140–145
21. The Fab@Home project: http://www.fabathome.org
22. http://reprap.org/bin/view/Main/WebHome
23. N. Streitz and P. Nixon, “The disappearing computer”, Special Issue, Communications of the ACM 48(3), March 2005
24. The PalCom Project: http://www.ist-palcom.org
25. P. Andersen, J. E. Bardram, H. B. Christensen, A. V. Corry, D. Greenwood, K. M. Hansen and R. Schmid, “Open Architecture for Palpable Computing: Some Thoughts on Object Technology, Palpable Computing, and Architectures for Ambient Computing”, ECOOP 2005 Object Technology for Ambient Intelligence Workshop, Glasgow, U.K., 2005
26. European Network for Intelligent Information Interfaces: http://www.i3net.org/
27. http://convivionetwork.net/
28. M. Hawley, R. Dunbar Poor and M. Tuteja, “Things That Think”, Personal and Ubiquitous Computing, Volume 1, Number 1, March 1997, pp. 13–20, ISSN 1617-4909 (Print), 1617-4917 (Online)
29. Garlan, D., Siewiorek, D., Smailagic, A., Steenkiste, P. “Project Aura: Toward DistractionFree Pervasive Computing”, IEEE Pervasive Computing, April–June 2002 30. I. MacColl, D. Millard, C. Randell, A. Steed, B. Brown, S. Benford, M. Chalmers, R. Conroy, N. Dalton, A. Galani, C. Greenhalgh, D. Michaelides, T. Rodden, I. Taylor, M. Weal, Shared visiting in EQUATOR city: Collaborative Virtual Environments, Proceedings of the 4th international conference on Collaborative virtual environments, Bonn, Germany, Pages: 88–94, Year of Publication: 2002, ISBN:1-58113-489-4 31. The Easy Living Project. http://research.microsoft.com/easyliving/ 32. Center for Embedded Networked Sensing (CENS), National Science Foundation #CCR-0120778 33. G. Pottie, W. Kaiser, Wireless Integrated Network Sensors, Communications of the ACM, 43(5), May 2000 34. The T-Engine Forum: http://www.t-engine.org/english/press.html 35. Adaptive Interfaces Cluster: http://www.adaptiveinformation.ie/home.asp 36. Cooperating Embedded Systems for Exploration and Control featuring Wireless Sensor Networks (Embedded WiSeNts): http://www.embedded-wisents.org 37. EU IST Programme on Networked Embedded and Control Systems: http://cordis.europa. eu/fp7/ict/necs/home_en.html 38. EPOSS – European Technology Platform on Smart Systems Integration: www.smart-systemsintegration.org/public 39. Artemis: Advanced Research and Development on Embedded Intelligent Systems, http:// www.cordis.lu/ist/artemis/index.html 40. M. Tubaishat and S. Madra. Sensor networks: an overview. IEEE Potentials 22(2), pp. 20–23. April 2003. 41. I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, “Wireless sensor networks: a survey” Computer Networks Volume 38, Issue 4, 15 March 2002, Pages 393–422. 42. A. Mainwaring, J. Polastre, R. Szewczyk, D. Culler, J. Anderson, “Wireless Sensor Networks for Habitat Monitoring”, WSNA’02, September 28, 2002, Atlanta, Georgia, USA. 43. Vin de Silva and Robert Ghrist. Homological sensor networks. 
Notices of the American Mathematical Society 54(1), pp. 10–17. January 2007. 44. L. Lazos and R. Poovendran, “Stochastic coverage in heterogeneous sensor networks,” ACM Transactions on Sensor Networks 2(3), August 2006, pages 325–358. 45. S. Dobson, K. Delaney, K. Razeeb and S. Tsvetkov, “A Co-Designed Hardware/Software Architecture for Augmented Materials”, 2nd International Workshop on Mobility Aware Technologies and Applications (MATA’05), October 2005. 46. G.E. Moore “cramming more components onto integrated circuits”, Electronics, Vol.38 1965 – pp114–117 47. M. Broxton, “Localization and Sensing Applications in the Pushpin Computing Network”, Master of Engineering in Computer Science and Electrical Engineering at the Massachusetts Institute Of Technology, February 2005 48. L. McElligott, M. Dillon, K. Leydon, B. Richardson, M. Fernström, J. A. Paradiso, ‘ForSe FIElds’ - Force Sensors for Interactive Environments, Lecture Notes In Computer Science; Vol. 2498, Proceedings of the 4th international conference on Ubiquitous Computing, Göteborg, Sweden, Pages: 168–175, Year of Publication: 2002, ISBN:3-540-44267-7 49. International Technology Roadmap for Semiconductors; http://www.itrs.net/home.html 50. J. Barton, B. Majeed, K. Dwane, K. Delaney, S. Bellis, K. Rodgers, S.C. O’Mathuna, “Development and Characterisation of ultra-thin Autonomous Modules for Ambient System Applications Using 3D Packaging Techniques”, 54th Electronics Components and Technology Conference (ECTC2004), June 1–4 2004, Las Vegas, USA 51. J. Barton, K. Delaney, S. Bellis, S.C. O’Mathuna, J.A. Paradiso, and A. Benbasat. Development of Distributed Sensing Systems of Autonomous Micro-Modules. 53rd Electronic Components and Technology Conference. 2003. 52. Simon Dobson. Applications considered harmful for ambient systems. Proceedings of the International Symposium on Information and Communications Technologies, pp. 171–6. 2003.
Part II
Device Technologies: Microsystems, Micro Sensors and Emerging Silicon Technologies
1.1 Summary
Sensors and actuators represent an important interface between the human user and electronic systems. Many of these devices are fabricated in silicon. While it should not be expected that silicon will be used in all of the devices and subsystems that will grow and integrate to form Ambient Intelligence (AmI), the material’s role in driving Integrated Circuit (IC) technology and its use in Micro-Electro-Mechanical Systems (MEMS) make it central to any viable AmI solution. This part provides an overview of relevant silicon sensor devices and, in particular, a selection of MEMS devices that have been developed and are likely to play a significant role in future smart systems. The second chapter in this section looks at silicon itself, providing an insight into how silicon circuits are fabricated. More importantly, the chapter also looks at silicon as a material with the potential to evolve. Current ‘traditional’ silicon sensor devices will form only part of an AmI system. New forms of sensing will need to emerge and existing devices will need to be transformed, becoming embedded in objects and spaces that cannot currently be accessed. Silicon has a significant role to play here, beyond the established circuits, devices and subsystems. The material’s versatility means it will be the substrate for many of the new sensing (and actuation) solutions that will be created in building the AmI infrastructure.
1.2 Relevance to Microsystems
As this section is about microsystems devices, the relevance is obvious. This section provides a snapshot of this large technology area to those with limited knowledge of silicon and microsystems devices. For those with more experience and expertise in MEMS technologies and their component materials, this section provides a frame of reference for the role of these technologies in creating the future AmI infrastructure.
1.3 Recommended References
There are numerous publications that would support a deeper understanding of MEMS devices and silicon technologies; this is a very large area of research and innovation, and many references are provided in the two chapters. The following two references should also offer useful sources of further information to those who may be interested in learning more. The first is the MEMS/Nano Reference Shelf itself, of which this book forms a part; it is a growing repository of information for those interested in the broad technology issues for MEMS or in the specific challenges for individual devices. The second is a text providing a detailed insight into many aspects of silicon circuit fabrication, the behaviour of the material itself and its future directions. 1. S. D. Senturia (Series Editor), The MEMS/Nano Reference Shelf, Springer Publishing. 2. P. Siffert and E.F. Krimmel (Eds.), Silicon: Evolution and Future of a Technology, Springer, 2004, XIX, 534 p., 288 illus., ISBN: 3-540-40546-1.
Chapter 3
Overview of Component Level Devices

Erik Jung
Abstract Ambient intelligence (AmI) relies upon the integration of sensors with read-out and signal conditioning circuits, on feedback mechanisms (e.g. actuators) and, not least, on the integration of telecommunication components to link these building blocks to a central unit or to a set of distributed computing entities. Sensors represent the ‘eyes’, ‘ears’, ‘nose’ and ‘touch’ equivalents of the human senses and, based upon these, a multitude of Ambient Intelligence (AmI) scenarios have been developed [1–3]. Beyond that, sensors provide access to parameters not perceived by humans, enabling additional monitoring, prediction and reaction scenarios [4]. This chapter provides an overview of the sensors, and in particular the micro-electro-mechanical system (MEMS) devices, that have been developed to provide the AmI sensor interface of the future. Keywords Micro-electro-mechanical system (MEMS) devices, bulk micromachining (BMM), surface micromachining (SMM), low power sensors, acceleration, gyroscope, pressure, vibration, shock, humidity, microphones, bio- and chemo-electrical sensors, energy scavenging, energy storage.
1 Introduction
Among the array of sensor variants suggested for use in AmI are thermal, pressure, radiation, vibration, acceleration sensors, and also optical and bio-sensors, which leverage innovative detection mechanisms (e.g. plasmon resonance) and bio-electronic coupling, incorporating living cells in combination with sensitive electronics. Circuitry to connect the sensor signal (usually an analog output) to the digital world requires low power, high sensitivity and - not least - ruggedness against the environmental conditions to which the sensor might be exposed (e.g. high energy radiation). The latter is also true for the electronics managing the communication to the outer world. The frequency of operation might be determined by the ambient
Fraunhofer IZM, Gustav-Meyer-Allee 25, 13355 Berlin, Germany
K. Delaney, Ambient Intelligence with Microsystems, © Springer 2008
conditions, or by the transmission range to the central unit or to the next distributed entity [5]. The protocols employed by the telecommunications circuitry need to perform with low power consumption and high reliability for appropriate signal integrity in potentially noisy environments [6]. The autonomy of AmI systems may build on ultra low power consumption circuitry, on low transmission rates and on low duty cycles to maintain battery lifetime for an extended duration. However, modern microfabrication has also produced innovative concepts for energy harvesting from the ambient environment. Solar energy can facilitate applications where there is exposure to the open sky and sunlight. Thermal energy harvesters, which rely on small temperature differences between two terminals, have been reported to generate microwatts of power when attached to human skin. Larger temperature differences are also possible, for example when the harvester is attached to technical equipment, and this will be advantageous to their use. Vibration and acceleration harvesters convert mechanical energy (e.g. from a moving human) into microwatts of electrical power. Harvesting energy from ambient radiation has also been proposed and the establishment of a dedicated infrastructure for this has recently received attention [7, 8]. An issue common to today’s harvesters is the need to convert the harvested electrical charge, with its varying voltages and currents, into a useful stream of electrons and into a storage device. These converters need to be optimized for the expected mission profile and, as of today, are still major contributors to losses in the system [9]. Self discharge of storage devices is another issue that prevents, in many cases, full autonomy for AmI building blocks. Assembly and packaging has become established as a potentially pivotal aspect in the realization of AmI building blocks.
These ‘modules’ need to be unobtrusive and they should not prevent the entity (human, animal, technical equipment) that is being monitored from working with best efficiency. They need to be rugged enough to withstand the rigors of everyday use. They need to have all of the interfaces in place to monitor their ambient conditions (e.g. gas sensors, which require fluidic ducts) without compromising their manufacturability. And – not least - the fabrication process should be low cost, second sources should be available and the technologies employed should be scalable from small to large volumes. Combining modern sensor technology, electronic mixed signal processors and embedded transceivers with advanced assembly techniques, ambient intelligence is now becoming reality.
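The dependence of battery lifetime on duty cycle and harvested power described above can be made concrete with a back-of-the-envelope budget. The sketch below uses invented figures (active power, sleep power, harvester output) for a generic node, not values from any specific device.

```python
# Back-of-the-envelope energy budget for a duty-cycled AmI node.
# All figures are illustrative assumptions, not measured values.

def average_power_uw(active_mw, sleep_uw, duty_cycle):
    """Average draw (uW) for a node that is active a fraction of the time."""
    active_uw = active_mw * 1000.0
    return duty_cycle * active_uw + (1.0 - duty_cycle) * sleep_uw

def is_self_sustaining(harvested_uw, active_mw, sleep_uw, duty_cycle):
    """True if the harvester covers the node's average consumption."""
    return harvested_uw >= average_power_uw(active_mw, sleep_uw, duty_cycle)

# Assumed node: 30 mW when sensing/transmitting, 5 uW asleep,
# active 0.1% of the time.
avg = average_power_uw(active_mw=30.0, sleep_uw=5.0, duty_cycle=0.001)
print(f"average draw: {avg:.1f} uW")

# A harvester delivering ~100 uW (e.g. a thermal scavenger with a good
# gradient) sustains this node; a ~20 uW one does not.
print(is_self_sustaining(100.0, 30.0, 5.0, 0.001))  # True
print(is_self_sustaining(20.0, 30.0, 5.0, 0.001))   # False
```

In this assumed budget the duty cycle, not the active power, is the biggest lever: at 1% activity the same node would need roughly 300 µW.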
2 Sensors for Ambient Intelligence

2.1 Low Power Sensors Using MEMS Technology
Sensors are the gateways from the ambient environment to the electronic intelligence. Many sensors encountered in the past, however, needed to be driven by a supply of significant electrical power. This would render them useless in the
context of “intelligent ambient sensing”, as power requirements are one crucial aspect of system autonomy and central to a system’s acceptance by the user. Over the past decade, silicon micromachining has resulted in the replacement of a large number of conventional sensors by their micro-electro-mechanical system (MEMS) counterparts, fabricated mostly in silicon. Two major techniques are available for sensor fabrication when using the semi-conducting silicon material, namely bulk micromachining and surface micromachining. Bulk micro-machined (BMM) devices rely upon the structured removal of large amounts of silicon from the wafer, thereby creating, for example, thin membranes, hinged proof masses or robust capacitive sensors [10]. Depending upon the etching process, the crystal orientation will either give preferential removal of silicon (anisotropic etching) or the removal will be, more or less, independent of the crystal orientation of the etched plane (isotropic etching) [10]. Fig. 3.1 shows, through a schematic, the difference between the two etching processes, while Fig. 3.2 depicts a bulk micro-machined capacitive acceleration sensor. Capping (i.e. placing a cap over all, or part, of a sensor device) provides protection and may even add to the functionality of the device (See Fig. 3.3). Capping can be performed by a multitude of processes; the workhorses are glass frit bonding and anodic bonding [11]. Electrochemical etching provides another method to exert process control [12]. In all cases, a stop layer, or exact time control of the process, is required to form the desired structure in the z-dimension, while the lateral features are defined by a masking process. The resulting structures are defined within the bulk of the silicon wafer, hence “bulk micromachining”.
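The geometric consequence of anisotropic etching can be sketched numerically. Assuming KOH-style etching of a (100) wafer, the {111} sidewalls sit at 54.74° to the surface, which fixes the relationship between mask opening, etch depth and cavity bottom width; the dimensions below are illustrative, not from any specific process.

```python
import math

# Anisotropic (KOH-style) etching of (100) silicon exposes {111} sidewalls
# at 54.74 degrees to the surface, so a square mask opening shrinks as the
# etch goes deeper. Illustrative geometry only.

SIDEWALL_ANGLE_DEG = 54.74

def cavity_bottom_width(mask_opening_um, depth_um):
    """Bottom width of a cavity for a given mask opening and depth.
    Returns 0 if the etch self-terminates in a V-groove first."""
    shrink = 2.0 * depth_um / math.tan(math.radians(SIDEWALL_ANGLE_DEG))
    return max(0.0, mask_opening_um - shrink)

def max_vgroove_depth(mask_opening_um):
    """Depth at which the two {111} sidewalls meet (etch stops)."""
    return mask_opening_um * math.tan(math.radians(SIDEWALL_ANGLE_DEG)) / 2.0

# A 500 um mask opening etched 200 um deep leaves a ~217 um wide bottom,
# e.g. a membrane window for a pressure sensor.
print(round(cavity_bottom_width(500.0, 200.0), 1))
print(round(max_vgroove_depth(100.0), 1))  # ~70.7 um for a 100 um opening
```

This self-terminating geometry is one reason the text notes that either a stop layer or exact time control is needed to define the z-dimension.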
Surface micromachining (SMM), however, defines the structures by selective removal and deposition of thin layers on top of the surface of the silicon (or an alternative material) substrate (See Fig. 3.4).
Fig. 3.1 Schematic of isotropic and anisotropic etching
Fig. 3.2 Bulk micro-machined comb structures for capacitive sensing (courtesy Freescale)
Fig. 3.3 Bulk micro-machined accelerometer, with top and bottom cap providing electrode functionality (Fraunhofer IZM Chemnitz)
Polysilicon, oxides and nitrides, as well as metal layers, are typical candidates to build these structures; they are defined laterally by photo-masking and, in the z-dimension, by the deposition thickness of the respective (multi-)layers (See Figs. 3.5 and 3.6). Dry, plasma-based etching processes first emerged in association with surface micromachining; over the past decade these have evolved to also cover the domain of bulk micromachining, with highly increased material removal rates [14, 15] (See Fig. 3.7). One of the advantages of the MEMS processes is the potential for miniaturisation, as shown in Fig. 3.8, which comes also with an improvement in power requirements (e.g. driving proof masses in the mg range instead of several grams).
Fig. 3.4 Surface micromachining process
Fig. 3.5 Surface micromachined gyroscope with protective cap open (courtesy Bosch)[13]
Mechanical sensors like accelerometers, vibrometers, pressure sensors, gyroscopes and similar structures rely on mechanical features – either surface or bulk micromachined - coupled with either on- or off-chip readout electronics. Fig. 3.9 and Fig. 3.10 show examples of systems with off-chip electronics, mounted closely together in a common package. Fig. 3.11 and Fig. 3.12 show examples with on-chip electronics, benefiting from the short interconnect distances between the sensor and the readout circuit as well as from the overall reduction of real estate.
Fig. 3.6 Surface micromachined capacitive uniaxial accelerometer (courtesy Freescale)
Fig. 3.7 DRIE for micromachining (courtesy Alcatel Micromachining Systems)
The advantage of on-chip MEMS integration is clear for highly robust MEMS manufacturing processes, with small areas used for the MEMS itself. If the MEMS device has a low yield and requires a large area, the cost advantage is lost, since the per-area price of the multi-mask processes required for the microcontroller will dictate the total area cost of the system. Other sensors, like thermopiles or gas sensors, can be built directly on top of the CMOS circuitry, as they are manufactured either in CMOS or in a CMOS compatible process. Humidity or chemical sensors are created in this way, adding sense materials on top (e.g. interdigitated electrodes – Fig. 3.13). Combining mechanical microfabrication and system integration techniques, fully integrated cameras have now become a reality, also enabling high resolution
Fig. 3.8 Acceleration sensor evolution (1984–1997: hybrid piezo-electrical metal can; 1997: silicon MEMS capacitive PLCC28; 2002: silicon MEMS capacitive SOIC16w; 2006: silicon MEMS capacitive SOIC1nn) - MEMS and packaging technology improvements have shrunk the sensors from ~9 cm3 to 0.2 cm3 (image courtesy of Bosch [16])
Fig. 3.9 Airbag sensor with accelerometer and microcontroller (courtesy BOSCH)
visual sensors to be incorporated in ambient sensors [17, 18], as in Fig. 3.14 and Fig. 3.15. The smaller sizes typically allow faster response times, lower energy consumption and smaller overall systems with lower cost. While the latter is a paramount requirement for “ambient sensors”, to secure distribution in the hundreds of thousands, miniaturization is an enabler for unobtrusive, scalable system components (and systems), which will make them more acceptable to the user. For autonomously
Fig. 3.10 Multi Axis Accelerometer for harsh environment with microcontroller unit (courtesy CSEM)
Fig. 3.11 Kavlico’s barometric atmospheric pressure sensor with on-chip electronics
operating ambient sensing systems, power management is the next obstacle. Sensing principles that provide low power, compared to the alternatives, are advantageous (e.g. piezoresistive vs. capacitive sensors for pressure sensing, impedance changes vs. calorimetry for humidity) and need to be considered during the system design phase. A number of sensors with high miniaturization potential, low power requirements and low cost are described in the following sections.
2.2 Acceleration Sensors
Bulk micromachining of acceleration sensors (See Fig. 3.16 and Fig. 3.17) has been a workhorse technology for many years. A proof mass is suspended on an elastic structure and is shifted from its position during acceleration. A capacitive or electrostatic signal is picked up and converted into a digital output by the associated microcontroller. However, due to the improved compatibility with CMOS processes [19] and the adequate sensitivities obtained for the sensors realized in surface micromachining, the majority of commercial sensors are now fabricated in this way (by SMM instead of BMM).

Fig. 3.12 Analog Devices’ gyroscope with CMOS electronics

Fig. 3.13 Packaged humidity sensor - sensitive polymer on interdigitated electrodes integrated on digital conversion circuit (courtesy of Sensirion)
Fig. 3.14 The Opto Package courtesy of SCHOTT, packaging a high resolution camera chip with through silicon vias in an ultra-small footprint
Fig. 3.15 Integrated optics for a wafer level fabricated camera system (courtesy Tessera)
2.2.1 Surface Micro-machined Proof Mass, Passive Capacitance
The advantage of surface micro-machining for accelerometers, leveraging CMOS compatible processes, has resulted in the favoring of this technology over BMM. Integrating the sensor with read-out circuitry on-chip, as done, for example, by Analog Devices [20, 21], minimizes the total size significantly while providing the shortest interconnects between the sensor’s output and the readout interface (See Fig. 3.18). Commercial SMM sensors can achieve read-out ranges from 2g to 250g, surviving shocks well above 3000g. As the combined process requires very high process yields for the MEMS in order not to sacrifice “expensive” CMOS real estate,
Fig. 3.16 Bulk micromachined acceleration sensor with capacitive signal readout (courtesy Fraunhofer IZM Chemnitz)
Fig. 3.17 Electromagnetic coil, bulk micromachined proof mass moving in a high density micromachined metal coil (image courtesy Freescale)
alternative concepts to integrate the read-out circuit at the side, or on top of the sensor, have been developed and are in mass fabrication as well. The improved yield of the system comes at a cost of reduced signal strength from the sensor due to the larger interconnect lines and an overall size increase in the system (See Fig. 3.19).
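The spring-mass-capacitor behaviour underlying these accelerometers can be illustrated with a toy model: the deflection x = m·a/k changes the two sense-gap capacitances in opposite directions. All parameter values below are invented, though of a realistic order of magnitude for an SMM device.

```python
# Toy model of a differential capacitive accelerometer. The proof mass m on
# a suspension of stiffness k deflects by x = m*a/k; the two sense gaps
# change capacitance C = eps0*A/gap in opposite directions.

EPS0 = 8.854e-12  # permittivity of free space, F/m

def accel_response(accel_g, m_kg=1e-10, k_n_per_m=0.4,
                   area_m2=1e-7, gap_m=1.3e-6):
    """Return (deflection in m, differential capacitance change in F).
    Default parameters are assumptions, not a real device's values."""
    x = m_kg * accel_g * 9.81 / k_n_per_m
    c_plus = EPS0 * area_m2 / (gap_m - x)
    c_minus = EPS0 * area_m2 / (gap_m + x)
    return x, c_plus - c_minus

# At 1 g the assumed device deflects by a few nanometres and the
# differential capacitance changes by a few femtofarads.
x, dc = accel_response(1.0)
print(f"x = {x*1e9:.2f} nm, dC = {dc*1e15:.2f} fF")
```

The femtofarad-scale signal is why short interconnects between the sensor and its readout circuit, as discussed above, matter so much.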
2.3 Gyroscope Sensors

2.3.1 Bulk Micromachining
Bulk micromachining of a silicon wafer creates a multi-layered sensor with high sensitivity; it employs a reference electrode against which the sensing electrode is shifted during a rotational event (Coriolis force). A capacitive signal can be derived from the frequency shift, indicating the dynamic rotation angle (See Fig. 3.20) [22].
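The Coriolis coupling these gyroscopes exploit is F_c = 2·m·(Ω × v). A minimal sketch, with assumed drive parameters of a magnitude typical for MEMS gyroscopes:

```python
import math

# Sketch of the Coriolis coupling a vibratory MEMS gyroscope senses:
# a mass driven at velocity v under rotation rate Omega experiences
# F_c = 2*m*(Omega x v). All parameter values are assumptions.

def coriolis_force(m_kg, drive_amp_m, drive_freq_hz, omega_rad_s):
    """Peak Coriolis force for a sinusoidally driven proof mass."""
    v_peak = 2.0 * math.pi * drive_freq_hz * drive_amp_m  # peak drive velocity
    return 2.0 * m_kg * omega_rad_s * v_peak

# Assumed device: 1e-9 kg mass, 5 um drive amplitude at 10 kHz,
# rotating at 1 rad/s.
f_c = coriolis_force(1e-9, 5e-6, 10e3, 1.0)
print(f"{f_c*1e9:.2f} nN")  # ~0.63 nN
```

The sub-nanonewton force illustrates why integrating the readout electronics close to the sense capacitor pays off: the signal to be resolved is tiny.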
Fig. 3.18 Accelerometer in surface micromachining, integrated in a CMOS design. (Courtesy of Analog Devices)
Fig. 3.19 Surface micro machined accelerometer, capped and mounted on top of, or at the side of, the microcontroller (courtesy Freescale)
Fig. 3.20 Gyroscope fabricated by bulk micromachining and layer bonding (courtesy Fraunhofer IZM Chemnitz)
2.3.2 Surface Micromachining
Using SMM, and also exploiting the Coriolis force, a vibrating structure is deformed against a counter-electrode on a buried layer, providing a capacitive read-out of the rotational angle. This approach allows integration of the MEMS structure with the CMOS readout electronics, increasing the sensitivity to the small capacitive signal (See Fig. 3.21) [23].
2.4 Pressure Sensors
Pressure sensors can be fabricated by bulk micro-machining from the backside of a silicon wafer, removing the bulk silicon and leaving a thin, deformable membrane (e.g. of nitride or oxide). Bonding this device to a supportive substrate will result in either a differential (reference pressure) or an absolute (vacuum) pressure sensor (See Fig. 3.22 and Fig. 3.23). Bulk micro-machined pressure sensors have, in general, a better sensitivity than the surface micro-machined devices, typically associated with the better gauge factor of crystalline silicon compared with polysilicon [24].
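The gauge-factor advantage translates directly into bridge output voltage. A sketch with an assumed strain level and supply voltage; the gauge factors are rough textbook magnitudes, not values from any specific device:

```python
# Wheatstone bridge read-out sketch: membrane strain gives dR/R = GF*strain,
# and a four-active-arm bridge outputs Vout = Vs * dR/R. The gauge factors
# are rough textbook magnitudes and the strain is an assumed value.

def bridge_output_mv(supply_v, gauge_factor, strain):
    """Output (mV) of a full active Wheatstone bridge."""
    return supply_v * gauge_factor * strain * 1000.0

strain = 2e-4  # assumed membrane strain at full-scale pressure
# Crystalline silicon (GF ~ 100, bulk micromachined membrane) versus
# polysilicon (GF ~ 25, surface micromachined):
print(round(bridge_output_mv(3.0, 100, strain), 1))  # 60.0 mV
print(round(bridge_output_mv(3.0, 25, strain), 1))   # 15.0 mV
```

Under these assumptions the crystalline-silicon device delivers four times the signal for the same membrane strain, consistent with the sensitivity comparison above.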
Fig. 3.21 Gyroscope in surface micromachining integrated with CMOS technology (courtesy Analog Devices)
Fig. 3.22 Bulk micromachined sensor with piezoelectric bridge (Fraunhofer IZM Chemnitz)
Fig. 3.23 Schematic of a bulk micromachined differential pressure sensor with piezoelectric bridge (courtesy Intersema Sensoric S.A. [25])
2.5 Vibration Sensors
Sensors for specific vibrating frequencies have micromachined tongues, which resonate at the target frequencies they are designed to detect. Capacitive sampling provides the feedback of the dominant frequencies in a vibration spectrum.
Fig. 3.24 Surface micromachined vibration sensor with bulk micromachined silicon cap, frit bonded (courtesy IZM-Chemnitz)
While this can be also achieved with accelerometers, the sensitivity, especially for multi-frequency spectra, is much better with dedicated vibration sensors (See Fig. 3.24).
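The resonant tongues can be sized using the standard cantilever-beam resonance formula. A sketch with silicon material constants and assumed dimensions:

```python
import math

# The resonant 'tongues' can be sized with the cantilever-beam formula
# f1 = (beta1^2 / (2*pi)) * sqrt(E*I / (rho*A*L^4)), with beta1*L = 1.875.
# Material constants are for silicon; the dimensions are assumptions.

def cantilever_f1_hz(length_m, thickness_m, youngs_pa=169e9, rho=2330.0):
    """First resonance of a rectangular silicon cantilever (width cancels)."""
    # For a rectangular cross-section, I/A = thickness^2 / 12.
    return (1.875**2 / (2.0 * math.pi)) * math.sqrt(
        youngs_pa * thickness_m**2 / (12.0 * rho * length_m**4))

# An assumed 1 mm long, 10 um thick tongue resonates around 14 kHz;
# halving the length multiplies the frequency by four (f ~ 1/L^2).
print(round(cantilever_f1_hz(1e-3, 10e-6)))
print(round(cantilever_f1_hz(0.5e-3, 10e-6)))
```

Because frequency scales as 1/L², an array of tongues of graded lengths covers a whole vibration spectrum, which is what gives these sensors their multi-frequency selectivity.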
2.6 Shock Sensors
For shock sensors, the design allows a freely moving proof mass to connect the electrodes when a certain level of acceleration is reached. This digital on/off state is used to detect a shock using a no-power sensor, triggering, for example, a distress signal. As no comparator and evaluation circuitry is needed in this case, the total system for shock detection can operate at extremely low power, while being of small size and low cost (see Fig. 3.25).
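The trip point of such a contact sensor follows from a simple force balance: the mass bridges the gap when m·a ≥ k·gap. The values below are assumptions for illustration:

```python
# Force-balance sketch for a zero-power shock sensor: the freely suspended
# proof mass bridges the electrode gap once m*a >= k*gap. All values assumed.

G = 9.81  # m/s^2

def threshold_g(m_kg, k_n_per_m, gap_m):
    """Acceleration (in g) at which the contacts close."""
    return k_n_per_m * gap_m / (m_kg * G)

# An assumed 1 mg mass on a 100 N/m suspension with a 50 um gap:
print(round(threshold_g(1e-6, 100.0, 50e-6)), "g")  # ~510 g
```

Stiffness, mass and gap thus set the trip level purely mechanically, which is what lets the device sit unpowered until the shock occurs.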
2.7 Humidity Sensors
Using interdigitated electrodes on a CMOS circuit and covering them with a humidity-sensitive polymer, a direct digital sensor can be created that provides low power and a small size. Companies such as Sensirion have realized such a chip with sensitivities of the order of 4.5% r.h. over the range 0–100% r.h., with power requirements of only 30 µW.
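To first order such a polymer sensor is a linear capacitance-to-humidity transducer. The calibration constants below are invented for illustration and are not Sensirion's coefficients:

```python
# First-order model of a polymer humidity sensor: capacitance rises
# linearly with relative humidity. The calibration constants are invented
# for illustration, not taken from any real part's datasheet.

C_DRY_PF = 1.000        # assumed capacitance at 0% r.h.
SENS_PF_PER_RH = 0.003  # assumed sensitivity, pF per % r.h.

def relative_humidity(c_measured_pf):
    """Invert the linear model C = C_dry + S*RH, clamped to 0-100%."""
    rh = (c_measured_pf - C_DRY_PF) / SENS_PF_PER_RH
    return max(0.0, min(100.0, rh))

print(round(relative_humidity(1.150), 1))  # 50.0 (% r.h.)
print(relative_humidity(1.350))            # clamps to 100.0
```

On a real part this inversion runs in the on-chip digital conversion circuit (as in Fig. 3.13), so the host only ever sees the calibrated humidity value.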
2.8 Microphones
Acoustic sensors can be realized in low power, ultra-small sizes by MEMS technology as well. Silicon-based microphones use bulk micromachined sensors (similar to pressure sensor fabrication) mounted at a small distance from a counter-electrode. This capacitive arrangement provides direct conversion of (acoustic) pressure changes into an electronic signal. Sonion MEMS has created a wafer-level integrated solution, realizing the world’s smallest integrated microphone [27] (See Fig. 3.26). Infineon has demonstrated a hybrid integrated dual microphone, leveraging directional sound discrimination through the housing [28] (See Fig. 3.27).
2.9 Bio- and Chemo-electrical Interfaces for Sensors
With the advent of bio- and chemo-terrorism, sensors to determine threat levels, and provide early warnings, have become a highly researched topic.
Fig. 3.25 MEMS based shock sensor (relay) with zero power requirements, showing +Y/-Y trigger, reset, common and ground terminals (courtesy Stanley Associates) [26]
Fig. 3.26 Fully integrated silicon MEMS microphone in wafer level assembly technique (courtesy of Sonion MEMS)
Fig. 3.27 Dual microphone with bulk micromachined silicon acoustic sensors (Infineon)
Leveraging sensitive materials tailored to the specific threat, sensing principles based on resistive change (MOS sensors [29, 30]), on resonant frequency shifts (e.g. SAW, QMB), on changes in optical properties (e.g. surface plasmon resonance (SPR) [31]) or on impedance changes [32, 33] have been developed. Sensors based on chemo-sensitive polymers provide low power requirements while preserving excellent sensitivity. Hydrogels, for example, can be fine-tuned to react quite specifically to a certain agent. This can induce nonlinearity in their swelling behavior, which can then be detected by, for example, a piezoresistive strain gauge. The impedance measurement of chemo-sensitive polymers (See Fig. 3.28) is a similarly low power detection mechanism. Even the direct interfacing of live cells to provide bio-electric feedback has been researched (See Fig. 3.29) [34]. In contrast to the aforementioned sensors, microphones, pressure sensors and bio-chemo-sensors need to directly interface with the ambient environment, challenging the assembly and packaging techniques with the task of providing robust protection while selectively allowing the parameters that must be measured to enter the sensor.
E. Jung
Fig. 3.28 Gel-based chemical sensor with 16 sensing elements on a 2×5 mm chip with 0.5 µW power requirement (courtesy Seacoast Science)
Fig. 3.29 Interfacing of live cells to electronic sensor circuitry (image courtesy Fraunhofer IZM and Fraunhofer IBMT)
3 Energy Scavenging Devices
In order to increase the autonomy of ambient intelligence sensor nodes, energy scavenging from the environment is the most attractive approach. Scavenging is usually employed to charge a storage device (capacitor, rechargeable battery), as in ambient intelligence environments it is not assured that the scavenging source is always available (e.g. solar energy, vibration energy). Generally, the scavenging procedure follows the path given in Fig. 3.30, indicating typical losses.

Fig. 3.30 Energy scavenging - from source to use

A multitude of scavenging mechanisms has been reported to address the various available supplies from the ambient. Here, depending upon the scenario, the best scavenging mechanism must be used. For example, in ambient environments with large thermal gradients, thermal scavenging is best, while for environments with a high level of vibration, mechanical energy scavenging will be the better choice.
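The source-to-use path of Fig. 3.30 can be sketched as a chain of stage efficiencies multiplying the raw scavenged power. All numbers below are assumed for illustration; real transducer and conditioning efficiencies vary widely.

```python
# A sketch of the scavenging path as a chain of losses (all values assumed):
# ambient source -> transducer -> power conditioning -> storage.
from functools import reduce

def delivered_power(source_uW: float, efficiencies: list[float]) -> float:
    """Multiply raw scavenged power by each stage's efficiency."""
    return reduce(lambda p, eta: p * eta, efficiencies, source_uW)

stages = [0.25,   # transducer conversion efficiency (assumed)
          0.70,   # rectification / conditioning (assumed)
          0.90]   # storage charging efficiency (assumed)

usable = delivered_power(1000.0, stages)  # 1 mW of raw vibration input
print(f"usable power: {usable:.1f} uW of 1000 uW scavenged")
```

Even with optimistic stage efficiencies, well under a fifth of the scavenged power reaches storage, which is why the chapter stresses matching the scavenging mechanism to the dominant ambient source.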
3.1 Electromagnetic Scavenging
These scavengers are the closest to established generators: by using a moving magnet in a coil, or vice versa, they generate voltage by induction (See Fig. 3.31). A large number of versions have been reported, from multi-axis linear actuation to rotational and pendulum-type actuation. Some generators use gear wheels to keep the generator operating under optimum conditions over a wide range of low acceleration states, for example in [35]. Microfabrication techniques derived from watchmaking, as well as semiconductor-related MEMS fabrication processes, have been employed to realize electromagnetic scavengers. Typical power ratings are in the lower mW range for micro-scale generators, scavenging accelerations in the sub-g range. Fig. 3.32 shows a state-of-the-art micro-generator, which delivers roughly 15 mW in a ~6.4 mm diameter package when an external proof mass is moved (e.g. by a human wearing a timepiece). During periods in which the generator is not active, power storage needs to be used to bridge the inactive time. Thus, electromagnetic micro-generators are best used in situations where permanent movement is available and, for example, where the movement itself is the parameter to be measured.
Fig. 3.31 The principle of a moving magnet electromagnetic scavenger based on micro fabrication techniques
Fig. 3.32 Mechanical energy scavenging generator (courtesy Kinetron)
3.2 Electrostatic Scavengers
Electrostatic scavengers use a movement-induced change in the capacitance of a precharged capacitor to generate power (See Fig. 3.33 and Fig. 3.34). The initial charging can take place by piezoelectric discharge, by a permanent radioactive emitter or by photogeneration [36]. Due to the low parasitic capacitance requirements, the conversion circuitry is quite challenging to design [37]. Therefore, electrostatic scavengers have not found as extensive use as electromagnetic scavengers.
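The energy conversion step can be illustrated with the constant-charge operating cycle: with charge held fixed, vibration reduces the capacitance, which raises the stored energy E = Q²/(2C), and the difference is the mechanical energy converted per cycle. The charge and capacitance values below are assumed for illustration.

```python
# Illustrative constant-charge cycle of an electrostatic scavenger
# (all numbers assumed). Stored energy at fixed charge: E = Q^2 / (2 C).

def energy_j(q_c: float, c_f: float) -> float:
    return q_c**2 / (2.0 * c_f)

q = 1e-9                          # precharge: 1 nC (assumed)
c_max, c_min = 100e-12, 10e-12    # plate excursion: 100 pF -> 10 pF (assumed)

gain = energy_j(q, c_min) - energy_j(q, c_max)
print(f"energy gained per cycle: {gain*1e9:.1f} nJ")
# At, say, 100 vibration cycles per second this corresponds to microwatts,
# consistent with the power class of micro-scavengers discussed in the text.
```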
Fig. 3.33 Vertically active electrostatic scavenger
Fig. 3.34 Laterally active electrostatic scavenger (courtesy IMEC)
3.3 Piezo-Scavengers
Piezoelectric scavengers use the movement of a piezoelectric bimorph to produce voltages that are compatible with off-the-shelf conversion components (See Fig. 3.35) [38]. They have been the target of extensive research. Piezo-scavengers have a power output of about 10 µW under resonant vibration conditions. The resonant conditions can be tuned by the size of the bimorph, as well as by lining the piezo-active material onto a controlled substrate so that it vibrates at the resonant frequency (See Fig. 3.36) [39]. Typical piezo-active materials include barium titanate (BaTiO3), lead titanate (PbTiO3) and lead zirconate titanate (Pb(Zr,Ti)O3); however, these materials are quite hard to process and need high sintering temperatures.
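The resonant tuning mentioned above can be sketched with a lumped spring-mass model: the bimorph behaves approximately as a resonator with f = (1/2π)·√(k/m), so the resonance can be matched to the ambient vibration by adjusting stiffness (geometry) or proof mass. The stiffness and mass values below are assumed.

```python
# Lumped-element sketch of resonant tuning for a piezo bimorph
# (dimensional values assumed, illustrative only).
import math

def resonant_freq_hz(k_n_per_m: float, m_kg: float) -> float:
    """Resonant frequency of an ideal spring-mass system."""
    return math.sqrt(k_n_per_m / m_kg) / (2.0 * math.pi)

k = 40.0    # effective stiffness, N/m (assumed)
m = 1e-4    # proof mass, 0.1 g (assumed)
print(f"resonance: {resonant_freq_hz(k, m):.0f} Hz")

# Quadrupling the proof mass halves the resonant frequency:
assert abs(resonant_freq_hz(k, 4 * m) - resonant_freq_hz(k, m) / 2) < 1e-9
```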
Fig. 3.35 Principle of a piezoelectric bimorph used as energy scavenger
Fig. 3.36 Energy scavenging from vibrating ambient environment, fabricated by MEMS technology (courtesy TIMA)
Macro-fibre composites (MFCs) laminate piezoelectric materials into a macro compound to leverage an improved voltage regime [40]. Piezo-scavengers based on MFCs have been commercialized and found use in a number of commercial sensing applications as well (See Fig. 3.37) [41]. For event-based energy supply, piezoelectric scavengers can be used very efficiently. As shown in Fig. 3.38, Enocean has commercialized a piezo-scavenging remote switch module, enabling a transmitter to send event monitoring signals to a receiver in order to initiate an activity (e.g. switch on the light, trigger building automation events, etc.).
Fig. 3.37 Wireless ambient sensor node using piezoelectric scavenging of ambient vibrations (courtesy Transparent Assets)
Fig. 3.38 Commercial piezo-scavenger used for remote switching (courtesy Enocean)
3.4 Solar Energy
Converting solar energy into electricity is currently part of a very strong movement in the "green energy" sector. For decades, solar energy has been used to power remote appliances and pocket calculators. With the increased interest in powering distributed network nodes for ambient intelligence applications, solar energy has found a new market. The principle is based on the generation of electron-hole pairs within a semiconductor material, usually silicon, upon the absorption of a photon with sufficient energy (See Fig. 3.39).
Fig. 3.39 The principle of the solar cell energy source
By connecting individual modules in series or in parallel to a voltage/current regulator, the charging of a battery can be done very efficiently, with power densities of 10-14 mW/cm2 under direct exposure to sunlight. The benefit of high power density is unfortunately limited by the fact that direct sunlight is present only for a small fraction of the time; additionally, the deployment of sensor nodes equipped with solar cells remains challenging. Enocean has commercialized a sensor node with a solar energy supply for full autonomy in building automation applications (See Fig. 3.40) [42].
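The trade-off between high peak power density and limited sun hours can be checked with simple arithmetic. The cell area and the number of hours of direct sun below are assumptions for illustration; only the 10-14 mW/cm² figure comes from the text.

```python
# Back-of-envelope daily energy budget for a small solar-powered node.
# Power density from the text; cell area and sun hours are assumed.

def daily_energy_j(power_density_mw_cm2: float, area_cm2: float,
                   sun_hours: float) -> float:
    """Energy harvested per day, in joules."""
    return power_density_mw_cm2 * 1e-3 * area_cm2 * sun_hours * 3600.0

e_day = daily_energy_j(12.0, 4.0, 3.0)   # 4 cm^2 cell, 3 h of direct sun
avg_power_mw = e_day / 86400.0 * 1e3     # spread over the full 24 h day
print(f"{e_day:.0f} J/day, average {avg_power_mw:.1f} mW continuous")
```

Averaged over a full day, the effective continuous power is roughly half of the peak figure per cm², which motivates the buffering storage device discussed at the start of this section.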
3.5 Thermal Scavengers
Thermal scavengers rely on the Seebeck effect, converting a temperature difference into electricity. When the junction of two dissimilar conductors is subjected to a thermal gradient, it generates a voltage difference due to the different work functions of the associated materials. By connecting a multitude of these mV-generating contacts in series, a thermoelectric module can be realized (See Fig. 3.41). MEMS technology has recently been used here to maximize the number of individual thermo-contacts per area. Improvements in the materials used have resulted in modules made with p- and n-type semiconductor materials (e.g. bismuth telluride (Bi2Te3), lead telluride (PbTe) and iridium antimonide (IrSb3)) that have high thermoelectric efficiency. Today's thermoelectric scavengers achieve power densities in the mW/cm2 range using, for example, Bi2Te3 as the active material, with a thermal difference of 5 K sufficient to initiate operation [43]. Miniature generators have been demonstrated (Fig. 3.42) by companies such as Micropelt and Thermo-Life, and have found use, from as early as 1998, in consumer products (Fig. 3.43) [44].
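The series-connection principle can be sketched as follows: the open-circuit voltage of a module is V = n·α·ΔT, and the maximum power into a matched load is P = V²/(4R). The Seebeck coefficient, couple count and internal resistance below are assumed, Bi2Te3-like illustration values only.

```python
# Hedged sketch of a thermoelectric module: n series-connected couples,
# open-circuit voltage V = n * alpha * dT, matched-load power V^2 / (4 R).
# All module parameters are assumed for illustration.

def teg_voltage(n_couples: int, seebeck_v_per_k: float, dt_k: float) -> float:
    """Open-circuit voltage of a series-connected thermocouple module."""
    return n_couples * seebeck_v_per_k * dt_k

def teg_matched_power(v_open: float, r_internal_ohm: float) -> float:
    """Maximum power delivered into a load matched to the internal resistance."""
    return v_open**2 / (4.0 * r_internal_ohm)

v = teg_voltage(100, 400e-6, 5.0)   # 100 couples, 400 uV/K per couple, dT = 5 K
p = teg_matched_power(v, 10.0)      # 10 ohm internal resistance (assumed)
print(f"V_oc = {v:.2f} V, P_max = {p*1e3:.1f} mW")
```

With these assumed values the module lands in the mW class at a 5 K gradient, consistent with the figures quoted in the text.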
Fig. 3.40 Solar scavenging for energy supply to a sensor node (courtesy of Enocean)
Fig. 3.41 Principle of a thermoelectric scavenger
Fig. 3.42 Thermoelectric scavenger with high efficiency (courtesy Thermo Life)
Fig. 3.43 Watch using thermoelectric scavenging for supply [44] (courtesy SEIKO)
3.6 Radioactive Generators
Using low intensity radioactive sources, electrostatic and piezoelectric generators have been proposed [45, 46]. Due to the perceived hazard of handling even minute amounts of ionizing matter, and due to regulatory aspects, none of them has attracted the community's interest beyond exotic applications such as space exploration.
4 Energy Storage Systems
Energy storage is the single most critical issue when conceiving an ambient intelligent, autonomous environment. Even low power transmitters require milliwatts to transmit their signals to a receiver; with small energy storage units this will drain the supply rapidly. Today, autonomous (non-scavenging) systems will run anywhere between hours and months. With energy scavenging, this can be extended significantly. However, due to increased complexity, high duty-cycle requirements and long range transmission, certain applications are still power hungry, leaving truly autonomous systems inadequate in many areas. Regarding the range of available energy storage units, today's applications are still limited to batteries and capacitors. Novel developments, such as high density electrolytes, novel dielectrics with both high permittivity and high breakdown voltage, and increased reactive surfaces, have pushed storage capacities up by a factor of 10 in the past decade. However, additional improvements seem to have slowed down.

4.1.1 Batteries
Modern lithium ion and lithium polymer batteries can accommodate energy densities of ~0.7 MJ/kg. Recharging is not affected by the memory effects observed in NiCd or NiMH cells. However, recharging still requires adequate electronic control to prevent overheating or even combustion [47].

4.1.2 Super-capacitors
Super- and ultra-capacitors are based on electrochemical double layers with a nanoporous dielectric providing the high dielectric constant. In contrast to, for example, lead acid batteries with ~0.1 MJ/kg, both super- and ultra-capacitors boast 5× to 10× higher energy storage capabilities. Aside from improving the dielectric constant, the permissible voltages can also be increased. This approach, pioneered by EEstor [48], has been reported to achieve energy storage densities of 1 MJ/kg. This kind of ultra-capacitor would eliminate concerns about the long term autonomy of sensor nodes for ambient intelligence systems.
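The energy-density figures quoted in this section (MJ/kg) can be translated into node runtime with straightforward arithmetic. The 1 mW average draw and the 10 g storage mass below are assumed for a hypothetical duty-cycled node; the densities themselves come from the text.

```python
# Converting the chapter's storage densities (MJ/kg) into runtime for a
# hypothetical sensor node: 10 g of storage, 1 mW average draw (both assumed).

def runtime_days(energy_mj_per_kg: float, mass_kg: float,
                 avg_power_w: float) -> float:
    """Runtime in days for an ideal store discharged at constant power."""
    return energy_mj_per_kg * 1e6 * mass_kg / avg_power_w / 86400.0

node_power = 1e-3   # 1 mW average (assumed duty-cycled node)
mass = 0.01         # 10 g of storage (assumed)

for name, density in [("lead acid", 0.1), ("Li-ion", 0.7),
                      ("claimed ultra-capacitor", 1.0)]:
    print(f"{name:>24}: {runtime_days(density, mass, node_power):6.1f} days")
```

The spread from days to months for the same mass budget shows why storage density, not transducer choice, dominates node autonomy.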
4.2 Novel Approaches to Energy Storage

4.2.1 Carbonization-based Battery Structures
Micro-manufacturing of polymers, carbonized in a proprietary process, is the basis of the carbon micro-battery approach (See Fig. 3.44) [49]. Controlled structuring of the polymers, for example by lithography or molding, creates fractal surfaces that maximize the electrode area. A metric of 400 Wh/l is claimed as a goal for commercially viable battery technology based on carbonization.

Fig. 3.44 Carbonized polymer structures providing very high electrode area for novel micro batteries (courtesy Carbon Micro Batteries, LLC)

Fig. 3.45 Micro fuel cell (15×10×1 mm) in micro-structured layer technology (courtesy Fraunhofer IZM)
4.2.2 Fuel Cells
Micro fuel cells have been discussed as potential candidates to supply mobile applications with energy. These cells can be fabricated in miniature footprints [50], providing energy supply densities in the 0.1 W/cm2 range, equivalent to roughly double the density of today's lithium polymer cells. However, they need a continuous supply of fuel (e.g. hydrogen, methanol, ethanol) to operate, and tank sizes add to the overall volume. Fraunhofer IZM, as shown in Fig. 3.45, has demonstrated a combination of a miniature hydrogen tank and micro fuel cell, providing a 3× increase in energy density as compared to an A-cell primary battery. As the fuel cell alone is not an energy store, but a reactive converter, any size of fuel tank can be accommodated, benefiting from the up to 8 MJ/kg energy density of hydrogen. This would provide for years of autonomous operation, especially for stationary sensor nodes in an ambient intelligence network, in contrast to months with current battery technology. Micro energy storage will likely remain the Achilles' heel of truly autonomous ambient intelligence systems for a while to come. However, lower power electronics, higher storage capacity, rechargeable energy supplies and efficient scavenging and conversion concepts will ultimately pave the way towards true autonomy.
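The "years versus months" contrast above can be sanity-checked with a rough runtime comparison. The node draw, fuel mass and the 40 % converter efficiency are assumptions for illustration; the 8 MJ/kg (hydrogen) and ~0.7 MJ/kg (lithium polymer) densities come from the text.

```python
# Rough check of the years-vs-months claim: runtime of a node with an assumed
# 0.5 mW average draw from 5 g of stored energy, hydrogen (with an assumed
# 40 % conversion efficiency) versus a Li-polymer battery.

SECONDS_PER_DAY = 86400.0

def runtime_months(mj_per_kg: float, mass_kg: float,
                   efficiency: float, draw_w: float) -> float:
    seconds = mj_per_kg * 1e6 * mass_kg * efficiency / draw_w
    return seconds / SECONDS_PER_DAY / 30.0

battery = runtime_months(0.7, 0.005, 1.0, 0.5e-3)
hydrogen = runtime_months(8.0, 0.005, 0.4, 0.5e-3)
print(f"battery: {battery:.1f} months, hydrogen tank: {hydrogen:.1f} months")
```

Even after heavy conversion losses, the hydrogen store outlasts the battery by several times, and the margin grows directly with tank size since the fuel cell itself is only a converter.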
5 Conclusions
This chapter has provided an overview of Micro-Electro-Mechanical System (MEMS) sensors, particularly from the perspective of providing the low power sensing devices that will support the development of Ambient Intelligence (AmI). The sensed parameters described include acceleration, rotation (gyroscopes), pressure, vibration, shock, humidity, sound (i.e. using microphones), and bio- and chemo-electrical signals. Two primary techniques are employed to fabricate the MEMS devices, namely bulk micro-machining (BMM) and surface micro-machining (SMM). MEMS sensor devices can be fabricated independently within silicon material, or they can be integrated with the circuitry used to condition the output signal of the sensor. The ability to provide low power operation in a miniaturised system is a particular advantage of MEMS sensors. However, the challenge of AmI is such that additional approaches will be needed to facilitate autonomy; that is, long-term operation without human intervention. Amongst the possible solutions to this challenge is the development of energy scavenging and energy storage technologies, also employing MEMS, which will permit sensor systems to collect energy during their operational lives and, in this manner, extend their lifetimes significantly.
References

1. S. Bahadori et al., "Towards Ambient Intelligence For The Domestic Care Of The Elderly", Ambient Intelligence, Springer, 2006, ISBN 978-0-387-22990-4, pp. 15–38
2. M. Streitz, "The Disappearing Computer", Communications of the ACM, 48 (3), pp. 32–35, ISSN 0001-0782
3. M. Woitag et al., "Bewegungserfassung und Bewegungsüberwachung im häuslichen Umfeld", Proc. 1st Ambient Assisted Living, Berlin, 2008, pp. 249–252
4. A. Hein et al., "Activity Recognition for Ambient Assisted Living: Potential and Challenges", Proc. 1st Ambient Assisted Living, Berlin, 2008, pp. 263–267
5. W.R. Heinzelmann et al., "Energy-efficient communication protocol for wireless microsensor networks", System Sciences 2000, Jan. 2000, vol. 2, p. 10
6. W. Hascher, "nanoNET: sichere Verbindung für Sensor-/Aktor-Netzwerke der Zukunft", Elektronik, 2002, H. 22, S. 38–48
7. G. Schulte, "Novel wireless power supply system for wireless communication devices in industrial automation systems", IECON 02, pp. 1358–1362, 2002
8. A. Karalis, "Efficient wireless non-radiative mid-range energy transfer", Annals of Physics, 323 (2008), pp. 34–48
9. P. Mitcheson, "Power Processing Circuits for MEMS Inertial Energy Scavengers", Proc. DTIP 2006, Stresa, April 2006
10. S. Büttgenbach, "Mikromechanik", Teubner, 2nd edition, 1994, ISBN 978-3519130710
11. E. Jung et al., "Packaging of Micro Devices for Automotive Applications: Techniques and Examples", AMAA 2003, ISBN 978-3540005971
12. B. Kloeck, "Study of electrochemical etch-stop for high-precision thickness control of silicon membranes", IEEE Transactions on Electron Devices, vol. 36, issue 4, part 2, pp. 663–669, 1989
13. H.-P. Trah, R. Müller-Fiedler, "Mikrosystemtechnik im Automobil", Physik Journal, Nov. 2002/1, ISSN 1617-9439, pp. 39–44
14. G.T.A. Kovacs et al., "Bulk micromachining of silicon", Proceedings of the IEEE, vol. 86, issue 8, pp. 1536–155, Aug 1998
15. J.M. Thevenoud et al., "Fabrication of 3D Packaging TSV using DRIE", Proc. DTIP 2008, Nice, 2008
16. S. Knies et al., "MEMS packaging for automotive applications", DTIP 2005, Montreux, June 2005
17. J. Leib et al., "New wafer-level-packaging technology using silicon-via-contacts for optical and other sensor applications", ECTC 2004, pp. 843–847
18. M. Feldmann, "Wafer-Level Camera Technologies Shrink Camera Phone Handsets", Photonics Spectra, August 2007
19. C. Hierold et al., "A pure CMOS surface-micromachined integrated accelerometer", Sensors and Actuators A: Physical, vol. 57, issue 2, November 1996, pp. 111–116
20. http://www.analog.com/en/content/0,2886,764%255F%255F7537,00.html
21. Chau et al., "An integrated force-balanced capacitive accelerometer for low-g applications", Sensors and Actuators A, vol. 54, issues 1–3, June 1996, pp. 472–476
22. Wiemer et al., "Bonding and reliability for 3D mechanical, optical and fluidic systems", Smart System Integration, Paris, 2007
23. http://www.analog.com/library/analogdialogue/archives/37-03/gyro.html
24. Eaton et al., "Comparison of Bulk- and Surface-Micromachined Pressure Sensors", Micromachined Devices and Components, Proc. SPIE, vol. 3514, p. 431
25. http://www.intersema.ch
26. http://www.stanleyassociates.com/capabilities/AEandT/No-Power%20MEMS%20Shock%20Sensors.pdf
27. P. Rombach, M. Müllenborn, U. Klein, R. Frehoff, "A low voltage silicon condenser microphone for hearing instrument applications", Joint ASA/EAA Meeting 1999, Berlin, Germany, 14/03-19/99, No. 2AEA-3
28. J. Van Doorn, "Microphone with improved sound inlet port", US Patent No. 7072482
29. X. Chen et al., "BaZrO3 Thin Films For Humidity Gas Sensor", MRS Bulletin, 2007
30. C. Imawan et al., "Structural and gas-sensing properties of V2O5–MoO3 thin films for H2 detection", Sensors and Actuators B: Chemical, vol. 77, issues 1–2, 15 June 2001, pp. 346–351
31. Chinowsky et al., "Performance of the Spreeta 2000 integrated surface plasmon resonance affinity sensor", Sensors and Actuators B, 6954 (2003), pp. 1–9
32. T. Misna et al., "Chemicapacitive microsensors for chemical warfare agent and toxic industrial chemical detection", Sensors and Actuators B: Chemical, vol. 116, issues 1–2, 28 July 2006, pp. 192–201
33. http://www.seacoastscience.com/Downloads/Seacoast_White_Paper_DEC%202006.pdf
34. W. Baumann et al., "Microelectronic sensor system for micro-physiological application on living cells", Sensors and Actuators B, 55 (1999), pp. 77–89
35. P. Knapen, "Electric power supply system for portable miniature size power consuming devices", US Patent No. 4644246
36. L. Amit et al., "Radioisotope Powered Electrostatic Microactuators and Electronics", TRANSDUCERS 2007, June 2007, pp. 269–273
37. P. Mitcheson et al., "Power Processing Circuits for MEMS Inertial Energy Scavengers", Proc. DTIP 2006, Stresa, 2006
38. G.K. Ottman et al., "Adaptive piezoelectric energy harvesting circuit for wireless remote power supply", IEEE Transactions on Power Electronics, vol. 17, pp. 669–676, 2002
39. M. Marzencki, Y. Ammar, S. Basrour, "Integrated power harvesting system including a MEMS generator and a power management circuit", Sensors and Actuators A, 2008
40. H. Sodano, "A Review of Power Harvesting from Vibration using Piezoelectric Materials", The Shock and Vibration Digest, 36(3), pp. 197–205, 2004
41. www.transparentassets.com
42. W. Granzer et al., "A modular architecture for building automation systems", Proc. 6th IEEE WFCS, 2006, pp. 99–102
43. I. Stark et al., "Low power thermoelectric generator", US Patent No. 6958443
44. S. Kotanagi, "Thermoelectric generation unit and portable electronic device using the unit", US Patent No. 6560167
45. R. Duggirala et al., "An autonomous self-powered acoustic transmitter using radioactive thin films", Ultrasonics Symposium 2004, vol. 2, pp. 1318–1321
46. A. Lal et al., "Pervasive power: a radioisotope-powered piezoelectric generator", IEEE Pervasive Computing, March 2005, vol. 4, issue 1, pp. 53–61
47. G. Chagnon, P. Allen, K. Hensley, K. Nechev, S. Oweis, R. Reynolds, A. Romero, T. Sack, M. Saft, "Performance of SAFT Li-ion batteries for high power automotive application", Proc. Electric Vehicle Symposium EVS-18, Berlin, October 2001
48. http://pesn.com/2007/01/17/9500448_EEStor_milestones/
49. B. Park et al., "A Case for Fractal Electrodes in Electrochemical Applications", J. Electrochem. Soc., vol. 154, issue 2, pp. P1–P5 (2007)
50. R. Hahn et al., "Development of a planar micro fuel cell with thin film and micro patterning technologies", Journal of Power Sources, vol. 131, issues 1–2, 14 May 2004, pp. 73–78
Chapter 4
Silicon Technologies for Microsystems, Microsensors and Nanoscale Devices

Thomas Healy
Abstract This chapter provides a brief overview of the most relevant current silicon processing technologies. A number of high potential future techniques are also presented. Systems based upon silicon are almost ubiquitous in today's world; as a material, silicon is required to accommodate the growing needs of an increasingly demanding society. A consequence of this is a constant drive for cheaper solutions in providing these systems, which further supports a culture of innovation in silicon technologies. The selected future techniques described here, which have been developed to answer specific challenges in integrating electronic systems into the real-world environment, provide an insight into the ways in which silicon processing is being transformed. They also represent only a sample of the current innovative research in the field of silicon processing.

Keywords Silicon, Micro-Electro-Mechanical Systems (MEMS), Sensors, Embedded Systems, Wireless Sensor Networks, Smart Dust, Ubiquitous Computing, Ambient Intelligence
1 Introduction
Historically, the first successful fabrication techniques produced single transistors on individual silicon die (1–2 mm2 in size) [1]. Early integrated circuits, fabricated at Texas Instruments [2], included several transistors and resistors forming simple logic gates and amplifier circuits; today, millions of transistors can be created on a single die to build extremely powerful circuits. In this chapter, a basic insight into the processes involved in conventional planar integrated-circuit (IC) fabrication will be presented, including how these processes have been developed to accommodate the emerging requirements of an increasingly technologically demanding society.
Tyndall National Institute, Cork, Ireland
K. Delaney, Ambient Intelligence with Microsystems, © Springer 2008
As silicon is the primary material used in the IC industry today, it will be the main focus of this review. From an economic point of view, the fact that silicon is an abundant element in nature provides for a very cheap starting material in conventional IC processing. It also brings major processing advantages: it is easily oxidized to form silicon dioxide, a high quality insulator and an excellent barrier layer for selective diffusion processing. These factors make silicon the dominant material used in the IC industry today. In recent years, due to the rapid progress of very large-scale integrated (VLSI) circuits, complementary metal-oxide-semiconductor (CMOS) devices have been scaled down continuously, while CMOS circuits have correspondingly increased in functionality. This is echoed in Moore's Law, which describes an important trend in the history of computer hardware: the number of transistors that can be inexpensively placed on an integrated circuit increases exponentially, doubling approximately every two years [3, 4]. If this trend were to continue, circuits would in theory eventually be built in which each molecule has its own designated place, and we would have completely entered the era of molecular-scale production. Of course, this carries enormous challenges. Therefore, it is not surprising that scientists are researching new and varied approaches to increasing circuit density, including the use of new high-k dielectric materials [5] and lithographic techniques [6] for use in large scale IC production. In this chapter, some of the more important challenges associated with silicon and conventional planar processing techniques will be discussed, along with how these technologies are being adapted and evolved in order to overcome them. Non-conventional methods for silicon processing will also be described, including in particular spherical silicon ICs and the world's first electronically functional fibre [7–10].
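The doubling-every-two-years trend stated above can be written as a simple projection, N(t) = N0 · 2^((t − t0)/2). This is a naive extrapolation for illustration only; the 1990 starting count is an assumed round number, not a historical datum.

```python
# Moore's Law as a naive projection (illustrative only):
# N(t) = N0 * 2^((t - t0) / 2), doubling every two years.

def transistor_count(n0: float, year0: int, year: int) -> float:
    """Projected transistor count, assuming doubling every two years."""
    return n0 * 2 ** ((year - year0) / 2)

# Starting from an assumed 1 million transistors in 1990:
for year in (1990, 2000, 2008):
    print(f"{year}: ~{transistor_count(1e6, 1990, year):.2e} transistors")
```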
2 Conventional CMOS Device Fabrication Processes
Conventional integrated circuits (ICs) are primarily fabricated on flat silicon wafers, typically 100 mm–300 mm in diameter, which are cut from a pure silicon ingot, polished to a smooth finish and heat-processed. Fabricating silicon wafers is a non-trivial process that begins with the creation of rod-form polycrystalline semiconductor material. The rods are precisely cut into ingots, which are cleaned and dried and subsequently manufactured into a large single crystal by melting them in a quartz crucible. The crystal then undergoes an elaborate process of grinding, etching and cleaning at its surface; this includes cutting, lapping and polishing it to a mirror-smooth finish and then heat-processing the final wafers. Due to semiconductor resistivity requirements, approximately one-third of the original rod is ultimately of high enough quality to be used in making integrated circuits. The remainder can often be re-processed and used for products that do not require such high purity, such as silicon solar cells [11]. This ingot fabrication process is independent of the device fabrication protocol, and IC foundries are generally supplied by independent wafer manufacturers. Silicon is the material of choice for semiconductor device fabrication for many reasons, but in particular for the processing advantages given by its easy oxidation to a stable silicon dioxide (SiO2), which can be used as an insulator, a surface passivation layer and a superior gate dielectric. There are five primary CMOS technologies for IC fabrication: N-Well, P-Well, Twin Well, Triple Well and SOI (Silicon on Insulator) processes [12, 13]. For the purposes of clarity and coherence, this chapter will focus upon the N-Well process.
2.1 The N-Well Process
The following fabrication sequence illustrates the basic steps required to create a conventional CMOS inverter. Beginning with a p-type silicon wafer, a thermal oxidation process is performed to create a thin layer of oxide on top of the wafer (See Fig. 4.1). A photo-resist material is spun onto the wafer and a photolithographic process is performed (See Fig. 4.2). This involves exposing the wafer to a dose of UV light through a previously designed n-well mask. The photo-resist that is exposed to the light becomes soluble to the photo-resist developer, while the unexposed areas remain insoluble. This is known as positive resist; its primary purpose here is to define the site-specific regions for the implant dopants necessary for making IC devices (see Fig. 4.3). Using an organic photo-resist developer, the exposed area is stripped (as is shown in Fig. 4.3).
Fig. 4.1 The Thermal Oxidation Process
Fig. 4.2 The Photolithographic Process
Fig. 4.3 Developing the photo-resist
Fig. 4.4 Silicon Oxide Etching
Fig. 4.5 The Formation of the n-well
Fig. 4.6 Stripping the Oxide
The exposed oxide area is then etched using hydrofluoric acid (HF) and the photo-resist material is subsequently removed (see Fig. 4.4). This leaves a specific area of the wafer exposed for a subsequent implant. An n-well is formed using either a diffusion or an ion implantation process (See Fig. 4.5). Ion implantation is a doping process whereby ionised dopant molecules are accelerated through an electric field and implanted into the wafer at a depth specific to the implantation energy. Diffusion doping begins with the deposition of an impurity material over the specific site; at high temperatures (900°C–1200°C) the impurity atoms diffuse into the wafer lattice, creating the desired n-well. The remaining oxide is stripped using hydrofluoric acid (as is shown in Fig. 4.6).
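The sequence of steps up to this point can be sketched as a toy "process recorder" that tracks the named layers on the wafer after each operation. This is purely illustrative bookkeeping of the steps in Figs. 4.1-4.6, not a process simulator, and the layer names are informal.

```python
# Toy bookkeeping of the N-well steps (Figs. 4.1-4.6): the wafer is modelled
# as a stack of named layers. Illustrative only; not a process simulator.

wafer = ["p substrate"]

def deposit(stack: list, layer: str) -> None:
    """Oxidation / spin-on steps add a layer on top."""
    stack.append(layer)

def strip(stack: list, layer: str) -> None:
    """Develop / etch steps remove a named layer."""
    stack.remove(layer)

deposit(wafer, "SiO2")          # thermal oxidation            (Fig. 4.1)
deposit(wafer, "photoresist")   # spin-on resist + exposure    (Fig. 4.2)
strip(wafer, "photoresist")     # develop and strip resist     (Figs. 4.3/4.4)
strip(wafer, "SiO2")            # HF etch of the exposed oxide (Figs. 4.4/4.6)
wafer.insert(1, "n well")       # implantation or diffusion    (Fig. 4.5)
print(wafer)                    # ['p substrate', 'n well']
```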
Another thin layer of oxide is deposited to create the thin gate oxide of the final device, and a layer of polysilicon is deposited using a chemical vapour deposition (CVD) process (See Fig. 4.7). Using photoresist and the lithographic techniques previously described, the polysilicon layer is patterned to create the device poly gate (Fig. 4.8). An oxide layer is deposited and patterned to define the n diffusion areas (See Fig. 4.9). The exposed areas are implanted using diffusion or ion implantation, creating the source/drain regions of the transistor (as shown in Fig. 4.10).
Fig. 4.7 Deposition of a Thin 'Gate' Layer of Oxide
Fig. 4.8 Polysilicon patterning
Fig. 4.9 Definition of n diffusion areas
Fig. 4.10 Implantation of exposed areas
Fig. 4.11 The p diffusion step
Fig. 4.12 Patterned field oxide, metal layer and passivation
The oxide is stripped and the process repeated for the p diffusion areas (See Fig. 4.11). Finally, a field oxide is deposited and patterned, followed by a patterned metal layer to create the final device. Depending on the final application, a passivation layer may also be deposited over the device for protection (See Fig. 4.12). The final device, an inverter, is the basic building block behind most integrated systems. One of the key areas of interest in the IC industry today is the reduction of the footprint of individual components to allow increased device throughput and thus a reduction in the cost of overall systems. The following section investigates a select number of techniques being developed to support the realization of this goal.
3 Silicon CMOS Processing Evolution
In this section, a number of IC processing techniques, including electron beam lithography, silicon-on-insulator, silicon fibre technology and spherical silicon processing, are reviewed. While all of these techniques have differing objectives, this section illustrates that the process of adapting and evolving silicon IC processing is constant, driven by the need to support the creation of more complex, but also more consistently socio-economically acceptable, systems.
3.1 Electron Beam Lithography
Electron beam (e-beam) lithography is a process in which a beam of electrons is used to generate a pattern on the surface of a wafer below the resolution limit of conventional photolithography (< 200 nm). The primary advantage of this technique is its ability to overcome the diffraction limit of light and create feature sizes in the sub-micron range. It offers a resolution of ~20 nm, compared with the ~1 µm typical of conventional photolithography [14, 15]. There is also no need for mask sets, which reduces associated costs and time delays. This form of lithography is widely used in the research arena, but has yet to become a standard technique in industry for writing features directly; instead, it is employed mainly to create the exposure masks that are used with conventional photolithography processes. This is due to its lack of speed in comparison to conventional photolithography [16]: during the e-beam process the beam is scanned serially across the surface of the wafer, which is slow compared with a parallel technique like photolithography (the current standard), in which the entire surface is patterned at once.
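The serial-versus-parallel throughput argument above can be illustrated with order-of-magnitude arithmetic. The e-beam write rate below is an assumed round number for illustration, not a figure from the text or from any specific tool.

```python
# Order-of-magnitude comparison of serial e-beam writing with parallel optical
# exposure, using an assumed write rate, to illustrate why e-beam is reserved
# for masks rather than production wafers.

def ebeam_write_hours(area_mm2: float, rate_mm2_per_hour: float) -> float:
    """Serial write time for a given area at a given areal rate."""
    return area_mm2 / rate_mm2_per_hour

wafer_area = 3.14159 * (100.0 ** 2)   # 200 mm wafer: ~31,416 mm^2
hours = ebeam_write_hours(wafer_area, 1000.0)  # 1000 mm^2/h (assumed rate)
print(f"e-beam: ~{hours:.0f} h per wafer; an optical stepper exposes a "
      f"wafer in minutes")
```

A full wafer takes tens of hours serially under these assumptions, whereas writing a mask once and replicating it optically amortizes that cost over every exposed wafer.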
3.2 Silicon on Insulator (SOI)
Silicon-on-insulator (SOI) is a technology platform based upon the use of an insulator layer, typically silicon dioxide (SiO2), that is sandwiched between a thick-handle silicon wafer and a thin single crystal silicon device layer, as is shown in Fig. 4.13. The initial motivation for this technology was its low parasitic capacitance and radiation hard properties; this is due to the isolation provided by the buried oxide layer [17]. However in more recent years, its ability to create isolated devices at higher density than conventional CMOS processing, has lead it to become a candidate technology that may be central to the future of VLSI technology. Active devices and circuits are created in the top silicon thin-film of the SOI structure device. The buried oxide layer provides isolation from the substrate and this isolation reduces the capacitance of the junctions in the structure. This subsequently helps to reduce the amount of electrical charge that a transistor would have to move during a switching operation. The devices operate faster and they are capable of switching using less energy. SOI circuits can be up to 15 percent faster and consume 20 percent less power than the equivalent conventional bulk complementary metal-oxide semiconductor (CMOS)-based IC’s [18–19]. The SOI process can be achieved using a number of different techniques; however, the two processes most widely used are Separation by Implantation of Oxygen (SIMOX) [20] and Thin Film Silicon Layer Buried Oxide Silicon Substrate
Fig. 4.13 Cross-section of an SOI Wafer
88
T. Healy
smart-cut [21]. A comparison of the structural differences between SOI and conventional ICs can be seen in Fig. 4.14. The SIMOX process involves an oxygen implant into the wafer. The profile of the implanted oxygen dopants is Gaussian-shaped, with its peak some distance below the surface [19]. Through a subsequent high temperature anneal, the oxygen dopants react with silicon to form a buried oxide layer around the peak of the oxygen profile. As a result, a single-crystal silicon thin-film is formed above the buried oxide layer. The smart-cut technology takes a different approach, using two separate wafers. A thick thermal oxide layer is grown on the device wafer (to be used as the buried oxide in the final SOI structure) and a hydrogen ion (H+) implant at a dose of 2 × 10¹⁶ – 1 × 10¹⁷ cm⁻² is performed. After cleaning the device wafer, a second ‘handle wafer’ is introduced and both wafers are bonded together. The wafers are then annealed in the range of 400–600°C. The implanted hydrogen atoms, at a predetermined depth below the oxide layer, gather to form blisters. If the amount of implanted hydrogen is sufficient, this blistering causes flaking of the whole silicon layer. As a result, a thin film of silicon with a thickness identical to the depth of the hydrogen implant is left on top of the buried oxide. The smart-cut process is finalized by chemical mechanical polishing (CMP) to smooth the wafer surface.
Fig. 4.14 A cross-section of an SOI wafer and a conventional IC
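The speed and power benefit quoted for SOI follows from the first-order CMOS dynamic power relation, P = α·f·C·V². A minimal sketch of that relation; the clock rate, supply voltage, activity factor and capacitance values below are illustrative assumptions, not figures from the chapter:

```python
def dynamic_power_w(activity: float, freq_hz: float,
                    c_farads: float, vdd: float) -> float:
    """First-order CMOS dynamic power: P = alpha * f * C * V^2."""
    return activity * freq_hz * c_farads * vdd ** 2

# Assumed, illustrative operating point: 1GHz clock, 1.2V supply, 0.25 activity.
bulk = dynamic_power_w(0.25, 1e9, 100e-15, 1.2)  # 100 fF switched capacitance
soi = dynamic_power_w(0.25, 1e9, 80e-15, 1.2)    # ~20% lower junction capacitance
saving = 1 - soi / bulk                           # fractional power reduction
```

Because dynamic power is linear in the switched capacitance, a 20% reduction in junction capacitance translates directly into a 20% power saving at the same frequency and supply voltage.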
4 Silicon Technologies for Microsystems
3.3
Silicon Fibre Technology
The trend in our knowledge-based society demands not only more powerful circuits and systems; it also requires the integration of intelligence into our everyday environment. This immersion of microelectronic systems in our world is a fundamental consequence not solely of technology, but also of human need (and psychology). As silicon is the core element of most intelligent systems, new methods must continually be developed to increase function and to embed these systems in a nonintrusive manner. In recent years major advances have been made in the area of wearable and ambient electronics applications [22–25]. To date, the state of the art in integrating electronics into wearable systems typically consists of mounting previously packaged integrated electronic components onto a textile substrate, interconnecting them by means of conductive fibres and enclosing them in a protective material/casing [27]. One of the more noteworthy recent research initiatives in this area takes the form of an electronically functional silicon fibre technology; the form factor supports subsystems capable of being seamlessly integrated into a textile format [26]. The concept of the electronically functional fibre (EFF) has the potential to change the way advanced circuits and systems can be designed and fabricated in the future. Its aim is to enable large flexible integrated systems for wearable applications by building the functional fibres with single crystal silicon transistors at their core. The approach uses the conventional planar technology previously discussed to manufacture extremely powerful circuits and systems in long narrow fibres, which then have the potential to create the necessary fundamentals for the integration of information technology into everyday objects, and in particular into high-tech textile products. The primary difficulty with integrating a silicon device into a flexible garment is the rigid nature of conventional silicon ICs.
However, research to examine the mechanical properties of silicon microstructures has revealed the useful fact that “silicon structures become extremely flexible when sufficiently thin” [28–29]. This provides a technology enabler; however, there remain a number of further constraints that flexible electronics must address. Typical issues are summarised in Table 4.1.
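The quoted observation has a simple mechanical basis: the bending stiffness of a plate scales with the cube of its thickness. A sketch of that scaling; the Young's modulus and Poisson ratio used are typical handbook values for silicon, not figures from the chapter:

```python
def flexural_rigidity(e_pa: float, t_m: float, nu: float = 0.28) -> float:
    """Plate bending stiffness per unit width: D = E * t^3 / (12 * (1 - nu^2))."""
    return e_pa * t_m ** 3 / (12 * (1 - nu ** 2))

E_SI = 169e9  # approximate Young's modulus of silicon (assumed handbook value)
wafer = flexural_rigidity(E_SI, 525e-6)   # full-thickness handle wafer
film = flexural_rigidity(E_SI, 0.34e-6)   # thinned 0.34um device layer
ratio = wafer / film  # stiffness drops by ~(525/0.34)^3, over nine orders of magnitude
```

Thinning the silicon from a 525µm wafer to a 0.34µm film therefore reduces its resistance to bending by roughly a factor of a billion, which is why "sufficiently thin" silicon behaves like a flexible foil.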
3.3.1
Flexible Silicon Fibre Processing
The following section outlines a process developed using CMOS processing techniques to create a flexible, electronically functional silicon device. To begin with, an SOI structure with a 0.34µm thick top single crystal silicon device layer over a 0.4µm thick buried oxide layer on a 525µm silicon handle wafer was used. Step 1: Silicon islands were defined as the active area by first covering the wafer with a photo-resist material, exposing the wafer with UV light through the reticle; the exposed areas of the photoresist were subsequently removed
Table 4.1 Challenges for Wearable Electronics
Constraints for Flexible Electronics
• Impact of three-dimensional flexure of fibres and fibre meshed assemblies, including electrical, mechanical and physical effects due to bending, stretching, torsion and aging (including long time and short time dependencies), as well as mechanical and electrical hysteresis effects.
• Integration of electronics in segments to avoid deformation effects.
• Identifying the regions that are subject to lower deformation under dynamic operation using humanoid simulations.
• Impact of chemical effects due to cleaning.
• Impact of process handling in relation to the physical characteristics of the EFF (i.e. fibre length and fibre thickness/diameter, fibre protection, finishing, fibre structure, strength, interlacing, fibre lifetime, handling).
• Impact of high humidity environments such as those encountered in washing, drying and ironing processes.
• Impact of environmental conditions in general in relation to the physiology of fibres, textiles (e.g. medical devices, industrial textiles, personal protection equipment, construction and automotive textiles, home textiles) and clothing (perspiration resistance, antibacterial, antistatic finish, smog-protection, fibre appearance, abrasion resistance).
Fig. 4.15 Defining the silicon islands
with a developer solvent. The exposed silicon was plasma etched through the surface crystal layer (0.34µm), leaving defined silicon islands sitting on top of the buried oxide layer. The remaining resist was subsequently stripped off and the wafer cleaned (see Fig. 4.15). Step 2: Following the active area definition, the next step is to grow the 20nm gate oxide, followed by the twin well formation. The N and P well implants are each split into two, with the N well having a deep phosphorus implant (3e12@190KeV) and a top boron implant (2e12@20KeV) (see Figs. 4.16 and 4.17). This is to ensure that the bottom half of the island is N-type; the boron implant is used to
Fig. 4.16 Growing the gate oxide
Fig. 4.17 Twin well formation
set the threshold voltage (Vt) of the transistor. The P well implant is split between a deep boron implant (2e11@70KeV) and a top boron implant (1.1e12@20KeV). These doses were chosen to give the circuit a Vt of 0.9 volts. Step 3: A 350nm layer of polysilicon is deposited and patterned to create the gate. This is followed by phosphorus (5e14@60KeV) and boron (2e11@70KeV) implants to create the source and drain, and a rapid thermal anneal (RTA) to activate them.
Step 4: The next step in conventional CMOS processing is to create the metal contact layer. For the purposes of this work the standard approach to the contact stage was revised. Conventionally, a BPSG oxide layer is deposited and the contact layer patterned, followed by metallization. Given the need for a flexible circuit that could be integrated into a textile format, it was decided to incorporate a flexible polyimide material as the inter-dielectric layer in the design (Figs. 4.18 and 4.19). This is a more bendable alternative to the standard Spin-on-Glass (SOG) inter-dielectric. The contact stage comprised a 3µm patterned layer of polyimide.
Fig. 4.18 Adding the flexible polyimide material
Fig. 4.19 Patterning of the polyimide
Step 5: A 600nm layer of Al/1%Si metal is deposited and patterned to create interconnect between silicon islands (Fig. 4.20). The metal is alloyed at 425°C to ensure good ohmic contact with the source/drain regions. Step 6: Finally, a polyimide encapsulation layer is deposited and patterned over the circuit to increase the flexibility and overall mechanical robustness of the circuit (see Fig. 4.21).
Fig. 4.20 Metal deposition
Fig. 4.21 Final polyimide encapsulation
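The six process steps above can be collected into a simple, checkable recipe structure. A sketch: the thicknesses and temperatures are taken from the text, while the step names and dictionary layout are our own shorthand:

```python
# A summary of the flexible-fibre CMOS flow described above; parameters are
# quoted from the text, the naming scheme is illustrative only.
PROCESS_FLOW = [
    {"step": 1, "name": "define silicon islands",
     "detail": "plasma etch through the 0.34um device layer"},
    {"step": 2, "name": "gate oxide and twin wells",
     "detail": "20nm gate oxide, split N/P well implants"},
    {"step": 3, "name": "gate and source/drain",
     "detail": "350nm polysilicon, implants, RTA activation"},
    {"step": 4, "name": "polyimide inter-dielectric",
     "detail": "3um patterned polyimide contact stage"},
    {"step": 5, "name": "metallization",
     "detail": "600nm Al/1%Si, alloyed at 425C"},
    {"step": 6, "name": "encapsulation",
     "detail": "patterned polyimide over the circuit"},
]

def describe(flow):
    """Render each step as a one-line summary."""
    return [f"Step {s['step']}: {s['name']} ({s['detail']})" for s in flow]
```

Representing a flow this way makes it easy to cross-check that each deposition has a matching patterning step and that the thermal budget stays within the polyimide's limits.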
To create the final flexible device an undercut process is required. This involves a combination of isotropic etching (which etches in all crystallographic directions at the same rate) and anisotropic etch processing techniques [30]. Initially, the device side of the wafer is patterned with a resist material in order to anisotropically etch the buried oxide. There are two reasons for the initial oxide etch:
● To act as an etch mask for the subsequent isotropic etch of the handle wafer silicon, and
● To leave a number of anchors for the devices after they have been completely under-etched.
This leaves the fibre secured by a thin buried oxide bridge at both ends, which is necessary to secure the devices while under vacuum in the etch chamber. These bridges are cut using a laser at a later stage to release the devices completely from the handle wafer and leave a freestanding electronically functional fibre. Fig. 4.22 (left) gives an illustration of the front etch approach.
This is followed by an isotropic etch to undercut the silicon islands, releasing a flexible silicon device ~3µm thick. A clear representation of the release method is illustrated in Fig. 4.22 (right). A freestanding ring oscillator fibre, completely independent of the handle wafer, can be seen in Fig. 4.23. Professor J. Lind of the Georgia Institute of Technology, USA states: “It is only appropriate that the field of textiles take the next evolutionary step towards integrating textiles and computers by designing and producing a wearable computer that is also wearable like any other textile” [31].
The concept of the electronically functional fibre outlined here follows the aspiration of making information technology integrate seamlessly into a textile format; to date there has been no other published work similar to this technology in the area of wearable or ambient electronics.
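A rough feel for how flexible the released ~3µm device is can be had from the thin-film bending relation ε ≈ t/(2r), where the film survives while the outer-fibre strain stays below the fracture strain. A sketch; the 1% fracture strain used here is an assumed, conservative figure for thin single crystal silicon (compare [28–29]), not a value from the chapter:

```python
def min_bend_radius_m(thickness_m: float, max_strain: float) -> float:
    """Smallest bend radius before the outer-surface strain, t/(2r),
    reaches the material's fracture strain."""
    return thickness_m / (2 * max_strain)

# ~3um released device thickness (from the text), 1% assumed fracture strain.
r = min_bend_radius_m(3e-6, 0.01)  # -> 1.5e-4 m, i.e. ~150 micrometres
```

A fibre that tolerates a sub-millimetre bend radius is comfortably within the range of curvatures encountered in woven textiles, which is what makes textile integration plausible.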
Fig. 4.22 A plan view of the device after oxide etch (left) and a cross-section showing the isotropic underetch (right)
Fig. 4.23 Freestanding ring oscillator fibre
3.4
Spherical Silicon
So far, conventional planar processing technologies have been discussed. In such a wafer-based fabrication process, the number of ICs produced on each wafer depends upon the diameter of the wafer and the size of the IC being fabricated. In recent years, wafer diameters have increased in order to scale productivity and decrease the cost per silicon device. However, larger wafers require more expensive equipment, add significant process complexity, and can also affect yield. In certain cases, the use of alternative form factors may be beneficial; one such approach to silicon processing has been developed by Ball Semiconductor and takes the form of an IC device on a silicon sphere [32]. The fabrication process for these silicon spheres is not trivial: a number of very small polycrystalline silicon granules are processed through a combination of gases, chemical reactions and solid-state semiconductor physics along a line of hermetically sealed tubes. The silicon spheres are in constant motion as they are processed, treated, and transported at high speed through these sealed pipes, undergoing various processes for crystal-growing, grinding and polishing. During this they also undergo the repeated cleaning, drying, diffusion, film deposition, wet and dry etching, coating, and exposing steps of the integrated-circuit manufacturing process. The spheres are exposed to air only during photolithography; thus, there is no requirement for the traditional - and expensive - clean room facility.
Initially, 1-mm single-crystal balls were developed (see Fig. 4.24) and further research is being undertaken to produce even smaller spheres. Although a one millimeter single crystal sphere has a surface area of 3.14 mm², large VLSI circuits cannot be formed on a single sphere. However, larger circuits can be formed by grouping arrays of spheres to create individual subsystems. A sphere can be designed to function as an individual element in a subsystem (for example, a logic circuit, an I/O circuit, etc.), and it can subsequently be interconnected with other spheres to form the complete subsystem. The manufacturing of ICs using silicon spheres offers a number of advantages over conventional planar processing techniques. For example, according to Ballsemi: Such spherical IC device manufacturing processes can greatly decrease the overall IC device manufacturing cost by eliminating the need for large scale dedicated clean room facilities, by allowing over 90% of the required silicon material to end up in functioning devices, and by eliminating the need to purchase new manufacturing equipment each time technological advances necessitate larger circuit devices.
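The 3.14 mm² figure quoted above is simply the surface area of a sphere, A = πd², which is what gives a sphere its area advantage over a planar chip of the same diameter:

```python
import math

def sphere_area_mm2(diameter_mm: float) -> float:
    """Surface area of a sphere: A = pi * d^2."""
    return math.pi * diameter_mm ** 2

sphere = sphere_area_mm2(1.0)  # ~3.14 mm^2, as quoted in the text
chip = 1.0 * 1.0               # footprint of a 1mm x 1mm planar chip
ratio = sphere / chip          # ~3.14x more patternable area per millimetre
```

This factor of roughly three also underlies the "two to three times more" surface-area entry in Table 4.2.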
The approach has been successfully implemented; however, numerous questions remain regarding how this form-factor will be utilized. The packaging and interconnection of the silicon spheres represents one such challenge. Table 4.2 illustrates the potential advantages of spherical ICs over conventional planar processing techniques.
Fig. 4.24 1mm single crystal silicon sphere, courtesy of http://www.ballsemi.com/tech/spherical.html
Table 4.2 Spherical ICs versus planar ICs, courtesy of http://www.ballsemi.com/tech/spherical.html

Criterion | Chips | Spheres
Manufacturing complexity | Three semi-automated processes (create, process and package wafers) | One fully automated process
Production flexibility | Batch processing | Single-unit processing
Surface area for inscribing circuits | Limited (area of 1mm chip = 1 sq. mm.) | Two to three times more (area of 1mm sphere = 3.14 sq. mm.)
System integration | More functions on larger chip | Cluster smaller balls with different functionality
Processing temperature | Must be below 1400°C | Can exceed 2000°C
Shipment to customers | Plastic or ceramic packaging | No packaging required
Cycle time, original silicon to final assembly | 120–180 days | 5 days
Cost per function | Varied | Approximately 1/10th for comparable function
Ease of innovation | Only highest volume designs are produced; high processing cost limits innovation | Lower processing cost means more designs can be converted to silicon
Energy consumption | Higher | Lower
Original silicon material shipped as final product (%) | 10–20 | 90–95
Environmental impact | — | Significantly lower impact
Wafer fabrication | Clean room | Clean tubes and pipes
4
Conclusion
This chapter has provided a brief overview of current silicon processing technologies and outlined a number of possible future techniques. Silicon-on-insulator (SOI) technology offers the potential for higher densities than conventional processing techniques. It also offers a certain versatility, as demonstrated by the manner in which the process can be adapted to create flexible silicon fibres. The realization of a silicon fibre technology creates opportunities to integrate silicon functionality more fully and effectively into textiles and fabrics. A further demonstration of the versatility of silicon is provided by the silicon sphere process. This technique enables IC-level functionality to be built onto silicon spheres using a process that is more economical and produces less waste material. The potential of the spherical IC is illustrated by the possibility that 3-D spherical arrays can be assembled and function as complex embedded subsystems.
References
1. http://www.ti.com/corp/docs/company/history/timeline/semicon/1950/docs/54commercial.htm
2. http://www.ti.com/
3. Intel’s information page on Moore’s Law – with link to Moore’s original 1965 paper
4. Intel press kit released for Moore’s Law’s 40th anniversary, with a 1965 sketch by Moore
5. http://www.tyndall.ie/posters/highkposter.pdf
6. https://www.llnl.gov/str/Sween.html
7. Delaney, K., Healy, T. et al, “Creating Systems for Ambient Intelligence”, EMRS ‘Silicon Evolution and Future of a Technology’, Chapter 24, pp. 489–515.
8. Healy, T. et al, “Innovative Packaging Techniques for Wearable Applications using Flexible Silicon Fibres”, IEEE 54th Electronic Components and Technology Conference, pp. 1216–1219, 2004.
9. Healy, T. et al, “Electronically Functional Fibre Technology Development for Ambient Intelligence”, Part 4 ‘Augmenting Physical Artefacts’, The Disappearing Computer Initiative, pp. 255–274.
10. Healy, T. et al, “Silicon Fibre Technology Development for Wearable and Ambient Electronics Applications”, IEEE Frontiers in Electronics, 2005.
11. http://www.tf.uni-kiel.de/matwis/amat/semi_en/kap_3/backbone/r3_2_2.html
12. Kou, J. B. and Su, K.-W., “CMOS VLSI Engineering Silicon-on-Insulator (SOI)”, ISBN 0-7923-8272-2.
13. West, N. et al, “CMOS VLSI Design”, ISBN 0-201-08222-5.
14. McCord, M. A. and Rooks, M. J. (2000), Chapter 2, SPIE Handbook of Microlithography, Micromachining and Microfabrication.
15. Liddle, J. A. et al (2003), “Resist Requirements and Limitations for Nanoscale Electron-Beam Patterning”, Mat. Res. Soc. Symp. Proc. 739 (19): 19–30.
16. Jaeger, R. C. (2002), “Lithography”, Introduction to Microelectronic Fabrication, Upper Saddle River: Prentice Hall, ISBN 0-201-44494-7.
17. Hosack, H. H. et al, “SIMOX Silicon-on-Insulator: Materials and Devices”, Sol. St. Tech., pp. 61–66, Dec. 1990.
18. Inoue and Yamaguchi, Y., “Trends in Research and Development of SOI Technology”, Applied Physics, Vol. 64, No. 11, pp. 1104–1110, 1995.
19. Bruel, M., “Silicon on Insulator Material Technology”, Elec. Let., Vol. 31, No. 14, pp. 1201–1202, July 1995.
20. Hosack, H. H. et al, “SIMOX Silicon-on-Insulator: Materials and Devices”, Sol. St. Tech., pp. 61–66, Dec. 1990.
21. Mazure, C., “Thin Film Transfer by Smart Cut Technology beyond SOI”, http://www.electrochem.org/dl/ma/203/pdfs/0993.pdf
22. De Rossi, D., “Electro-active Fabrics and Wearable Biomonitoring Devices”, Autex Research Journal, Vol. 3, No. 4, December 2003.
23. Gemperle, F. et al, “Design for Wearability”, Proc. Second International Symposium on Wearable Computers, Pittsburgh, PA, October 1998. http://www.gtwm.gatech.edu/index/accomplishment.html
24. Rensing, N. M., “Threat Response: a Compelling Application for Wearable Computing”, Proceedings of the 6th International Symposium on Wearable Computers (ISWC 2002).
25. Möhring, U. et al, “Conductive, Sensorial and Luminescent Features in Textile Structures”, 3rd International Forum on Applied Wearable Computing, Bremen, Germany, March 2006.
26. Healy, T. et al, “Silicon Fibre Technology Development for Wearable Electronics Applications”, Masters in Engineering Science, University College Cork, 2006.
27. Pirotte, F. et al, “MERMOTH: Medical Remote Monitoring of Clothes”, Ambience 05.
28. Lisby, T., “Mechanical Characterisation of Flexible Silicon Microstructures”, Proc. 14th European Conference on Solid-State Transducers, August 2000, pp. 279–281.
29. Ericson, F. and Schweitz, J. A., “Micromechanical Fracture Strength of Silicon”, Journal of Applied Physics, Vol. 68 (1990), pp. 5840–5844.
30. Federico, S. et al, “Silicon Sacrificial Layer Dry Etching (SSLDE) for Free-standing RF-MEMS Architectures”, EPFL Center of Micro-Nano-Technology (CMI), IEEE 2003, 0-7803-7744-3/03.
31. Lind, J. et al, “A Sensate Liner for Personnel Monitoring Applications”, Proc. Second International Symposium on Wearable Computers, Pittsburgh, PA, Oct 1998.
32. http://www.ballsemi.com/tech/today.html
Part III
Hardware Sub-Systems Technologies Hybrid Technology Platforms, Integrated Systems
1.1
Summary
Interconnection and packaging is a phrase that summarizes all of the fabrication and assembly processes that permit the use of silicon (and other semiconductors) in ICs and sensor devices for real world applications. In simplistic terms, interconnection involves techniques to link processing and/or sensing devices together so that they may function as a complete system (or subsystem); this includes the passive components (i.e. capacitors, resistors and inductors) necessary to ‘control’ current and voltage levels, etc. Packaging primarily describes the use of materials to protect semiconductor devices from hazards and damage by providing a barrier to the external environment. This barrier is typically employed for mechanical protection, as well as preventing attack through environmental conditions (e.g. corrosion). It can involve complete encapsulation, as is usually the case with silicon ICs, or it can be selective, as is the case for certain chemical or biological sensors, where areas of the sensor are deliberately exposed to permit access to a target medium. New interconnection and packaging technology platforms are constantly emerging, driven by the need to follow Moore’s Law and support the continuously increasing density of silicon circuits. As a result, the traditional structure of electronic packages and the material functions, such as mechanical protection, are being broken down and broadened, respectively. It turns out that innovation in the materials (and assembly processes) that surround the many existing forms of silicon devices is central to realising AmI-friendly concepts, such as Smart Dust and the Disappearing Computer. As a result, many high density systems are emerging that provide a potential route - or a roadmap - to miniaturizing, for example, autonomous sensor nodes. These enablers are very important, particularly when coupled with the emergence of new MEMS devices and the new silicon platforms discussed in Part II.
However, in isolation, they lack a route to determining a number of key issues: What exactly should be miniaturised, and why? How can this be done cost effectively, and in what way should these (sub)systems be linked to larger heterogeneous systems, such as the internet?
102
Part III Hardware Sub-Systems Technologies
Numerous wireless sensor node toolkits have emerged to bridge this gap, and a selection of these is discussed in Chapter 5. These toolkits offer a means to rapidly create prototypes of sensor subsystems and systems, providing flexibility in use and even network scalability. The chapter also provides an overview of some of the more significant sensor node miniaturisation programmes. Chapter 6 first provides an overview of microelectronics packaging and then explores the issues of systems integration, miniaturisation and packaging in more detail. In the context of AmI and sensor networks, one of the more interesting areas of research is 3-D packaging; this chapter summarizes the approach, focusing upon two techniques in particular, folded flex packaging and chip in laminate/interconnect, both of which offer certain advantages (as well as further challenges) in realising highly miniaturised autonomous networkable sensors.
1.2
Relevance to Microsystems
These are in essence MEMS interconnection and packaging techniques. There is a particular emphasis on a systemized approach, where the capability for data conditioning and (local) management is integrated with the sensor device(s) in a single System-in-Package (SiP) solution, or even on a single silicon substrate as a System-on-Chip (SoC) solution. These techniques are not only relevant to achieving vision statements such as Smart Dust; they are a core part of existing roadmaps to increase the performance of existing electronics systems through achieving consistently higher densities in a reliable manner.
1.3
Recommended References
This is a significant area of research and there are numerous publications available. One of the most widely referenced is the ‘Microelectronics Packaging Handbook’ by Rao Tummala et al. This offers a strong insight into interconnection, packaging and integration issues for electronics systems, particularly for those new to this domain of research, and represents an excellent general reference. For those interested in more detailed research, the IEEE Electronic Components & Technology Conference Series is the premier international conference on this R&D topic. For those interested in research toolkits and the sensor node platforms available, there are a number of current sources of information on the internet, including the Sensor Network Museum and a survey by the EU Project Embedded WiSeNts. There is also, amongst others, the ACM SenSys Conference Series.
1. R. Tummala et al, “Microelectronics Packaging Handbook: Semiconductor Packaging”, Chapman & Hall, January 1997
2. Proceedings of the IEEE Electronic Components & Technology (ECTC) Conference Series. Sponsored by the IEEE Components, Packaging, Manufacturing Technology Society: http://www.cpmt.org/conf/past.html
3. The Sensor Network Museum, including network hardware systems: http://www.btnode.ethz.ch/Projects/SensorNetworkMuseum
4. The EU Framework 6 Project (FP6-004400), Embedded WiSeNts report, “Critical evaluation of research platforms for wireless sensor networks”: http://www.embedded-wisents.org/studies/survey_wp2.html
5. Proceedings of the ACM Conference on Embedded Networked Sensor Systems (SenSys): http://sensys.acm.org
Chapter 5
Distributed, Embedded Sensor and Actuator Platforms
John Barton1, Erik Jung2
Abstract Distributed embedded sensor and actuator platforms will be at the core of any research initiatives on smart objects. In this context, recent developments in wireless sensing and micro-sensor technologies provide foundation platforms for considering the development of effective modular systems. They offer the prospect, currently at a prototyping level, of flexibility in use and network scalability. Wireless sensor devices are the hardware building blocks required to construct the core elements of wireless sensor networks. A number of large scale research programmes have developed over the last few years to explore these emerging technologies. We review some of the largest of these, including a review of the current wireless sensor integration technologies. This chapter will also look more closely at some of the better-known wireless motes that are being used as toolkits for research, development and field trials. Keywords Wireless Sensor Nodes, Smart Dust, Smart Matter, E-Grain, E-Cubes, Textile Integration, Advanced Packaging Technology, Applications
1
Introduction
Distributed sensor networks rely on building blocks with autonomous functionality. These building blocks consist of the required sensor, a signal conditioning circuit, a processing unit and the transmit/receive front-end. In addition, power management is required for true autonomy. Depending upon the actual challenges for the network (e.g. lifetime, fast deployment, low cost), either battery-powered nodes or nodes that include energy scavenging/conversion and storage functions are required. The latter provide extended lifetime to the network, but come at additional cost and size. Fig. 5.1 depicts a schematic of an individual node embedded in a distributed network. Networks may provide either a point-to-point protocol (e.g. the node communicates with a central unit) or an ad-hoc network and re-routing functionality,
1 Tyndall National Institute, Lee Maltings, Prospect Row, Cork, Ireland
2 Fraunhofer IZM, Gustav-Meyer-Allee 25, 13355 Berlin, Germany
K. Delaney, Ambient Intelligence with Microsystems, © Springer 2008
Fig. 5.1 Schematic of building blocks for an Ambient Intelligence (AmI) sensor node in a distributed network (blocks: sensor unit ~10µW, signal processing unit ~20µW, base band processing unit ~30µW, radio front end ~30µW, all at a 0.1% duty cycle; power management; ambient energy harvesting unit; energy buffer unit, 600 mWh/cm³)
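The duty-cycled power budget annotated in Fig. 5.1 can be turned into a back-of-the-envelope lifetime estimate. A sketch using the figure's numbers; note that it deliberately ignores sleep current and energy-buffer self-discharge, which dominate in practice and make real lifetimes far shorter:

```python
def average_power_w(active_powers_w, duty_cycle):
    """Average draw of a duty-cycled node (sleep current ignored)."""
    return sum(active_powers_w) * duty_cycle

def lifetime_hours(buffer_wh, avg_power_w):
    """Hours of operation from a given energy buffer at a given average draw."""
    return buffer_wh / avg_power_w

# Figures from Fig. 5.1: sensor 10uW, signal processing 20uW, base band 30uW,
# radio front end 30uW, at a 0.1% duty cycle; 600 mWh/cm^3 energy buffer.
avg = average_power_w([10e-6, 20e-6, 30e-6, 30e-6], 0.001)  # -> 9e-8 W
hours = lifetime_hours(0.6, avg)  # a 1 cm^3 buffer holds 600 mWh = 0.6 Wh
```

On paper the average draw is only 90nW, so the buffer would last for decades; the gap between this idealised figure and field experience is precisely why power management and leakage control are first-order design concerns for such nodes.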
transporting the information from one sensor node through multiple networking nodes to its final destination. The second topology is increasingly in use, becoming the most widespread technique. To ‘create’ Ambient Intelligence (AmI), there are additional requirements beyond a node’s pure functionality. It should be unobtrusive – a fact that may be most important to, for example, medical assistance systems. Larger nodes (e.g. like those shown in Fig. 5.2 and Fig. 5.3) still provide for the majority of the Ambient Intelligence sensors that are currently in use. However, cost and deployment requirements are driving the technologies towards more advanced integration solutions (see Fig. 5.4 and Fig. 5.5), which will be more broadly acceptable and will enable use in more diverse areas. Sensor networks, particularly those used in outdoor environments, require a high level of ruggedness, as they need to withstand rain, frost, drops, direct sunlight, etc. Integration techniques need to be employed to address this right from the beginning of the concept implementation. Thus, challenges for the integration strategies can be summarized as follows:
● Modular sensor exchange
● High levels of autonomy
● Miniature size for ease of deployment
● Low cost for mass deployment
● High level of ruggedness
In order to reach these goals, packaging and assembly strategies have been developed to cope with the challenges. The use of bare die is one of the most straightforward concepts to drive miniaturization and ruggedisation of the assemblies. Progressing from prototype through-hole techniques to high volume capable surface mount device (SMD) technology also provides significant advantages in size, cost and reliability [2].
Fig. 5.2 A Sensor node for a micro-environmental monitoring network (~6×2×3cm) (courtesy XBOW)
Fig. 5.3 Commercial node with ~8×8×3cm dimensions (courtesy of Particle Computer)
Fig. 5.4 Miniature ambient sensor node with 13×11×7mm (courtesy Ecomote) [1]
Fig. 5.5 Evolution roadmap for small, rugged, autonomous sensor nodes used in ambient intelligence systems from 1cm3 to 25mm3 (Fraunhofer IZM)
2
Wireless Sensor Node Platforms
Wireless sensor nodes (typically called motes) are available commercially from a number of SMEs, mainly based in the United States. These include Crossbow Inc [3], Sentilla (formerly Moteiv Inc) [4], Dust Inc [5], Phidgets, Inc [6], Meshnetics [7], Sensicast [8], AccSense [9], Millennial Net [10] and Ember [11]. These 1st generation mote products are targeted primarily at universities and research laboratories for use in experiments and the development of test-beds. However, more and more of these companies are releasing products aimed at the building automation and industrial automation markets. In terms of market size, On World [12] predicts (conservatively) that there will be 127 million ‘motes’ deployed worldwide by 2010. These are primarily 2-D surface mount (SMT) based PCBs with varying levels of high density packaging.
Further research into hardware configurations and 3-D packaging of ‘mote’ PCBs is carried out at research institutions such as the Tyndall National Institute [13, 14, 15, 16], Fraunhofer-IZM [17], IMEC [18], Harvard [19], Imperial College London [20], the Center for Embedded Networked Sensing at UCLA [21], UC Berkeley [22], Lancaster University [23], ETH Zurich [24], MIT [25], Sandia National Laboratories [26], Yale [27], EPFL [28] and by companies such as Intel [29]. A number of these motes are developed for specific technology research purposes, such as algorithm testing, power management, antenna miniaturisation or wireless range improvement. However, most are designed for specific applications, whether environmental or energy monitoring, medical applications or animal tracking. A number of these motes will be discussed in more detail below. Table 5.1 compares the hardware systems available and suitable for wireless sensor platforms. Some of the platforms referenced in the table have been previously surveyed in [30]. Note that, while many more platforms exist, in this table we have attempted to collect the more versatile platforms: those which are not application dependent and are able to interface to different types of sensors and applications.
2.1 The Mica Family
Probably the most popular platform utilized by wireless network researchers is the Mica hardware family (See Fig. 5.6), developed by UC Berkeley and commercialized by Crossbow Technologies [3] and MoteIV Corporation (now Sentilla) [4]. The basic architecture has a motherboard with a standard low profile connector that accepts a sensor board. The main board contains power regulation, the processor, a wireless transceiver and an antenna. The daughter sensing board is connected on top of the motherboard, while an ‘AA’ battery socket is attached to the bottom side of the main board. The ‘Mica-Dot’ is also popular as a smaller version, about the size of a 2.5cm coin, allowing the use of a conventional lithium-ion button battery. While ‘WeC’, ‘Rene’ and ‘Dot’ used integrated sensors, the Mica was carefully designed to optimise the sensor interfacing and serve as a general purpose platform for wireless sensor networking (WSN) research. The platforms use a simple-modulation RFM radio transceiver; while this is a useful tool for research, it has limitations in
Fig. 5.6 Mica2, Mica2Dot and TMote Sky nodes
Table 5.1 A comparison of selected wireless sensor platforms, listing for each platform its CPU, memory (RAM/Flash/EEPROM), radio and operating system; (*) marks details not specified in the literature. The platforms surveyed are: weC [22], Rene 1 and Rene 2 [22], AWAIRS, Dot [22], Mica [42], BT node [24], SpotON [60], Smart-its (Lancaster) [23], Smart-its (Teco) [23], Mica2 and Mica2Dot [22], iBadge [61], Medusa [62], iMote [29], U3 [63], Spec [64], RFRAIN [65], mAMPS [59], MANTIS Nymph [66], Telos [22], MicaZ, BSN node [20], MITes [25], AquisGrain [30], RISE [67], Particle [68], Parasitic node [69], Pluto [70], Tyndall Mote [14–16], EnOcean TCM120 [71], Eyes [72], IMote2 [73], uPart [74], Tmote Sky [75], EmberRF [64], XYZ [27], Ant [30], ProSpeck [76], Fleck [77], SunSpot [78], FireFly [79], Sensinode [80], ScatterWeb [81], SHIMMER [31], SquidBee [82], T-nodes [83], WeBee [84], TinyNode [28], Tmote Mini [4], Tmote Invent [4], IRIS [85], SAND [86], ZN1 [87], Fantastic Data node [88], mPlatform [89], SPIDER-NET [90] and MASS [26]. The processors range from 8-bit microcontrollers (AVR ATmega, PIC and 8051-class cores) through the 16-bit TI MSP430 family to 32-bit ARM-based devices (StrongARM SA1100, PXA271, ARM7); the radios include the RFM TR1000, Chipcon CC1000/CC1010/CC2420, Nordic nRF24xx parts and Bluetooth modules; and the operating systems include TinyOS (by far the most common), BTnut, Contiki, SOS, Mantis and Nano-RK.
terms of power consumption. Further versions, the ‘Mica2’ and ‘Mica2Dot’, were designed to provide a more deployable platform; the microcontroller and radio were replaced and lower quiescent currents were achieved. The ‘MicaZ’ replaced the radio to become IEEE 802.15.4 compatible. Finally, the ‘Telos’, and later the ‘Tmote’, further reduced the quiescent current, shortened wake-up times and incorporated USB connectivity, all in order to make the platform easier for researchers to use. From a software perspective, all of the Mica platforms run TinyOS; indeed, this mote-orientated operating system was originally written around the Mica family.
2.2 The Intel Mote family
The early implementations of motes, such as the Mica family, focused upon supporting simple sensors for simple applications that handled small amounts of data (and did not require high bandwidth). Intel motes are designed to satisfy more demanding applications in terms of the amount of data handled and the data processing required, as is the case for schemes that implement data fusion and aggregation. The main
driving force for the design of the Intel motes was to improve existing motes in specific areas, including CPU performance, memory capacity, radio bandwidth and reliability, while remaining cost and size effective. The first Intel mote platform was designed in 2003; it is a Bluetooth (802.15.1) based wireless sensor network platform orientated towards industrial applications [29]. One reason for choosing Bluetooth was the capability to fully support the Bluetooth scatternet mode, which is required in order to build mesh networks of piconets. The platform evolved to become the Intel Mote 2, in which both the processor and the radio were changed; the Bluetooth radio was replaced by a ZigBee 802.15.4 compliant radio (See Fig. 5.7), although the Intel Mote 2 allows for the possible addition of a Bluetooth module. In the same fashion as the Mica platform, the main board contains power regulation, processing and radio; application dependent daughter boards are connected on top. Both platforms can run TinyOS and the latest versions can run Linux for more demanding applications; they are commercially available through Crossbow Technologies. In 2005, the Intel Digital Health Group created the SHIMMER, shown in Fig. 5.8 [31]. While this sensor node is orientated towards health and wearable
Fig. 5.7 The Intel I-Mote and I-Mote2
Fig. 5.8 The Intel SHIMMER Mote
applications, it is versatile and easy to use; this makes the SHIMMER an ideal platform for research applications. One of the key features of the design is the dual radio interface, allowing for Bluetooth and IEEE 802.15.4, as well as the MicroSD card interface, which permits up to 2GB of on-board memory storage. It contains the popular low power MSP430F1611 microcontroller. The board is designed on a thin substrate and allows connectivity to daughter sensing boards and the programming interface module. SHIMMER can also run TinyOS.
2.3 The BT node
The BT node [24] is probably the first lightweight computational mote to include a dual radio option (See Fig. 5.9). It is an autonomous wireless communication and computing platform based on a Bluetooth radio, a low power radio and a microcontroller. It has been developed as a demonstration platform for research in mobile and ad-hoc networks and in distributed sensor networks. The BT node has been jointly developed at ETH Zurich (the Swiss Federal Institute of Technology in Zurich) by the Research Group for Distributed Systems and the Computer Engineering and Networks Laboratory [32]. There have been three major hardware revisions of the BT node platform. The first revision was Bluetooth-based only and, while it had the advantages in connectivity and data throughput associated with Bluetooth, its power consumption was excessive for real deployments or applications. The third revision of the mote includes both a low power CC1000 radio and a Bluetooth module, with lower power consumption than the earlier revisions. The main advantage of the platform is its ability to coexist in a heterogeneous network; the node can even act as a bridge between Bluetooth devices and low power networks. From a mechanical point of view, the platform is similar
Fig. 5.9 The BT Node
to the Mica family. It contains a vertical connector to attach daughter boards, and the battery receptacle is placed at the bottom of the board. As a software system, the BT node can run both the ‘BTnut’ and the TinyOS operating systems. The Swiss distributor Art of Technology [33] has commercialized the BT nodes.
2.4
The Tyndall Mote
Since its emergence in 2003, the Tyndall Mote, developed at the Tyndall National Institute by the Wireless Sensor Networks Team [34], has become an invaluable tool among research institutes across Ireland. As opposed to the previously described platforms, the Tyndall Mote is compact, highly reconfigurable and truly modular. The design is based around several 25×25mm boards that are interconnected by means of two standard connectors placed on contiguous sides of each of the square boards. The connectors add mechanical robustness to the system and provide electrical interconnection between layers on a shared bus. There are many compatible custom layer designs, ranging from a ZigBee compliant radio to a generic sensor interface layer. FPGA technology and additional processing capability can easily be incorporated into the system stack when required, simply by adding the appropriate layer; similarly, power supply and battery layers, or coin cell battery layers, can be stacked one on top of another in the same modular fashion (See Fig. 5.10). The communication layer contains a radio, a suitable processor and power regulation [16]. To date, there are ZigBee compliant, 2.4GHz, 868MHz and 433MHz communication layers, allowing maximum design flexibility for the application and enabling the Tyndall system to be used in a wide variety of deployment scenarios. All of the communication layers are designed with an on-board ATmega128L microcontroller and extensive C library drivers, developed to integrate the radio and transceivers, as well as being compatible with TinyOS and other standard
Fig. 5.10 The Tyndall 25mm Mote
operating systems commonly in use in the research community. An increasing number of application specific sensor layers have been developed to meet various project requirements (up to 20 to date), including health monitoring layers and full six degree of freedom inertial measurement units [14, 15]. A programme of further miniaturisation of the Tyndall Mote has resulted in a modular stackable 10mm mote (See Fig. 5.11). This mote includes:
● A transceiver module with a size of 10mm by 10mm, operating in the 433/868MHz frequency bands.
● An interface layer providing a regulated power supply from a rechargeable battery, USB battery charging, and USB communications to support the transceiver module [35].
The node has been designed to support very low power operation for applications with low duty cycles, with a sleep current of 3.3mA, a transmission current of 10.4mA, and a reception current of 13.3mA. The small size, combined with the level of modularity and energy efficiency, results in a system that is suited to a wide variety of potential applications. Currently, a sensor interface module in the 10mm form factor is available; this includes a temperature and humidity sensor, a light sensor, and a 2-axis accelerometer. From a software perspective, several layers have been ported to TinyOS; most are now compatible with TinyOS-based programming.
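To illustrate why low duty cycles matter for such nodes, the average supply current can be estimated by time-weighting the sleep, transmit and receive currents quoted above. The 60 s reporting period, 50 ms active windows and 240 mAh cell capacity below are illustrative assumptions, not figures from the text:

```python
def average_current_ma(i_sleep, i_tx, i_rx, t_tx, t_rx, period):
    """Time-weighted average supply current (mA) over one duty cycle.

    The node sleeps except for a t_tx second transmit burst and a
    t_rx second receive window in every `period` seconds.
    """
    t_sleep = period - t_tx - t_rx
    return (i_sleep * t_sleep + i_tx * t_tx + i_rx * t_rx) / period

# Currents quoted in the text for the Tyndall 10mm node; the period,
# active windows and battery capacity are assumed for illustration.
i_avg = average_current_ma(3.3, 10.4, 13.3, 0.05, 0.05, 60.0)
lifetime_h = 240.0 / i_avg  # hours from an assumed 240 mAh coin cell
```

With these numbers the sleep current dominates the budget, which is why further reductions in quiescent current (rather than radio current) yield the largest lifetime gains.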
3 Smart Dust, Smart Matter, the E-Grain and E-Cubes
A number of large scale research programmes have developed over the last few years to explore the emerging distributed sensing technology sector. A selection of these will be examined below.
Fig. 5.11 The Tyndall 10mm Mote
Fig. 5.12 The full Smart Dust Concept
Fig. 5.13 The WeC platform
3.1 Smart Dust
Kristofer Pister, a professor of electrical engineering at the University of California, Berkeley and one of the pioneers in the wireless sensor networks field, first coined the term “smart dust” in 1997. Extrapolating recent advances in microelectronics and in wireless communications, he reasoned that a low-power computer could be built within one cubic millimetre of silicon. This “cubic millimetre mote” would contain a battery, a two-way radio, digital logic circuitry, and the capability to monitor its surroundings. This became known as
the “Smart Dust” project [36, 37, 38]. Building upon the original concept, the ultimate project goal was to develop a system of wireless sensor modules in which each unit was approximately the size of a mote of dust. The work includes miniaturization (using die-bonding, flip-chip and wire-bond assembly), integrated micro-sensors and computation, as well as wireless (RF/optical) communication. A recent review [43] discusses various techniques to take smart dust in sensor networks beyond millimetre dimensions to the micrometre level. The mote concept (as we know it today) was created in this context and evolved by researchers such as David Culler at the University of California, Berkeley. Culler’s group went on to create TinyOS [39, 40, 41] and remains the lead developer of this operating system (see Fig. 5.12). WeC [42] was probably the first wireless sensor platform, or mote, ever conceived (see Fig. 5.13). It was introduced by the University of California, Berkeley as one of the outcomes of the Smart Dust project. WeC can be considered the “mother” of the wireless sensor nodes outlined in the previous sections. Since its appearance in 1998, tens or even hundreds of platforms have been designed based on the WeC architecture.
3.2 Smart Matter
The ‘Smart Matter’ research programme at Xerox’s Palo Alto Research Centre (PARC) started around the same time as ‘Smart Dust’, seeking to enhance the environment by embedding microscopic sensors, computers and actuators into materials [44]. Smart matter was therefore defined originally as a physical system or material with arrays of microelectromechanical systems (MEMS) devices embedded in it in order to detect, and adjust to, changes in its environment. For example, smart matter could be used to move sheets of paper in a printing machine or maneuver an aircraft by performing tiny adjustments to wing surfaces. Generally, each MEMS device embedded in smart matter contains microscopic sensors, actuators, and computational elements. A characteristic of smart matter is that the physical system consists of large numbers (possibly thousands) of microelectromechanical devices. These devices work together to deliver a desired higher level function. PARC’s initial research activities in this area included a Smart Matter-driven paper path, smart beams and columns capable of adjusting their load-bearing strength and stiffness, distributed control strategies for Smart Matter, and novel fabrication techniques that merge MEMS technology and macro-scale objects. Since then, the research area has expanded considerably at PARC; research themes in the area of Smart Matter integrated systems now include embedded collaborative computing, embedded reasoning, modular robotics, large area electronics, industrial inkjet printing systems and controlled droplet dispensing [45, 46].
3.3 The Fraunhofer e-Grain concept
As shown, the modular breakdown of a distributed sensor node can now be implemented in a layered approach, where each functional block is realized as an individual miniature module. The final sensor node is realized when these modules are connected together. The individual modules can be realized in SMD technology and even (specifically for the highest levels of miniaturization) using bare dice and fine pitch interconnect schemes. Fig. 5.14 shows such a concept, where the individual layers consist of the actual sensor module with a signal conditioning circuit, the communication module and the energy storage/conversion module. As part of a collaborative project to promote self-sufficient distributed microsystems (funded by the German Research Ministry), the eGrain project – Autarkic Distributed Microsystems – was started, coordinated by Fraunhofer-IZM [47–49]. The project, commenced in 2002, sought to develop the systems integration technologies necessary to achieve a distributed microsystem. Also participating in the project were researchers at Technical University Berlin, developing network software and miniature antennas, and researchers from the Ferdinand-Braun-Institut in Berlin, working on low-power high-frequency circuits. The long term goal of the project is the development of a 3D integrated cube with dimensions of 4mm × 4mm × 2mm, working off a 3V power source, with an energy capacity of 3.2 mWh. The target data rate is one Mbit/s at a frequency of either 24 or 60 GHz, with a range of 1 m at a transmitting power of 0.1 mW.
Fig. 5.14 Modular building blocks of a sensor node for ambient intelligence networks: sensor – signal conditioning – Tx/Rx – power supply
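The eGrain radio targets above can be sanity-checked with the Friis free-space model: at 24 GHz over 1 m the path loss is about 60 dB, so a 0.1 mW (-10 dBm) transmitter delivers roughly -70 dBm to the receiver. A minimal sketch of this arithmetic, in which the 1 mW average system load used for the lifetime estimate is an assumption for illustration:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(freq_hz, dist_m):
    """Free-space path loss (Friis), in dB."""
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / C)

# eGrain targets from the text: 0.1 mW (-10 dBm) transmit power,
# 24 GHz carrier, 1 m range.
loss_db = fspl_db(24e9, 1.0)   # ~60 dB over 1 m
p_rx_dbm = -10.0 - loss_db     # ~ -70 dBm at the receiver

# Energy budget: a 3.2 mWh store drained at an assumed 1 mW average
# system load lasts only ~3.2 hours, hence the need for duty cycling.
lifetime_h = 3.2 / 1.0
```

Doubling either the distance or the frequency adds about 6 dB of path loss, which is why the millimetre-wave eGrain link is specified for such a short (1 m) range.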
By utilising advanced packaging technologies (including surface mount (SMD), chip-on-board (COB) technology, flexible fine-line interposer, flip chip, vertical chip integration, thin-chip integration on flex; integrated passive devices on flex or CSP), prototypes of wireless sensor nodes were implemented to verify the design approach. Several miniaturization steps with different 3D system integration technologies were realized during the exploration of this design space. Starting from modules of 2.6cm (about one inch) edge length in conventional SMD technology (See Fig. 5.15), miniaturization has since shrunk the wireless sensor system to 2 cm (per side). At this size, the modules were realized using bare die that are attached by flip chip mounting. Subsequently, prototypes of 1 cubic centimetre were developed, based upon a folded flexible substrate. Finally, flip chips on both substrate sides allowed folded modules of only 6mm (in edge length).
3.3.1 Advanced techniques (flex folding)
For higher miniaturization requirements, alternative approaches to the “package-on-package” (PoP) concept have been used; in a PoP system the individual die are separately packaged and then the system is assembled. As stated above, folded flex carriers using bare die are a good approach, benefiting from mature flexible substrate technology. This folded flex approach can shrink a PoP module to roughly one tenth of its volume (See Fig. 5.16).
Fig. 5.15 26mm × 26mm × 24mm eGrain prototype
Fig. 5.16 An ultra-miniature sensor node for ambient sensing and communication (courtesy Fraunhofer IZM)
3.3.2 Emerging techniques
The previously mentioned concepts rely on established, advanced assembly and interconnect techniques and will not benefit from the scalability in size and cost seen in the semiconductor industry. Techniques like wafer level assembly (See Fig. 5.17), or wafer integrated systems, will allow future systems to be manufactured using scalable technologies. The individual functional layers are no longer handled individually, but as full wafers, stacked and interconnected with through-silicon vias (See Fig. 5.18). Another emerging alternative for integration is the use of stacked wafers, leveraging the commercial advent of three-dimensional through-silicon vias (3D-TSV). Here, advanced fabrication processes, derived from surface micromachining, are used to create and metal-fill vias, which interconnect the different wafers. In order to do this, the wafers mounted in the sequence need to be backside-thinned to 50µm (remaining silicon thickness). Currently, these technologies are reaching a maturity level that makes them attractive for semiconductor manufacturers, specifically for memory modules. As soon as these techniques enter the mainstream, they will also be available for complex system realization [50].
3.4 The e-Cubes Project
Two of the collaborators in the eGrain project – Fraunhofer IZM and Technical University Berlin – are also involved in the eCubes project [51], a large scale European project which commenced in February 2006. The objective of e-CUBES (See Fig. 5.19) is to advance micro-system technologies to allow for the cost-effective realization of highly miniaturised, truly autonomous
Fig. 5.17 Wafer level assembly of embedded circuitry (electronics, sensor, passive components) using thin chips (~20um) (courtesy Fraunhofer IZM)
Fig. 5.18 Through silicon vias for wafer stack interconnect, creating subminiature systems (images courtesy of 3D EMC)
Fig. 5.19 The concept of e-CUBES in the context of global AmI Systems
systems for Ambient Intelligence (AmI). With 20 partners from 11 countries, eCubes is a significant research undertaking. In order to achieve a cost effective solution for the highly miniaturised e-CUBES system, the project is applying 3D interconnect technologies (“cubic” interconnects – hence the name e-CUBES), as well as exploiting modularity (reuse) and wafer level fabrication technologies (in order to reach the required economies of scale). The e-CUBE is a 3D stack of functional sub-modules, each of which is itself composed of a 3D stack of different (heterogeneous) functional layers (e.g. e-CUBE application layers). Given the projected improvements in integrated circuit technology (with respect to die size, power consumption and frequency capabilities), the target size for the e-CUBES project is smaller than 1cm3. Technologically, the project also focuses on functional building blocks for integration, such as the individual communications, sensor and power components. The overall application scenarios envisaged by the project are in 1) health and fitness, 2) aeronautics and space and 3) automotive.
4 Systems Examples Using Advanced Packaging Technology
4.1 Textile Integration
For ambient sensing on the human body, rigid electronics and electronic interconnecting cables are not suitable. A more acceptable solution would be the direct integration of the electronic sensors and circuitry in the clothing. As any rigid cable would interfere with the user’s movements and habits, integration with textile-based electrodes and wiring is preferred. Conductive yarn (See Fig. 5.20) has become available in recent years and has been demonstrated to provide low resistance, high reliability, and possibilities for high density interconnect [52]. Multi-threading offers a high degree of redundancy; this may ensure contact, even after multiple washing cycles and everyday wear and tear. Module interconnect can be obtained by pushbutton style mounting of electronic modules or by sewing the module to the fabric.
Fig. 5.20 Conductive yarn for textile inspired interconnects (e.g. [17])
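The redundancy argument above can be made concrete by treating the multi-threaded yarn as identical conductors in parallel: the link resistance rises gracefully as individual threads break, and the connection only opens when every thread has failed. The per-thread resistance and thread count below are hypothetical values for illustration:

```python
def bundle_resistance(r_thread_ohm, n_intact):
    """Resistance of n intact identical threads in parallel (ohms)."""
    if n_intact == 0:
        return float("inf")  # open circuit: every thread has broken
    return r_thread_ohm / n_intact

# Hypothetical 10-thread yarn at 5 ohm per thread: even after seven
# threads fail through washing and everyday wear, the link conducts.
print(bundle_resistance(5.0, 10))  # 0.5 ohm when fully intact
print(bundle_resistance(5.0, 3))   # ~1.67 ohm with three threads left
```

This graceful degradation is what allows a sewn interconnect to survive multiple washing cycles where a single conductor would fail open at the first break.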
Fraunhofer IZM and TITV Greiz have pioneered these approaches, in which miniature, non-obtrusive modules can be safely connected to a woven substrate (e.g. low-density routing using conductive yarn) by either of the interconnect techniques (See Fig. 5.21). To protect both the electronic module and the interconnect during, for example, washing and pressing, a thin encapsulation using an electronic mold compound (EMC) was applied to ruggedize the assemblies. The use of lamination (using duromeric polymer layers – See Fig. 5.22) over the sensitive area has also been evaluated. The reliable operation of these interconnects has been demonstrated successfully [53].
4.2 A system for monitoring in vineyards
Grape Networks in California has launched products based upon a variety of sensors (including humidity and temperature) with full ad-hoc networking capability, integrating them with a web-based infrastructure for global monitoring
Fig. 5.21 Sewn interconnects using conductive yarn to connect an electronic miniature module to a textile substrate (courtesy Fraunhofer IZM)
Fig. 5.22 Protection of sewn-on electronic module by lamination of a duromeric polymer (courtesy Fraunhofer IZM)
Fig. 5.23 Sensor node for a wireless ambient aware network (courtesy Grape Networks)
applications. Key customers include grape growers producing wine, who use the systems to ensure that the optimum supply of water is provided to the growing grapes (See Fig. 5.23) [54].
4.3 Long-range systems for environmental monitoring
SensorWare, a spin-out from NASA research, has developed a multi-sensory pod that provides long range communication [55, 56]. The required autonomy of the system is ensured by a solar cell on top of the sensor/communication pod or by an adequate battery unit. The commercial implementation is larger than the research prototype, as target deployments in Antarctica require the sensors to provide coverage over many miles (Fig. 5.24).
4.4 A high-density demonstrator
Fraunhofer IZM has demonstrated (using a 3D stacked package concept) the modular integration of a complete miniaturized sensing/transmission system [57] that, as a demonstrator, enables a golf ball (See Fig. 5.25) to communicate strike data to a remote system [58]. The core techniques used here were based on low power accelerometers fabricated by silicon surface micromachining, a proprietary signal conditioning circuit and a Bluetooth chip stack. The Bluetooth stack was selected for easy integration into an existing IT infrastructure, with data rates high enough to transmit the information to a PDA; it is not currently optimized for energy management.
Fig. 5.24 Sensor module for ambient sensing and communication, miniature and deployed version with solar energy supply (courtesy Sensor Ware Systems, Inc.)
Fig. 5.25 Smart Golf Ball insert, leveraging advanced packaging techniques for miniature ambient sensing (courtesy Fraunhofer IZM)
5 Conclusion
Recent developments in wireless and micro-sensor technologies have provided foundation platforms for the development of effective modular systems. They offer the prospect of flexibility in use and network scalability. Wireless sensor devices are the key hardware platforms required to construct the building blocks of wireless
sensor networks; their existence is the direct consequence of three key breakthroughs in microelectronics:
● Recent progress in very large scale integration (VLSI), moving towards nanotechnology, and in packaging technology, such as the chip scale package (CSP), led to the development of miniaturised, very low cost and low power microcontrollers.
● Advances in RF technology, in parallel with CMOS processing, resulted in the development of highly integrated, high performance RF front ends, leading to transceivers with on-chip integrated functional blocks.
● Microelectromechanical systems (MEMS) technology enabled the development of low power, low cost, highly miniaturised sensors that can potentially be integrated in a silicon substrate among other circuitry.
These advances in technology make possible the vision of highly integrated, inexpensive microsystems that are able to sample, process information and then communicate over short distances. While the electronic performance, size and cost of these micro-sensor devices might meet the demands of certain wireless sensor network applications, battery technology has not been able to keep pace with these advances and constitutes a bottleneck in the development of many other application areas.
References 1. P. H. Chou, Y.C. Chung, C.T. King, M.J. Tsai, B.J. Lee, and T.Y. Chou, “Wireless Sensor Networks for Debris Flow Observation,” in Proceedings of the 2nd International Conference on Urban Disaster Reduction (ICUDR), Taipei, Taiwan, November 27–29, 2007 2. J.P. Clech et al., “Surface mount assembly failure statistics and failure free time”, ECTC 1994, pp. 487–497 3. Crossbow Inc - http://www.xbow.com/ 4. MoteIV Corporation (now Sentilla) - http://www.sentilla.com/ 5. Dust, Inc - http://www.dust-inc.com/ 6. Phidgets, Inc - http://www.phidgets.com/ 7. Meshnetics - http://www.meshnetics.com/ 8. Sensicast - http://www.sensicast.com/wireless_sensors.php 9. AccSense - http://www.accsense.com/ 10. Millennial Net - http://www.millennial.net/products/meshscape.asp 11. Ember - http://www.ember.com/products_index.html 12. On-World WSN Report, 2005 13. B. Majeed et al, “Microstructural, Mechanical, Fractural and Electrical Characterisation of Thinned and Singulated Silicon Test Die”, J. Micromech. Microeng. Volume 16, Number 8, August 2006 pp. 1519–1529 14. J. Barton et al, “25mm sensor–actuator layer: A miniature, highly adaptable interface layer”, Sensors and Actuators A 132 (2006), pp. 362–369, November 2006 15. S. J. Bellis et al, “Development of field programmable modular wireless sensor network nodes for ambient systems”, Computer Communications - Special Issue on Wireless Sensor Networks and Applications, Volume 28, Issue 13, pp. 1531–1544. (Aug 2005) 16. B. O’Flynn et al, “A 3-D Miniaturised Programmable Transceiver”, Microelectronics International, Volume 22, Number 2, 2005, pp. 8–12, (Feb 2005)
17. http://www.textile-wire.ch/downloads/neu_textile_wire_doc_de.pdf 18. S. Stoukatch et al, “3D-SIP Integration for Autonomous Sensor Nodes”, Proc. ECTC 2006, Sheraton San Diego, San Diego, California, May 30- June 2, 2006, pp. 404–408 19. K. Lorincz, et al, “Sensor networks for emergency response: challenges and opportunities”, IEEE Pervasive Computing, Volume 3, Issue 4, Oct-Dec 2004, pp. 16–23 20. B. Lo et al, “Architecture for Body Sensor Networks”, Proc. Perspective in Pervasive Computing Conference, October, 2005, pp. 23–28 21. D. Mclntire et al, “The low power energy aware processing (LEAP) embedded networked sensor system”, Proc.s Fifth International Conference on Information Processing in Sensor Networks (IPSN 2006), 19–21 April 2006, pp. 449–57 22. J. Polastre, R. Szewczyk, and D. Culler, “Telos: enabling ultra-low power wireless research,” in Information Processing in Sensor Networks, IPSN 15-April-2005, Page(s) pp. 364–369 23. H. Gellersen et al, “Physical prototyping with Smart-Its” IEEE Pervasive Computing, Volume 3, Issue 3, July-Sept. 2004 pp. 74–82 24. J. Beutel et al, “Prototyping Wireless Sensor Network Applications with Btnodes,” Proc. 1st European Workshop on Sensor Networks (EWSN 2004), pp. 323–338. 25. E. M. Tapia, et al, “MITes: wireless portable sensors for studying behavior,” in Proceedings of Extended Abstracts Ubicomp 2004: Ubiquitous Computing, 2004. 26. N. Edmonds et al, “MASS: modular architecture for sensor systems” Proc. Fourth International Symposium on Information Processing in Sensor Networks, 2005. IPSN 2005. pp. 393–397 27. D. Lymberopoulos et al, “XYZ: a motion-enabled, power aware sensor node platform for distributed sensor network applications” Proc. Fourth International Symposium on Information Processing in Sensor Networks, 2005. IPSN 2005, pp. 449–454 28. H. Dubois-Ferriere et al, “TinyNode: a comprehensive platform for wireless sensor network applications”, Proc. 
Fifth International Conference on Information Processing in Sensor Networks, 2006. IPSN 2006, pp. 358–365. 29. L. Nachman et al, “The Intel mote platform: a Bluetooth-based sensor network for industrial monitoring”, Proc. Fourth International Symposium on Information Processing in Sensor Networks, 2005. IPSN 2005, pp. 437–442 30. P. van der Stok, “State of the art,”IST-034963, WASP. Deliverable D1.2: Mar.2007 31. B. Kurtis and T. Dishongh, “SHIMMER: Hardware Guide,”Intel Digital Health Group, Version 1.3, Oct.2006. 32. ETH-TK, Computer Engineering and Networks Laboratory. http://www.tik.ee.ethz.ch/ 33. Art of Technology, Art of Technology AG website. http://www.art-of-technology.ch/english/ index.html 34. Tyndall National Institute: http://tyndall.ie/ 35. S. Harte et al, “Design and Implementation of a Miniaturised, Low Power Wireless Sensor Node”, Proc. 18th Euro. Conf. Circuit Theory and Design, Seville, Spain, August 26-30th, 2007, pp. 894–897 36. J.M. Kahn et al, “Next century challenges: Mobile networking for smart dust”, In Proc. 5th ACM/IEEE Ann. Int’l Conf. Mobile Computing and Networking (MobiCom ‘99), pages 271–278. ACM Press, New York, August 1999. 37. B. Warneke et al, “Smart dust: Communicating with a cubic-millimeter computer,” Computer, vol. 34, no. 1, p. 44–51, Jan.2001. 38. The University of Berkely, Smart Dust project website. http://www-bsac.eecs.berkeley.edu/ archive/users/warneke-brett/SmartDust/index.html. 39. J.L. Hill et al, “System architecture directions for networked sensors”, In Proc. 9th Int’l Conf. Architectural Support Programming Languages and Operating Systems (ASPLOSIX), pages 93–104. ACM Press, New York, November 2000 40. P. Levis et al, “Ambient Intelligence”, chapter TinyOS: An Operating System for Sensor Networks, pages 115–148. Springer, Berlin, 2005. 41. http://www.tinyos.net/ 42. J. Hill and D. Culler, “Mica: A Wireless Platform for Deeply Embedded Networks”, IEEE Micro., vol. 22(6), Nov/Dec 2002, pp. 12–24.
128
J. Barton, E. Jung
43. M. J. Sailor et al, “Smart dust: nanostructured devices in a grain of sand”, Chemical Communications, vol. 11, p. 1375, 2005 44. T. Hogg and B. A. Huberman, “Controlling smart matter”, Smart Mater. Struct. 7 No 1 (February 1998) R1–R14 45. www.parc.com/research/subtheme.php?subtheme=Smart+Matter+Integrated+Systems 46. www.parc.com/research/projects/ecc/collaborative_sensing.html 47. M. Jürgen Wolf, “The e-Grain Concept - Microsystem Technologies for Wireless Sensor Networks”, Advanced Microsystem Packaging Symposium, April 7th, 2005, Tokyo, Japan 48. M. Niedermayer, et al, “Miniaturization platform for wireless sensor nodes based on 3D-packaging technologies”, Proc. Fifth International Conference on Information Processing in Sensor Networks, 2006. IPSN 2006. pp. 391–398 49. http://www.e-grain.org/ 50. V. Kripesh, “Silicon Substrate Technology for SiP Modules”, EMC 3D Technical Seminar, Munich, Jan 2007 51. The eCubes Project: http://ecubes.epfl.ch/public/ 52. T. Linz et al., “Embroidering electrical interconnects with conductive yarn for the integration of flexible electronic modules into fabric”, Wearable Computers, 2005, pp. 86–89 53. T. Linz et al., “Contactless EMG sensors embroidered onto textile”, Proc. Of 4th International Workshop on Wearable and Implantable Body Sensor Networks, Berlin, 2007, pp. 29–34 54. J. Burrell et al., “Vineyard computing: sensor networks in agricultural production”, Pervasive Computing Volume: 3, Issue: 1, 2004, pp. 38–45 55. K. Delin et al., “The Sensor Web: A New Instrument Concept”, SPIE’s Symposium on Integrated Optics, 20–26 January 2001, San Jose, CA 56. K. Delin et al., “Sensor Web for Spatio-Temporal Monitoring of a Hydrological Environment”, Proceedings of the 35th Lunar and Planetary Science Conference, League City, TX, March 2004 57. M. Niedermayer et al., “Miniaturization platform for wireless sensor nodes based on 3D-packaging technologies”, Information Processing In Sensor Networks, SPOTS 06, Nashville, 2006, pp. 
391–398 58. K.D. Lang et al., Industrially compatible PCB stacking technology for miniaturized sensor systems”, EPTC 2005, Singapore, 2005, pp. 6–10 59. R. Min et al, “An architecture for a power-aware distributed microsensor node”, in Workshop on Signal Processing Systems, SiPS 13-October-2000, Page(s) pp. 581–590 60. J. Hightower et al, “SpotON: An indoor 3D location sensing technology based on RF signal strength”, University of Washington, Department of Computer Science and Engineering, Seattle, WA, 2000 61. A. Chen et al, “A support infrastructure for the smart kindergarten”, IEEE Pervasive Computing, vol. 1, no. 2, pp. 49–57, June 2002. 62. A. Sawides and M. B. Srivastava, “ A distributed computation platform for wireless embedded sensing”, in International Conference on Computer Design: VLSI in Computers and Processors, ICCD 16-September-2002, Page(s) pp. 220–225 63. S. Saruwatari et al, “PAVENET: A Hardware and Software Framework for Wireless Sensor Networks”, Transactions of the Society of Instrument and Control Engineers, vol. E-S-1, no. 1, pp. 76–84, Nov. 2004. 64. J. L. Hill, “System architecture for wireless sensor networks.” PhD thesis is Computer Science, University of California, Berkeley, 2003. 65. RFRAIN: RF random access integrated nodewww.media.mit.edu/resenv/rfrain/index.html 66. H. Abrach, S. Bhatti, J. Carlson, H. Dai, J. Rose, A. Sheth, B. Shucker, J. Deng, and R. Han, “MANTIS: system support for multimodAl NeTworks of in-situ sensors,” in International Workshop on Wireless Sensor Networks and Applications, IWWNA 2003, Page(s) pp. 50–59 67. A. Banerjee et al, “RISE - Co-S: high performance sensor storage and Co-processing architecture”, in IEEE Communications Society Conference on Sensors and Ad Hoc Communications and Networks, IEEE SECON 29-September-2005, Page(s) pp. 1–12.
5 Distributed, Embedded Sensor and Actuator Platforms
129
68. Teco, “Selection of Smart-Its Particle Prototypes, Sensor and Add-On Boards”. http://particle. teco.edu/devices/devices.html. 69. L. Laibowitz and J. A. Paradiso, “Parasitic mobility for pervasive sensor network”, in International Conference on Pervasive Computing, PERVASIVE May-2005, Page(s) pp. 255–278. 70. V. Shnayder et al, “Sensor networks for medical care,”Technical Report TR-08-05, Division of Engineering and Applied Sciences, Harvard University, 2005. 71. EnOcean Transceiver Module TCM120 datasheet. http://www.enocean.com/php/upload/pdf/ DB_ENG7.pdf. Last accessed: 21-7-0007 72. S. Blom et al, “Transmission Power Measurements for Wireless Sensor Nodes and their Relationship to the Battery Level”, in International Symposium on Wireless Communications Systems 7-September-2005. 73. R. M. Kling, “Intel Motes: advanced sensor network platforms and applications”, in MTT-S International Microwave Symposium Digest, MWSYM June-2005, pp. 4. 74. M. Bigl et al, “The uPart experience: building a wireless sensor network”, in International Conference on Information Processing in Sensor Networks, IPSN 21-April-2006, Page(s) pp. 366–373. 75. Moteiv wireless sensor networks, Tmote Sky datasheet. http://www.moteiv.com/products/ docs/tmote-sky-datasheet.pdf. 76. D. K. Arvind and K. J. Wong, “Speckled computing: disruptive technology for networked information appliances”, in International Symposium on Consumer Electronics 3-September2004, Page(s) pp. 219–223. 77. P. Sikka et al, “Wireless sensor devices for animal tracking and control”, in International Conference on Local Computer Networks, LCN 18-November-2004, Page(s) pp. 446–454. 78. Sun Microsystems Laboratory, “Sun SPOT system: Turning vision into reality”,2005. 79. R. Mangharam et al, “Voice over sensor networks,” in International Real-Time Systems Symposium, RTSS December-2006, Page(s) pp. 291–302. 80. Sensinode Inc,Sensinode: Micro hardware manual. http://www.sensinode.com/pdfs/ sensinode-manual-hw.pdf. 81. M. 
Baar et al, “Poster Abstract: The ScatterWeb MSB-430 platform for wireless sensor networks”, in The Contiki Hands-On Workshop March-2007. 82. SquidBee Open Hardware and Source, SquidBee datasheet. http://www.libelium.com/squidbee/upload/c/c1/SquidBeeDataSheet.pdf. 83. O. W. Visser, “Localisation in large-scale outdoor wireless sensor networks”, Master’s Thesis in Computer Science, Delft Univeristy of Technology, 2005. 84. Center of Excellence for Embedded Systems Applied ResearchLucerne University of applied Sciences, Datasheet WeBee Three. http://www.ceesar.ch/cms/upload/pdf/projects/Datasheet% 20WeBee%20Three.pdf 85. Crossbow Technologies Inc, IRIS datasheet. http://www.xbow.com/Products/Product_pdf_ files/Wireless_pdf/IRIS_Datasheet.pdf. 86. M. Ouwerkerk et al, “SAND: a modular application development platform for miniature wireless sensors”, in International Workshop on Wearable and Implantable Body Sensor Networks, BSN 2006, Page(s) pp. 166–170. 87. S. Yamashita et al, “A 15 × 15 mm, 1 uA, reliable sensor-net module: enabling applicationspecific nodes”, in International Conference on Information Processing in Sensor Networks, SPOTS 21-April-2006, Page(s) pp. 383–390. 88. T. Hammel and M. Rich, “A higher capability sensor node platform suitable for demanding applications”, in International Conference on Information Processing in Sensor Networks, IPSN 27-April-2007, Page(s) pp. 138–147. 89. D. Lymberopoulos et al, “mPlatform: a reconfigurable architecture and efficient data sharing mechanism for modular sensor nodes”, in International Conference on Information Processing in Sensor Networks, IPSN 27-April-2007, Page(s) p. 137. 90. K. Sweiger et al, “SPIDER-NET: a sensor platform for an intelligent ad-hoc wireless relaying network”, in International Conference on Mobile Computing and Networking, MobiCom 2004, Page(s) pp. 125–126
Chapter 6
Embedded Microelectronic Subsystems

John Barton
Abstract This chapter explores embedded microelectronic sub-systems, beginning with a definition of microelectronics packaging. Increasing the packaging density of electronic products, through techniques such as integral substrates and advanced interconnect, is a key issue. This challenge must be addressed through electronic packaging in order to meet consumers' demand for light-weight, compact, reliable and multifunctional electronic and communication devices. The technological advances driven by this consumer demand, particularly 3-D packaging, can also enable concepts such as smart objects, smart spaces and augmented materials. This chapter provides a concise review of selected areas in 3-D packaging and then focuses upon two areas that may provide the type of flexibility and density required for future high-volume smart object development: folded flex packaging and chip-in-laminate interconnect.

Keywords Interconnection, Packaging, Miniaturisation, System-in-Package, Folded-flex, Wafer-level Packaging, Flip-chip, Chip-in-Laminate, Silicon Thinning
1 Introduction

1.1 The Function of the Package
John Barton, Tyndall National Institute, Lee Maltings, Prospect Row, Cork, Ireland

The function of the 'package' in microelectronics packaging is to provide mechanical support and environmental protection for an IC, or ICs, and for their interconnections to each other; it must also provide a means to transfer signals and data to the next 'packaging' level [1]. To function, electrical circuits must be supplied with electrical energy, which must either be transferred onward or consumed, meaning it is converted into thermal energy. The major functions that a 'package' is required to fulfill in an electronic circuit design are:

● Signal distribution, which primarily involves consideration of topological and electromagnetic issues.
● Power distribution, which includes consideration of electromagnetic, structural and material performance.
● Heat dissipation (and cooling), which considers both structural and materials constraints.
● Protection (mechanical, chemical and electromagnetic) of the components and interconnection in the package itself.
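Of these functions, heat dissipation lends itself to a quick first-order check: a junction-temperature estimate from lumped thermal resistances. A minimal sketch, with illustrative theta values that are assumptions, not figures from this chapter:

```python
# First-order junction-temperature estimate for a packaged IC.
# Thermal resistance values are illustrative assumptions, not from the text.
def junction_temp(p_diss_w, t_ambient_c, theta_jc, theta_ca):
    """T_j = T_a + P * (theta_jc + theta_ca); thermal resistances in K/W."""
    return t_ambient_c + p_diss_w * (theta_jc + theta_ca)

# Example: a 2 W device, 25 degC ambient, 5 K/W junction-to-case,
# 20 K/W case-to-ambient (e.g. a small heat sink).
tj = junction_temp(2.0, 25.0, 5.0, 20.0)
print(f"Estimated junction temperature: {tj:.0f} degC")  # prints 75
```

Estimates of this kind drive the structural and materials choices in the package, since the allowable junction temperature bounds the acceptable total thermal resistance.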
An example of the packaging hierarchy [1], including an archetypal first-level (single-chip) package, is shown in Fig. 6.1. Innovation in assembly and packaging is accelerating in response to the fact that packaging is now the limiting factor in cost and performance for many types of devices. Difficult challenges exist, in the short term, in all phases of the assembly and packaging process, from design through manufacturing to test and reliability. Many critical technology targets have yet to be met, and achieving these targets will require significant investment in research and development. According to the 2007 ITRS roadmap [2], some of the most difficult short-term challenges in assembly and packaging technology are:
Fig. 6.1 Packaging hierarchy [R. Tummala et al]: wafer, chip, first-level package (single-chip module, COB, or multichip module), second-level package (PCB or card) and third-level package (motherboard)
1. 3D packaging, which includes bumpless interconnect architectures, thermal management, wafer-wafer bonding, through-wafer via structures and the via fill process.
2. Small die with high pad count and high current density packages, which includes electromigration, thermal/mechanical reliability modeling, whisker growth and thermal dissipation issues.
3. Flexible system packaging, which includes low cost interconnection, as well as small and thin die assembly.
2 3D Packaging

Assembly and packaging technology requirements are being driven as much by rapidly changing market requirements as by the advancing generations of silicon technology. New package types are evolving in response to the demand for smaller, thinner and lighter electronic products, particularly for the rapidly expanding consumer market. Wafer-Level Packaging (WLP) and System-in-Package (SiP) are two new packaging categories that require the implementation of new and complex manufacturing technologies, with significant infrastructural investment. Wafer-Level Packaging [3, 4, 5], where the packaging functions are achieved through wafer-level processing, holds the promise of lower cost and improved performance for single-die packages (see Fig. 6.2). System-in-Package (see Fig. 6.3), where system integration is achieved in die packaging, enables the smaller size, lower cost, higher performance and shorter time-to-market demanded for consumer electronics [6, 7, 8]. These two package types represent paradigm shifts that, with further advancement, will in turn deliver a suite of future technologies to meet both the demands of future market applications (technology pull) and of advancing semiconductor technology generations (technology push).

In a recent strategy document for European nanotechnology research [10], 3D integration is clearly cited as one of the leading methods for complex multifunctional systems. In the document, a number of technologies are named as priorities for research, including: wafer thinning, dicing, handling, thin interconnect, chip-to-wafer integration, low-temperature wafer bonding technologies, technologies for functional polymer layers (including chip-in-thin-film layers and interconnection by wafer transfer), functional layer integration (actuators, sensors, antennas, lenses), as well as 3D integration (vertical chip integration, through-silicon vias, power vias, etc.).

Fig. 6.2 Schematics of wafer-level packaging, showing different wafers stacked on top of each other, with interconnection formed through the via holes in the wafers [4]

Fig. 6.3 Chip-level stacking, showing three chips stacked and interconnection made through the wire bonding [9]

Several companies and research institutes around the world are currently working on the research and development of 3D integrated systems. Besides approaches based on the fabrication of multiple device layers, using re-crystallization or silicon epitaxial growth, there is a large spectrum of technological concepts, which can be classified in three categories: (1) stacking of packages, (2) chip stacking, and (3) vertical system integration by wafer stacking or chip-to-wafer stacking.

A number of current concepts, which surpass conventional multichip module (MCM) technology, are based upon stacking fully processed and packaged devices and providing their interconnections via side-wall contacts. Since the early 1990s, associated techniques have been applied, among others, by companies such as Irvine Sensors [11] in co-operation with IBM. In the European project TRIMOD, a vertical multichip-module concept, or MCM-V, was developed by the NMRC (now Tyndall), together with Alcatel Espace and Thomson CSF (now Thales) [12]. This technology is currently industrialized by the company 3D-Plus, primarily for military, space and aeronautics applications [13].

The wafer-level 3D packaging technique uses stacking of die on a substrate wafer, embedding and interconnecting the thinned die with modified multilayer thin-film wiring. The Japanese Association of Super-Advanced Electronics Technologies (ASET) researched wafer-level large-scale integration (LSI) chip stacking technologies with the goal of interconnecting more than 5 device layers by electroplated via holes (20 µm diameter) [14].
In Germany, Fraunhofer, in co-operation with Infineon, developed wafer-level 3D integration based upon the adjusted bonding and vertical metallization of completely processed device substrates, without interfering with the basic IC process. The Vertical System Integration (VSI®) concept provides very high density vertical wiring between thinned device substrates, based upon CVD-metallized inter-chip vias (see Fig. 6.4) that can be placed in arbitrary positions (1–2 µm diameter) [15].
Fig. 6.4 Inter-chip vias filled with MOCVD of TiN and Tungsten from Fraunhofer
Fig. 6.5 High aspect ratio (1:5) copper interconnections, 20 µm wide and 100 µm tall with a pitch of 40 µm, from Georgia Tech
Copper-to-copper thermo-compression bonding is a commonly used technology for wafer-level stacking. The technology has been investigated by MIT [16], Georgia Tech (as shown in Fig. 6.5 [17]) and Tezzaron [18]. Corresponding patterns of copper pads are exposed on the two surfaces to be bonded, and the bonding is usually performed at a temperature of 350–400°C, with an applied pressure of 0.3–0.4 MPa. The desired pad size is 4 × 4 µm², and the minimum pitch is 6.08 µm. The capacitance of the feed-throughs is 7 fF and the series resistance is below 0.25 Ω.

As an alternative to copper metallization, the ZyCube company uses indium-gold (In-Au) micro-bumps. In the stacking process, wafers with buried interconnect are thinned down to 20–150 µm by chemical-mechanical polishing (CMP). In-Au micro-bumps are formed on the surface by a lift-off technique. The bumped wafer is aligned to the substrate wafer within a custom-built aligner and temporarily bonded by the In-Au bumps. For the final bond strength, a liquid epoxy adhesive is injected into the gap between the two wafers in a vacuum chamber. The contact resistance of a bump, with an area of 10 µm², was less than 0.1 Ω [19]. The ZyCube technology (see Fig. 6.6) was developed in collaboration with Tohoku University, Japan, and license agreements for the technology are available.

Fig. 6.6 Cross-sectional sketch of ZyCube's technology

Nickel (Ni) is yet another alternative metal. CEA-LETI, together with the French smart card manufacturer AXALTO, has developed a technology named 'micro-inserts', or 'micro-bumps', for interconnecting smart cards (see Fig. 6.7). A set of 3 µm thick, electroplated Ni pillars is used to connect aluminium (Al) pads on both wafers. Both polyimide and epoxy have been used for the bonding. The bonding temperature is either 100 or 350°C, depending upon the chosen material. A bonding pressure of 0.5 MPa was applied. To date, pad dimensions as small as 20 µm square have been created and 2 µm diameter 'micro-inserts' have been reported [20, 21].
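The quoted feed-through parasitics put the electrical cost of wafer-level stacking into perspective; a back-of-the-envelope RC estimate (the 7 fF and 0.25 Ω figures are from the text, the interpretation as a delay budget is ours):

```python
# Order-of-magnitude RC delay of one inter-wafer copper feed-through,
# using the values quoted in the text for Cu-Cu thermo-compression bonding.
R_VIA = 0.25   # ohm, upper bound on series resistance (from the text)
C_VIA = 7e-15  # farad, feed-through capacitance (from the text)

tau_s = R_VIA * C_VIA
print(f"RC time constant per feed-through: {tau_s * 1e15:.2f} fs")  # 1.75 fs

# Even a chain of 1000 such feed-throughs contributes only ~2 ps,
# far below typical on-chip gate delays.
print(f"1000-via chain: {1000 * tau_s * 1e12:.2f} ps")
```

This is why vertical integration is attractive electrically: the inter-layer links are essentially free compared with long on-chip wires.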
3 Folding Flex

3D packaging has emerged as a high-potential solution for advanced electronics applications. It offers the potential for increased system integration density, reduced development cost and reduced interconnect length. Many innovative and exciting 3D technologies have been developed in the past decade, some of which have been described in the previous section. Chip- or package-stacking techniques use standard fabrication facilities, which makes them very attractive to production companies. However, each chip has to be processed one at a time and, currently, only peripheral connections are possible, limiting the overall vertical interconnection capability. Embedded chip packaging allows 3D formation with higher integration density; however, there are issues relating to thermal performance and high development costs.
Fig. 6.7 The ‘Micro-insert’ processing sequence and flip chip structure
Wafer-level packaging allows the highest level of vertical integration, but challenges exist in obtaining a reliable process at relatively low cost and with short production times. Most of the stacked, embedded and wafer-level technologies can be classified as chip-based technologies. These require either uniform chip sizes or specific process flows, depending upon the application. However, a folded flex process, employed as a substrate-based technology, offers the potential to be applied in heterogeneous 3-D formats and can easily be adopted for many chip designs and sizes. One of the main advantages of this technology is that it allows for easy prototyping, with a lower associated development time. From the perspective of ambient electronics systems, the prospect of adopting this technology offers a very viable and interesting option. Using flexible substrates in combination with thin silicon could, for example, result in a very thin profile package that can be embedded in non-conformal objects. In addition, the technology can be used to develop subsystems, or a technological platform, to investigate assembly and characterization issues for different materials.

In order to investigate the assembly issues and complete a material behaviour characterization, the first step is a full analysis of current state-of-the-art folded flex technologies and prototype modules. A closer look (excluding the solder balls) at the different components and materials that make up a folded flex assembly is illustrated in Fig. 6.8 [22, 23]. It shows a typical die thickness of 114 microns, a substrate thickness of 126 microns (two metal layers), a wire-bond loop height of 80 microns, and a combined thickness due to bond-pads, molding materials and adhesives of 145 microns. It is clear that further reduction in profile could be achieved by employing different interconnection technologies, such as flip-chip, and by reducing the thickness of the die and the substrate. For example, prior to flip-chip assembly, silicon bare die can be thinned down to 50 microns (or lower); note that the flexible substrate needs to be as thin as possible to achieve the minimum folded package profile.
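The thickness figures above can be turned into a simple profile budget; a sketch assuming the quoted layer values are simply additive (the layer names and the thinning scenario are our illustration):

```python
# Thickness budget for one layer of a folded-flex assembly, using the
# values quoted in the text (in microns; solder balls excluded).
layers_um = {
    "die": 114,
    "substrate_2_metal": 126,
    "wire_bond_loop": 80,
    "pads_molding_adhesives": 145,
}
total_um = sum(layers_um.values())
print(f"Single-layer profile: {total_um} um ({total_um / 1000:.3f} mm)")

# Swapping wire bonds for flip chip and thinning the die to 50 um
# (both suggested in the text) removes the loop height and 64 um of silicon:
thinned_um = total_um - layers_um["wire_bond_loop"] - (114 - 50)
print(f"Thinned flip-chip profile: {thinned_um} um")
```

The budget makes the text's point concrete: the wire-bond loop and the die itself are the two largest levers for profile reduction.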
138
J. Barton
Fig. 6.8 Detail of the different components in a folded chip assembly [22, 23]: bond line (25 µm), wire-to-mold-top clearance (70 µm), wire loop (80 µm), thin die (114 µm), elastomer (50 µm), 2-metal substrate (126 µm) and solder ball height (230 µm or lower), giving an overall profile of approximately 0.8 mm
At Tyndall National Institute, extensive work has been carried out to further reduce the profile of a folded flex module by investigating different material behaviours, processes and assembly sequences [24–27]. These tasks are divided into four strands: (1) the flexible substrate, (2) a silicon thinning process, (3) a flip-chip interconnect technology and (4) 3D packaging.
3.1 Flexible Substrates

This section describes the investigation, development and characterization steps relevant to realizing a thin flexible substrate. The development process commenced with the formation of flexible substrate layers on 4-inch test wafers, using polyimide. The thickness of the layers can be varied from 16 µm down to 3 µm. This was followed by an experimental sequence in which, initially, a number of different release methods were tested for their effectiveness in removing high-integrity flexible layers; the performance of the flexible substrates derived from this sequence was then evaluated through a characterization program that included electrical, chemical, moisture and mechanical testing. A global description of the experimental and characterization sequence for this program is shown in the schematic in Fig. 6.9 [24]. Two different flexible materials were analyzed; the first substrate was a 'control' material, a commercially available 25-micron thick polyimide with 5 microns of copper metal, while the second substrate was fabricated in-house on a 4-inch wafer, varying the polyimide thickness from 3 to 17 microns and depositing 4 microns of sputtered copper. A number of separation techniques, designed to release the flex from a carrier wafer, were investigated. It was concluded that the optimum solution was to use a laser ablation technique, targeting the interface between the polyimide and a quartz carrier wafer. Electrical and chemical analyses demonstrated that the in-house materials matched the characteristics of commercial polyimide. Stresses generated in the in-house thin flex increased with increasing polyimide thickness, but the increase was determined to be negligible. Mechanical characterization showed that, for the in-house flex, the tensile strength and Young's Modulus
Fig. 6.9 Schematic showing the experimental and characterization work for flexible substrates: fabrication of the flexible substrate; release techniques (chemical, mechanical, laser); and characterization (chemical, electrical, humidity, mechanical)

Fig. 6.10 A 3.9 µm thick flexible substrate after release, showing significant wrinkling
changed very little with varying polyimide thickness, while elongation at break decreased proportionally with decreasing thickness. It was observed that when the polyimide thickness decreased below 10 microns, the stiffness of the polyimide dropped off very dramatically and the flex wrinkled (see Fig. 6.10). The cause of this wrinkling was attributed to the stress generated by the copper sputtering process: below 10 microns, the stiffness of the polyimide is not high enough to overcome the driving force due to the copper, so a wrinkle-free substrate cannot be maintained, whereas thicker polyimide has enough stiffness to resist wrinkling. To address the problem of wrinkling for thin flex, a polymer ring was placed around the circuit; this resulted in a flexible, wrinkle-free 4-micron polyimide substrate with 4 microns of copper. The work showed that as the thickness of the substrate material decreases, handling and processing issues become more pronounced. A compromise has to be reached between the advantages of reducing the thickness of the flex and the disadvantages of extra processing issues due to wrinkling and the increased handling difficulties.
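The abrupt onset of wrinkling below 10 microns is consistent with plate mechanics, where bending stiffness scales with the cube of thickness; a sketch using typical polyimide properties (the E and nu values are assumptions for illustration, not measured in this work):

```python
# Plate bending stiffness D = E * t^3 / (12 * (1 - nu^2)) scales with the
# cube of thickness, which is why the polyimide's resistance to the
# copper-induced wrinkling collapses so quickly below ~10 um.
E_PA = 2.5e9  # Young's modulus of polyimide, Pa (assumed typical value)
NU = 0.34     # Poisson's ratio (assumed typical value)

def bending_stiffness(t_m):
    """Flexural rigidity of a plate of thickness t_m (metres), in N*m."""
    return E_PA * t_m**3 / (12 * (1 - NU**2))

d10 = bending_stiffness(10e-6)  # 10 um polyimide
d4 = bending_stiffness(4e-6)    # 4 um polyimide
print(f"D(10 um) / D(4 um) = {d10 / d4:.1f}")  # prints 15.6
```

A modest 2.5x reduction in thickness costs nearly 16x in stiffness, which matches the observation that thicker polyimide resists wrinkling while thinner layers cannot.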
3.2 The Silicon Thinning Process
In many emerging applications, electronic products must literally be flexible. An electronic product may be folded and twisted so that it can fit into a very limited, or confined, space, or it may need to be flexible in the course of its normal usage. To meet the increasingly stringent requirements set by industry, research on different aspects of flexible substrates will most likely continue to grow significantly, including newer methods for developing thinner substrates, as well as better dielectric materials. From a processing perspective, the thinning of silicon is becoming normal practice. Given the ever-increasing demand for miniaturisation, it is important that the processes involved in thinning (their advantages, disadvantages and limitations) are fully understood and characterised. A comprehensive review of different silicon wafer thinning techniques was undertaken, including mechanical grinding, chemical mechanical polishing (CMP), dry etching and wet etching. Silicon test wafers of different thicknesses (specifically 525, 250, 100 and 50 microns) were thinned using mechanical grinding. The wafers were diced, using both specialized dicing saws and lasers, in order to study the effect of singulation. It was concluded from microscopy and the mechanical characterization that, as a result of the damage done by each laser pulse, laser dicing had an adverse effect on the silicon chip itself. Surface and edge microscopy showed that the back surface was very smooth, but there was evidence of chipping on the top edge of the thinned chips. The chipping did not have any sharp notches and was not deep enough to cause notably adverse effects on the mechanical properties. The mechanical properties were calculated through a 3-point bend test, and from this data the statistical behaviour of the silicon was identified.
Using average values (in this discussion), it was concluded that the load required to break the chip decreased linearly as the chip thickness was reduced. However, the low level of force required to break a thin IC means that handling becomes very critical; this occurs for ICs in the range of 100 microns or less. At the same time, the fracture stress increased with reducing thickness, indicating that a thin chip can potentially handle much higher stresses during packaging and the subsequent application. The increase in flexibility, indicated by the decrease in radius of curvature with falling chip thickness, means that, to a certain extent, thin chips could adopt a non-planar format. Average flaw size was calculated from the fracture strength values and it was noted that flaw size decreased with decreasing chip thickness. This was confirmed by AFM results, where the surface roughness was seen to decrease with decreasing chip thickness [25]. Both Weibull and lognormal models were fitted to the experimental strength data. The Weibull modulus for all the data was found to be between 3 and 5, which is characteristic of brittle materials. The effectiveness-of-fit was measured with an information complexity criterion, which showed that the lognormal model provides better results than the Weibull model. However, the sample size can have a crucial effect on this preference for the lognormal model.
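The 3-point bend analysis can be sketched as follows: flexural strength from the standard beam formula, then a graphical (Weibull-plot) estimate of the Weibull modulus. The break loads and specimen geometry below are synthetic illustration data, not the chapter's measurements:

```python
# Flexural strength from a 3-point bend test, sigma = 3*F*L / (2*b*t^2),
# followed by a Weibull-plot (linearized) estimate of the Weibull modulus m.
import math

def flexural_strength(force_n, span_m, width_m, thick_m):
    """Peak tensile stress (Pa) at failure in a 3-point bend test."""
    return 3 * force_n * span_m / (2 * width_m * thick_m**2)

# Synthetic break loads (N) for 50 um thick, 5 mm wide dies on a 10 mm span.
loads = [0.21, 0.26, 0.30, 0.33, 0.37, 0.42, 0.48]
strengths = sorted(flexural_strength(f, 10e-3, 5e-3, 50e-6) for f in loads)

# Weibull plot: ln(-ln(1-F)) vs ln(sigma) is linear with slope m,
# using median-rank plotting positions F_i = (i + 0.5) / n.
n = len(strengths)
xs = [math.log(s) for s in strengths]
ys = [math.log(-math.log(1 - (i + 0.5) / n)) for i in range(n)]
mx, my = sum(xs) / n, sum(ys) / n
m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
# For this synthetic data m lands in the 3-5 range the text cites as
# characteristic of brittle materials.
print(f"Estimated Weibull modulus m = {m:.1f}")
```

A maximum-likelihood fit (e.g. with `scipy.stats.weibull_min.fit`) would be the more rigorous route, and comparing its log-likelihood against a lognormal fit via an information criterion mirrors the chapter's model-selection step.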
The strain rate had no effect upon the fracture strength of the samples, leading to the conclusion that there was no slow crack growth in the test samples. The strength dispersion was found to be low for thin silicon samples, which illustrates that thin silicon is more flexible and has higher fracture strength [26]. Fractured samples were macroscopically examined for different types of failure mode (see Fig. 6.11), including clean fracture due to low force and shattering due to high force. From SEM images of failed samples (see Fig. 6.12), it was concluded that the pattern of crack generation indicated that the stress field, rather than crystallographic parameters, primarily controlled the growth of cracks in the cleavage plane. Electrical parameters, such as the diode forward voltage and the reverse-biased current, were investigated for different chip thicknesses, and the results showed that the thinning process had no adverse effect on these parameters. From the I-V measurements,
Fig. 6.11 Optical microscopic study indicating different types of failure in (a) 525 µm and (b) 50 µm test dies
Fig. 6.12 An SEM micrograph of a 250 µm die that shattered during the three-point bend test
Fig. 6.13 Experimental and analytically determined bow values for a 50 µm wafer
a novel method to characterize the process-induced stress during thinning, based on the band-gap narrowing effect, was investigated. The active surface of the wafer was found to be in tensile stress, and the stress values are significantly lower than the fracture strength. The differences in stress values between wafers of different thicknesses were correlated with the thinning process and the growth of silicon dioxide on the back surface of the wafer. Non-linear plate-theory-based analytical calculations were performed to determine the bow at wafer level (see Fig. 6.13). The calculated wafer bow was in accordance with values reported in the literature for similar processes, so it can be concluded that accurate I-V measurements and non-linear plate theory can be used to approximately calculate the wafer bow [27]. The results showed that the thinning process induced very little stress on the chip.
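As a first-order cross-check on this kind of bow calculation, Stoney's linear approximation relates film stress to wafer curvature. The chapter's authors used non-linear plate theory; the sketch below is the standard small-deflection result, and all numerical values in it are illustrative assumptions:

```python
# Film stress from measured wafer bow via Stoney's equation,
# sigma_f = E_s * t_s^2 / (6 * (1 - nu_s) * t_f * R).
# Linear small-deflection approximation only; all numbers are assumed.
E_SI = 130e9   # Young's modulus of silicon <100>, Pa
NU_SI = 0.28   # Poisson's ratio of silicon

def stoney_stress(t_sub_m, t_film_m, radius_m):
    """Biaxial film stress (Pa) from substrate curvature radius (m)."""
    return E_SI * t_sub_m**2 / (6 * (1 - NU_SI) * t_film_m * radius_m)

# Assumed scenario: a 50 um thinned wafer carrying a 0.5 um oxide film,
# with a measured radius of curvature of 2 m.
sigma = stoney_stress(50e-6, 0.5e-6, 2.0)
print(f"Estimated film stress: {sigma / 1e6:.0f} MPa")  # prints 75
```

Such a stress is far below silicon's fracture strength, consistent with the chapter's conclusion that the thinning process induced very little stress on the chip.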
3.3 Flip Chip Interconnect Technology
For the flip chip technique, the interconnection between an IC and the substrate is made by flipping the active side of the chip onto that substrate. As a result, the electrical connections are made simultaneously for all contacts in one single step. Flip chip technology was initially developed for high-end applications by IBM [28, 29]. There are two main requirements for flip chip assembly: first, the die, or wafer, needs to have bumps and, second, a supporting medium, either an ‘underfill’ or an adhesive, is required.
6 Embedded Microelectronic Subsystems
The most commonly used bumps include solders, electroless nickel-gold and gold stud bumps [30, 31]. The eutectic composition, 63Sn-37Pb, and the near-eutectic composition, 60Sn-40Pb, have been the most widely used solders in microelectronics packaging. However, as a result of issues such as new environmental legislation regarding the use of lead in electronic assembly, the resulting higher reflow temperatures, issues of under-bump metallization and finer pitch requirements, there is an ongoing need for improvement. This focuses specifically on improving existing methods and on developing alternative flip-chip attachment techniques. Lead-free solders [32] are making some headway in this direction, while many other innovative approaches have been reported in the literature as well [33, 34, 35]. Currently, flip chip assemblies can be formed on flexible substrates with anisotropic conductive films and pastes; gold stud bumps can be used in the bumping process (See Fig. 6.14). The gold stud bumps are formed by bonding gold wire to the die pad with force, heat and ultrasonic energy and then snipping the wire just above the formed ball to leave a stud. This eliminates the need for special processes, such as plating, for making bumps. Process development for flip-chip on FR4 (a laminate substrate) with gold stud bumps has been reported previously by Zeberli [36]. However, information regarding the effect of the gold bump shape and planarity on the reliability of the assembly is not documented. In a recent study by Cheng [37], two types of bumps, an electroplated gold bump and a composite of polyimide and aluminium, were investigated through finite element modeling. Cheng concluded that bump height uniformity is a key factor in the overall performance of the contact interface. A systematic approach to characterizing the bonding process parameters was investigated.
This included, for example, analysis of the curing characteristics of the adhesive and of the bonding pressure. To establish the reliability of the interconnect, environmental tests (including temperature cycling and humidity tests) were carried out. Based upon failure analyses from these environmental tests, the shape of the gold bumps was identified as a source of failure, requiring modifications to eliminate its negative impact on the performance and reliability of the interconnect.
Fig. 6.14 SEM images of ACF interconnections formed with varying bonding force: (a) 1500 g and (b) 3000 g
Fig. 6.15 (a) Original gold stud bump, (b) coined gold bump
Two types of conductive adhesives, an anisotropic conductive paste and an anisotropic conductive film, were studied. The work examined the effects of temperature on the degree of cure of the adhesives. The results showed that 95% of curing had occurred at 200°C. Pressure was the main influencing factor during flip chip assembly; scanning acoustic microscopy, as well as optical and scanning electron microscopy, were used as the characterization techniques in the pressure optimization process. In the reliability analyses, the thermal shock tests showed that thermal cycling has little impact on reliability; the observed failures were attributed to poor adhesion of the gold bump to the silicon and delamination of the gold bump from the ACF. The environmental study showed that humidity is a major concern for the reliability of gold stud bump flip-chip assemblies using either anisotropic conductive paste or film. The delamination during humidity testing starts at an edge of the gold bump that is not in proper contact with the flex. This is because the gold bumps did not originally have a uniform topography; they deform during bonding and take on different shapes depending on the pressure and the conductive material properties. The resulting shape has a tapered edge, which allows moisture to seep through easily; this bump shape also reduces the contact area, thus increasing the chance of failure. It was concluded that one of the most important factors contributing to failure is the shape of the gold bump. A modified process (See Fig. 6.15) was developed to obtain a planar, ‘coined’ gold bump, and the resulting assemblies showed no failures during humidity testing [38].
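As a purely illustrative sketch of how a degree of cure can be estimated for a given bonding temperature and time, a first-order Arrhenius cure model can be integrated numerically. The rate parameters (A, Ea) and the 20-second bond time below are hypothetical placeholders; the 95%-cure figure in the text was measured, not modelled.

```python
# Minimal first-order Arrhenius cure-kinetics sketch for a conductive
# adhesive. The rate constant parameters (A, Ea) are hypothetical
# placeholders chosen only to illustrate the temperature sensitivity of
# the degree of cure; they are not fitted to the adhesives in the study.
import math

R_GAS = 8.314  # universal gas constant, J/(mol K)

def degree_of_cure(temp_c, time_s, A=2.0e10, Ea=100e3, steps=1000):
    """Integrate d(alpha)/dt = k(T) * (1 - alpha), with k = A * exp(-Ea/RT)."""
    k = A * math.exp(-Ea / (R_GAS * (temp_c + 273.15)))
    alpha, dt = 0.0, time_s / steps
    for _ in range(steps):
        alpha += k * (1.0 - alpha) * dt
    return alpha

# Sweep bonding temperature for a fixed 20 s bond time.
for temp in (150, 175, 200):
    print(f"{temp} C, 20 s bond: degree of cure = {degree_of_cure(temp, 20.0):.2f}")
```

The exponential temperature dependence of the rate constant is why the curing degree rises so sharply between typical bonding temperatures.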
3.4 Building a Folded Flex Module
The initial work on 3D module assembly was done using commercial flex. This helped in optimizing the flip chip parameters and resulted in a range of technological demonstrators with varying die thicknesses. The assembly process commenced
Fig. 6.16 The different steps in the folded flex assembly sequence: (top) flip-chip assembly on thin flex, (bottom left) folding the flex and (bottom right) the final flex module
with the development of single layer flexible substrates. The substrate consisted of a polyimide dielectric layer and a copper conducting layer with an electroless nickel-gold (Ni/Au) immersion finish. Utilising an anisotropic conducting adhesive, flip chip assembly was used to make the interconnection between the gold bumps on the die and the conducting tracks on the flex. In this way, four test chip die [39] were attached to the flat flexible substrate. As the temperature reached 200°C during each bonding operation, the dies were electrically checked after each bonding step to observe any effect on the bonded die. As discussed in the previous section, thermal cycling did not cause major reliability failures and therefore no yield issues were observed in this experiment. To obtain the 3-D format, the flex was manually folded and fixed in place with an adhesive. Fig. 6.16 shows the different stages in the development of the folded flex assembly. Once the assembly process was optimized, a set of modules was fabricated with different combinations of die and flex thickness. The initial target of a four die folded stack module with a thickness below 500 microns was achieved (See Fig. 6.17). An evaluation, including analysis of the thermomechanical and thermal performance of the modules, was completed.
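The sub-500-micron target can be sanity-checked with a simple thickness budget. The 50 µm die and flex values reflect the thicknesses discussed in the text; the adhesive layer thickness is a hypothetical assumption for illustration.

```python
# Back-of-the-envelope thickness budget for a folded four-die flex
# stack, checking the sub-500-micron module target. The adhesive
# thickness is an illustrative assumption, not a measured value.

def module_thickness_um(n_die, t_die, t_flex, t_adhesive):
    """Folded stack: each die contributes one die, one flex layer and
    one adhesive layer to the total stack height."""
    return n_die * (t_die + t_flex + t_adhesive)

total = module_thickness_um(n_die=4, t_die=50, t_flex=50, t_adhesive=15)
print(f"stack thickness ~ {total} um (target < 500 um)")
```

Even this crude budget shows that the target is only reachable with thinned die and thin flex; with standard 525 µm die, a single layer already exceeds the whole budget.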
Fig. 6.17 A 50 micron, four die stack module
4 Chip-In-Laminate Interconnect
A further approach to achieving increased density and functionality is to physically embed the bare die into the printed circuit board itself. Various methods of achieving this have been investigated in recent years. Texas Instruments Inc. [40] have built multi-chip modules by placing die in cavities in a laminate material and then applying an upper lid (See Fig. 6.18). Another approach is the Intel BBUL technology [41], in which one or more dice are embedded in a substrate, which then has one or more build-up layers formed on top of it by molding or dispensing of an encapsulation material. A further patented technology is also being investigated by an indigenous Irish PCB company, ShipCo [42]. In essence, die of various sizes and thicknesses are placed on a laminate layer, additional layers of pre-preg are then placed on top and the complete assembly is laminated using a standard printed circuit board fabrication process; this causes the pre-preg to flow and completely encapsulate the die before curing. Once the die are embedded, blind vias are laser-drilled down to pads on the surface of the die and these vias are then electroplated (See Fig. 6.19). The key advantage of this approach is that it uses standard PCB technology and can accommodate die of various thicknesses. Researchers at the Fraunhofer IZM Institute in Berlin [43, 44] are investigating approaches to embed thinned silicon chips in “build-up layers” of polymer on top of a printed circuit board, as illustrated in Fig. 6.20. This technology integrates embedded thin chips in a conventional PCB, which can then be further used in 3D packaging. An important aspect of this work is that the silicon chips are thinned to around 50 µm; components, or die, of different heights are not catered for. The ultra-thin chips are embedded in the dielectric layers of modern laminate printed circuit boards (PCBs), and micro-via technology allows connection of the embedded chip to the outer faces of the system's circuitry.
Embedded device packaging research at Tyndall is focused upon the development of two multi-chip packaging technologies capable of integrating standard power die
Fig. 6.18 Multi-chip module using the die-in-cavity process by Texas Instruments (labelled layers: solder mask, Kapton layers, die, laminate layers, heat sink and I/O connector)
Fig. 6.19 Embedded chip in laminate packaging concept (Shipco)
Fig. 6.20 Chip in Polymer by Fraunhofer IZM
(i.e. greater than 200 µm thick). These packaging technologies are: “Chip-in-PCB” and “Chip-in-polymer, build-up layer” (See Fig. 6.21). Both technologies replace wirebond connections with plated copper interconnect. The research includes the design and fabrication of a complete power converter using each of the packaging approaches. These converters are to be benchmarked against the best available commercial converters. These approaches have the following advantages over conventional power packaging techniques:
● Enhanced reliability and improved performance through the removal of the wire-bond and solder interconnections
● Automated batch-level processing, which can increase repeatability and reduce costs
● Potential for increased integration and functionality
Fig. 6.21 Two embedded packaging technologies showing (top) the ‘Chip-in-PCB’ and (bottom) the ‘Chip-in-Build-up Layer’ approaches
Fig. 6.22 (a) A section of an embedded FET showing the top side Cu interconnect and (b) an embedded diode with a 500 micron via
● Increased power density and miniaturisation
● Potential to use thinned silicon and chip stacking techniques
A novel thick photoresist process has been developed, which uses a modified commercial material. Patterning of 400 µm thick layers can be achieved with a fast cure time (5 minutes), and any distortion that may result from shrinkage is significantly reduced. Fig. 6.22 shows an embedded power diode with the top side interconnect plated. The structure was then released from the temporary substrate and tested at 1 amp.
5 The Move to Micro-Nano Interconnect
In their December 14, 2005 webcast, SEMI published their well-researched forecast on “Global Nano-electronics Markets and Opportunities”, which makes clear both the major changes that nanotechnology will bring to electronics packaging and how soon those changes will be seen. SEMI reported the 2004 worldwide market for nano-electronic materials and equipment at $1,448 million, forecasting a 20% compounded growth rate to $4,219 million by 2010. On December 5, 2005, Fujitsu announced they had demonstrated that carbon nano-tubes can be grown as heat-sinks on semiconductor wafers. The higher thermal conductivity of nano-tubes permits power RF die to be flip-chip-mounted; this was previously impossible because solder bumps could not dissipate the heat effectively. Flip-chip eliminates wirebond inductance, enabling higher frequency operation. Thus, the combination of nano-tubes and flip-chip makes feasible higher power, higher frequency RF amplifiers. Fujitsu expects to have these nano-tube heat-sink power amplifiers available for mobile phone base stations before 2010. Fujitsu is only one of many major electronics packaging innovators. Earlier in 2005, Hewlett-Packard announced laboratory versions of nano-scale crossbar switches, a possible alternative to conventional logic. SUSS MicroTec is offering nano-imprint lithography systems. Samsung has begun a joint initiative with the Korea Advanced Institute of Science to fabricate memory chips thinner than 50 nm. In 2004, Toshiba announced that the addition of nano-particles to conductive silver epoxy provided a die-mount adhesive with better properties than solder or conventional silver-flake materials. These product development initiatives each directly affect packaging. Future products like nano-tube field effect displays and organic semiconductors will no doubt bring their own packaging challenges.
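The quoted market figures can be cross-checked with simple compound-growth arithmetic; a straight 20% annual rate slightly overshoots the 2010 forecast, and the two figures actually imply a compounded rate of roughly 19.5%.

```python
# Quick arithmetic check of the SEMI forecast quoted above: the 2004
# market of $1,448M, the "20% compounded growth" and the $4,219M
# forecast for 2010 (six years later).

def compound(value, rate, years):
    """Compound growth: value * (1 + rate)^years."""
    return value * (1.0 + rate) ** years

projected = compound(1448, 0.20, 6)            # straight 20% CAGR
implied = (4219 / 1448) ** (1 / 6) - 1         # rate implied by the two figures
print(f"20% CAGR from $1,448M gives ${projected:,.0f}M in 2010")
print(f"implied CAGR for the $4,219M forecast: {implied:.1%}")
```

The small mismatch simply reflects the rounding of "20%" in the forecast summary.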
The compelling benefits of nanotechnology, such as higher thermal and electrical conductivity, greater mechanical strength, lower melting points, self-linking metal conductors and altered adhesive properties, make its early and ongoing use in microelectronics packaging inevitable. The challenges of increasing the density of electronic products, for example by using integral substrates and advanced interconnect, are prime drivers in all of the technologies described earlier in this chapter. Multiple performance parameters
need to be addressed, including light weight and compact form factors, as well as increased reliability and multi-functionality. New nano-scale materials and technologies will be central to consistently achieving these new targets. For high-density interconnection, approaches such as nanoscale surface-activated interconnect and nano-wire/carbon-tube bumps may be applied to dramatically increase the density of interconnection. For integral substrates, built-in passive components (i.e. resistors, capacitors, inductors and filters) made with nano-materials can remarkably increase the density of such a substrate. Therefore, nano-materials, applied through electronic packaging technologies, can provide solutions that satisfy the need for innovation. These solutions will support the information, communication and consumer electronics industries by enabling manufacturers to develop lighter, more compact, more integrated, and ultimately more competitive products.
6 Conclusions
Ambient Intelligence (AmI) systems are those that will use electronics with learning capability to proactively and seamlessly support and enhance our everyday lives. In this regard, AmI is an extremely user-focused research topic. From a technical point of view, creating an AmI environment means integrating and networking numerous distributed devices that exist in the physical world (i.e. workspaces, hospitals, homes, vehicles, and the environment). One framework for this is the concept of Augmented Materials, outlined in Chapter 2. These are materials with fully embedded distributed information systems, designed to measure all relevant physical properties and provide a full ‘knowledge’ representation of the material. As a concept, it captures what is possible from microelectronics, microsystems, packaging technology and materials science to encourage a roadmap where progress is made through convergence. This progress will open entirely new possibilities for future applications and the resulting markets. Areas like medical monitoring and telemedicine, sports, and entertainment are currently beginning to benefit from research that is developing and using building blocks for these systems. However, the process is highly complex and involves numerous challenges that may only be solved through highly multidisciplinary methods. This chapter has reviewed 3D packaging techniques, which may provide a step in the right direction for augmented materials, particularly in conjunction with embedded wireless systems; a technology that many have forecast as crucial in the development of a knowledge society. There is no doubt that augmented materials systems can provide key technology enablers for future AmI systems; however, the timeline for this is currently unclear.
This is most likely due to the lack of a clear ‘need’ for this in a real-world domain; an application where the vision of augmented materials and AmI meets the real world, perhaps at a point of ‘extreme’ use, where current technologies cannot function.
There is increasing acceptance of wireless sensor networks in society and, while there remains much to do, they are beginning to provide evidence that they will approach the appropriate levels of complexity and reliability. It is in this area where the ‘need’, and thus the drivers for the highly miniaturised electronics building blocks that will compose augmented materials, will most likely emerge. There are huge numbers of applications worldwide requiring wireless sensors, in formats that can be built right now (not necessarily requiring significant levels of high-density integration), and there are millions of dollars to be made implementing these solutions in smart buildings, healthcare and environmental monitoring. Some of the leading innovators and academics in high density systems have left academic research to form companies [45, 46] in these domains. This provides a scenario where, perhaps, the first version of augmented materials will in fact emerge as current and future generation wireless sensor networks begin to merge with the everyday objects in the above target application domains. This could commence as a type of electronics packaging problem; merging the material in the object with the packaging material in a manner that increases the scope for functionality. However, the work being performed in nano-scale technologies should not be ignored; as discussed in this chapter, significant efforts are being made to improve the performance of existing materials in electronics packaging, as well as materials in many other manufacturing domains. The initial instances of the augmentation of materials will most likely be driven by high density packaging. However, perhaps the ultimate, or optimum, realization of augmented materials will come once these high density solutions themselves become infused with nano-scale technologies. This will be realised through the emergence of nanoscale electromechanical systems (NEMS), from research like that of Prof.
Alex Zettl’s group [47] in the Department of Physics at U.C. Berkeley; the group has reported on the fabrication of nano-motors powered by nano-crystals [48] and has even constructed a fully functional, fully integrated radio receiver from a single carbon nano-tube [49].
References
1. R. Tummala et al, “Microelectronics Packaging Handbook: Semiconductor Packaging”, Chapman & Hall, January 1997
2. http://www.itrs.net/Links/2007ITRS/ExecSum2007.pdf
3. S.L. Burkett et al, “Advanced Processing Techniques for Through-Wafer Interconnects”, Journal of Vacuum Science Technology B, Vol. 22, No. 1, pp 248–256, Jan. 2004
4. M. Sunohara et al, “Development of Wafer Thinning and Double Sided Bumping Technologies for Three Dimensional Stacked LSI”, In Proc. 52nd Electronic Components and Technology Conference, May 28–31, 2002, San Diego, California, USA, pp 238–245
5. R. Nagarajan et al, “Development of a Novel Deep Silicon Tapered Via Etch Process for Through-Silicon Interconnection in 3D Integrated Systems”, In Proc. 56th Electronic Components and Technology Conference, May 30–June 2, 2006, San Diego, California, USA, pp 383–387
6. M. Bonkohara et al, “Trends and Opportunities of System-in-a-Package and Three-dimensional Integration”, Electronics and Communications in Japan (Part II: Electronics), Vol. 88, Issue 10, pp 37–49, 20 Sep 2005
7. M. Kada, “The Dawn of 3D Packaging as System-in-Package (SIP)”, IEICE Transactions on Electronics, Special Issue on Integrated Systems with New Concepts, Vol. E84-C, No. 12, Japan, pp 1763–1770, 2003
8. M. Karnezos et al, “System in a Package (SiP) Benefits and Technical Issues”, Proceedings of APEX, San Diego, January 16–18, 2002, pp S15-1, 1 to 6
9. T. Kenji et al, “Current Status of Research and Development of Three Dimensional Chip Stack Technology”, Japanese Journal of Applied Physics, Vol. 40, 2001, pp 3032–3037
10. ENIAC Strategic Research Agenda, http://cordis.europa.eu/ist/eniac
11. http://www.irvine-sensors.com/chip_stack.html
12. C. Cahill et al, “Thermal Characterisation of Vertical Multichip Modules MCM-V”, IEEE Transactions on Components, Hybrids and Manufacturing Technology, Vol. 18, No. 4, December 1995, pp 765–772
13. http://www.3d-plus.com/
14. http://www.aset.or.jp/index-e.html
15. P. Ramm et al, Japanese Journal of Applied Physics, Vol. 43, No. 7A, 2004, pp 829–830
16. K.N. Chen et al, “Morphology and bond strength of copper wafer bonding”, Electrochemical and Solid-State Letters, Vol. 7, pp G14–G16, 2004
17. R.R. Tummala et al, “Copper Interconnections for High Performance and Fine Pitch Flip-Chip Digital Applications and Ultraminiaturized RF Module Applications”, Proc. 56th ECTC, 2006, pp 102–111
18. http://www.tezzaron.com
19. http://www.zy-cube.com/e/index.html
20. N. Sillon et al, “Innovative Flip Chip Solution for System-On-Wafer Concept”, In Proc. First International Workshop on 3S (SOP, SIP, SOC) Electronic Technologies, September 22–23, 2005, Atlanta, Georgia, USA
21. A. Mathewson et al, “Detailed Characterisation of Ni Microinsert Technology for Flip Chip Die on Wafer Attachment”, Proc. 57th ECTC, 2007, pp 616–621
22. Tessera’s Unique Approach to Stacked ICs Packaging, Tessera Inc, http://www.tessera.com/images/news_events/Stacked_packaging_backgrounder_05-25-01.pdf
23. Y.J. Kim, “Folded Stack Package Development”, In Proc. 52nd Electronic Components and Technology Conference, May 28–31, 2002, San Diego, California, USA, pp 1341–1346
24. B. Majeed et al, “Fabrication and Characterisation of Flexible Substrates for Use in the Development of Miniaturised Wireless Sensor Network Modules”, Journal of Electronic Packaging, Volume 128, Issue 3, pp 236–245, September 2006
25. B. Majeed et al, “Microstructural, Mechanical, Fractural and Electrical Characterisation of Thinned and Singulated Silicon Test Die”, J. Micromech. Microeng., Volume 16, Number 8, August 2006, pp 1519–1529
26. I. Paul et al, “Statistical Fracture Modelling of Silicon with Varying Thickness”, Acta Materialia, Volume 54, Issue 15, pp 3991–4000, September 2006
27. I. Paul et al, “Characterizing Stress in Ultra-Thin Silicon Wafers”, Applied Physics Letters, Vol. 89, 073506, 2006
28. E.M. Davis et al, “Solid logic technology: versatile high volume microelectronics”, IBM J. Res. Dev., Vol. 8, pp 102, 1964
29. L.F. Miller, “Controlled Collapse Reflow Chip Joining”, IBM Journal of Research & Development, Vol. 13, pp 239–250, 1969
30. S. Baba, “Low cost flip chip technology for organic substrates”, Fujitsu Sci. Tech. J., Vol. 34, No. 1, pp 78–86, September 1998
31. R. Aschenbrenner et al, “Adhesive flip chip bonding of flexible substrates”, in Proc. 1st IEEE Int. Symp. Polym. Electron. Packag., 26–30 Oct 1997, pp 86–94
32. M. Abtewa et al, “Lead-free solders in microelectronics”, Mat. Sci. Eng., Vol. 27, pp 95–141, 2000
33. W. Kwang et al, “A new flip chip bonding technique using micromachined conductive polymer bumps”, IEEE Transactions on Advanced Packaging, Vol. 23, No. 4, pp 586–591, November 1999
34. R.W. Johnson et al, “Patterned adhesive flip chip technology for assembly on polyimide flex substrates”, Int. J. Microcirc. Electron. Packag., Vol. 20, No. 3, pp 309–316, 3rd Qtr., 1997
35. M.E. Wernle et al, “Advances in materials for low cost flip chip”, Adv. Microelec., pp 1–4, Summer 2000
36. J.F. Zeberli et al, “Flip chip with stud bumps and non conductive paste for CSP-3D”, in Proc. 13th Europ. Microelec. Packag. Conf., 2001, pp 314–319
37. H.C. Cheng et al, “Process-dependent contact characteristics of NCA assemblies”, IEEE Trans. Comp. Packag. Technol., Vol. 27, No. 2, pp 398–410, June 2004
38. B. Majeed et al, “Effect of Gold Stud Bump Topology on Reliability of Flip Chip on Flex Interconnects”, Accepted for IEEE Transactions on Advanced Packaging
39. S.C. O’Mathuna et al, “Test chips, test systems and thermal test data for multi-chip modules in the ESPRIT-APACHIP project”, IEEE Trans. Compon. Packag. Manuf. Technol. A, Vol. 17, No. 3, pp 425, Sept. 1994
40. Texas Instruments (US Pat. No. 6,400,573 B1)
41. Electronic Package Technology Development, Intel Packaging Journal, Volume 09, Issue 04, November 9, 2005
42. ShipCo Patent WO2004/001848 A1, Electronics circuit manufacture
43. E. Jung et al, “Ultra Thin Chips for Miniaturised Products”, In Proc. 52nd Electronic Components and Technology Conference, May 28–31, 2002, San Diego, California, USA, pp 1110–1113
44. R. Aschenbrenner et al, “Process flow and manufacturing concept for embedded active devices”, Proceedings of the Electronics Packaging Technology Conference EPTC, Dec 2004, pp 605–609
45. http://www.sentilla.com/
46. http://www.dust-inc.com/
47. http://www.physics.berkeley.edu/research/zettl/
48. B.C. Regan et al, “Nanocrystal-Powered Nanomotor”, Nano Lett., 2005, 5(9), pp 1730–1733
49. K. Jensen et al, “Nanotube Radio”, Nano Lett., 2007, 7(11), pp 3508–3511
Part IV
Networking Technologies: Wireless Networking and Wireless Sensor Networks
1.1 Summary
From a technology perspective, wireless systems are essential in handling the requirements of mobility in everyday life. Networking, whether wired or wireless, is now one of the key building-block approaches in IT systems, its value growing with scale, as evidenced through examples such as the internet. Some of the strongest drivers towards Ambient Intelligence are being provided by technologies that combine wireless performance with networking. Thus, it is no coincidence that one of the most vibrant areas of research at the moment is Wireless Sensor Networking (WSN). In this area of research, networking is not simply a technological component in a system; it also extends to the approaches taken by the researchers in achieving real innovation, and most of the effective projects on sensor network design and implementation are highly collaborative in nature. In fact, as will be addressed later in this book, in some ways how the research programmes are constructed (as collaborative processes) can be as important as the innovation target itself. This section deals with network technologies and provides, in one chapter, an overview of the principles of computer networking, including a review of communication protocols for embedded wireless networks. It also summarises wireless communication system standards and discusses low power proprietary radio technology for embedded wireless networks.
1.2 Relevance to Microsystems
The interplay between wireless networking and microsystems is effectively a technological frontline in the development of systems solutions for Ambient Intelligence. It is framed by applications, existing and new, and thus by user requirements and the application software. Microsystems will be used to provide the sensor interfaces between the network (from simple node-level to internet-level) and the user. They will also be used to improve the wired and wireless infrastructure and the performance of the network itself, including communications, power and reliability.
1.3 Recommended References
There are a large number of publications in the area of wireless communications and networking. The following references provide a more detailed insight into these topics:
1. I.F. Akyildiz, S. Weilian, Y. Sankarasubramaniam, E. Cayirci, “A survey on sensor networks”, IEEE Communications Magazine, Aug 2002, Volume 40, Issue 8, pp 102–114
2. K. Akkaya, M. Younis, “A survey on routing protocols for wireless sensor networks”, Elsevier Ad Hoc Networks Journal, 3 (2005), pp 325–349
3. H. Karl, A. Willig, “Protocols and Architectures for Wireless Sensor Networks”, John Wiley & Sons, 2007, ISBN 0470519231
4. C. de Morais Cordeiro, D.P. Agrawal, “Ad Hoc and Sensor Networks: Theory and Applications”, World Scientific Publishing, 2006, ISBN 9812566813
5. K. Sohraby, D. Minoli, T. Znati, “Wireless Sensor Networks: Technology, Protocols, and Applications”, Wiley-Blackwell, 2007, ISBN 0471743003
Chapter 7
Embedded Wireless Networking: Principles, Protocols, and Standards
Dirk Pesch1, Susan Rea1, and Andreas Timm-Giel2
Abstract All aspects of society are networked today, ranging from people to objects. Our daily lives rely heavily on the ability to communicate with each other, but many of the systems that enable us to conduct our lives also require networked systems, that is, networked embedded systems. 90% of all microprocessors are used in embedded systems applications, from our cars to home appliances, entertainment devices, and security and safety systems. Increasingly, embedded systems communicate wirelessly using a range of technologies, from wireless sensor networks, to wireless local area networks, to wireless and mobile ad-hoc networks, to mobile cellular networks. The vision of the future is to network as many of these embedded systems as possible to create a wide range of smart applications. Embedding computing technology into materials and objects and networking those computers will realise the vision of augmented materials and smart objects. This chapter briefly presents the principles of computer networking and reviews the state of the art in communication protocols for embedded wireless networks. It then presents an overview of the main wireless communication system standards and selected low power proprietary radio technology available to create embedded wireless networks. The chapter concludes with a brief discussion of open issues in wireless communications and networking for augmented materials and smart objects.
Keywords Embedded networks, wireless networks, MAC protocols, routing protocols, IEEE802.11, IEEE802.15.1, Bluetooth, IEEE802.15.4, ZigBee, mobile networks, layered communication
1 Centre for Adaptive Wireless Systems, Cork Institute of Technology, Cork, Ireland
2 TZI/iKOM/ComNets, University of Bremen, Germany
K. Delaney, Ambient Intelligence with Microsystems, © Springer 2008
1 Introduction
This chapter presents a review of basic networking concepts, a summary of the main standards and some selected proprietary wireless communication networking technologies relevant to embedded networking. Communication between embedded computing nodes is based upon communication protocols that enable reliable, physical communication across a network of embedded nodes. The application and the style of embedding of nodes often determine how networks are formed and how they are best operated. Embedded computing nodes are usually installed in places where they have to operate autonomously, because they are inaccessible or because maintenance costs are expensive, or even prohibitive, due to the number of devices, access, etc. Also, embedded computing devices are typically small with a limited energy supply, for instance using batteries or having limited energy harvesting capability; their function is typically tailored for a specific application. This situation demands that the devices are self-configuring and operate in an adaptive fashion in order to change their behaviour based upon the environment, their energy supply, or the general context the system operates in. A node not only needs to adapt its own function, but also needs to cooperate with other devices in the embedded network to achieve a network-wide function, which is central to the operation of an augmented object, material or larger smart environment. A range of embedded computing platforms are available commercially, or have been developed through research projects in academia or industry. These computing nodes are typically called motes. For reference, an overview of the design, architecture, and functionality of motes was presented in Chapter 5.
2 Networking Principles and Protocols
The design and operation of computer networks is typically based on a layered protocol stack approach. The protocol stack implements a particular protocol, or protocols, in each layer, which communicate with peer systems by using the functionality of underlying layers. An underlying layer exports interfaces to the layer above; these are used to access the lower layer's functionality for data transfer. The functionality of the underlying layer is hidden from the layer above - similar to the object-oriented programming concept - and by doing so, an underlying layer can change its protocol without affecting higher layers. A number of layered models exist, of which the ISO/OSI 7 layer model [1] and the 5 layer TCP/IP model [1] are the most prominent. Fig. 7.1 depicts the concept of the layered model and the fundamentals of message transfer between two computer systems.

Fig. 7.1 Layered Protocol Stack and Message Transfer Concept

The layered protocol stack not only facilitates communication between two systems but also across a network. A computer network is formed by a number of computer nodes sharing a common communication facility at the lowest layer, the physical layer. This facility provides a communication channel, such as a cable, radio channel, or some other physical medium that is able to carry information encoded into electrical, optical or acoustic signals. The way the communication facility is shared influences the topology of the network. Typical topologies include the star, bus, tree, ring, mesh, and hybrid topologies, and the network topology may be flat or hierarchical. Some networks, in particular those that are formed in ad-hoc fashion, may also form irregular topologies based on the geographical location of the computer nodes. The layers in this layered communication network view that are most relevant to embedded wireless networks are the physical layer, the medium access control layer, the network layer - in particular its routing protocols - and the transport and application layers. The physical layer deals with all of the physical specifications of the devices; that is, how nodes interface with the physical medium and how data is transmitted as electrical or other signals. The medium access control layer determines how access to, and sharing of, a communication channel is governed. The network layer takes care of routing and message transport, the transport layer provides end-to-end connectivity across the whole network, and the application layer provides a service interface between the users and the network. In the following sections, the basic operation of each layer and an overview of key protocols for the physical, medium access control, network and transport layers are presented, with references to the literature in the field.
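The header-per-layer encapsulation described above can be sketched in a few lines; the layer names and header strings here are illustrative assumptions for the example, not drawn from any real protocol format:

```python
# Illustrative layered encapsulation: each layer prepends its own header on
# the way down the stack, and the peer layer strips it on the way up.

LAYERS = ["transport", "network", "link"]

def encapsulate(message: str) -> str:
    """Wrap application data with one header per layer, top-down."""
    pdu = message
    for layer in LAYERS:
        pdu = f"[{layer[0].upper()}hdr]{pdu}"
    return pdu

def decapsulate(pdu: str) -> str:
    """Strip headers bottom-up at the receiving node."""
    for layer in reversed(LAYERS):
        header = f"[{layer[0].upper()}hdr]"
        assert pdu.startswith(header), f"missing {layer} header"
        pdu = pdu[len(header):]
    return pdu

frame = encapsulate("hello")   # "[Lhdr][Nhdr][Thdr]hello"
```

Because each layer only touches its own header, the sketch mirrors the information-hiding property described above: a layer's format can change without affecting the layers above it.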
2.1 The Physical Layer
The physical layer provides the connectivity to the physical transmission medium, through either specialist connectors or appropriate antennae. While providing physical connectivity to the medium, it also embeds the desired information bits into signals - electrical, optical, acoustic, or other - using modulation schemes. In order to condition, transmit and receive the signals, the layer also provides amplification, reception and detection circuitry [2]. As embedded systems are used to exchange digital information, most modulation schemes in use in embedded networks are digital modulation schemes. The main schemes, and some of their variations, are Frequency Shift Keying (FSK), Phase Shift Keying (PSK), and a combination of Phase and Amplitude Shift Keying (ASK) called Quadrature Amplitude Modulation (QAM) [2, 3]. Due to size and power constraints, the physical layer of embedded networks is often quite simple and rarely uses advanced signalling and control techniques, such as amplifier linearisation, MIMO signalling, beamforming and adaptive antennae [3]. Occasionally, forward error protection schemes [2, 4] are used to protect the information transmission, in particular if real-time information, such as video signals, needs to be transmitted. However, error control is typically facilitated by the medium access control layer through error detection and packet retransmission schemes, called Automatic Repeat reQuest (ARQ). Fig. 7.2 shows the elements of the physical data transmission chain for a digital communication link.
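As a toy illustration of digital modulation, the sketch below implements binary PSK with antipodal symbols and a simple threshold detector; the Gaussian channel model and noise level are assumptions made for the example:

```python
import random

# Toy BPSK link: bits -> antipodal symbols -> additive Gaussian noise ->
# threshold detection. The noise level is an illustrative assumption.

def modulate(bits):
    """Map bit 1 to +1.0 and bit 0 to -1.0 (binary phase shift keying)."""
    return [1.0 if b else -1.0 for b in bits]

def channel(symbols, noise_std=0.3, seed=1):
    """Add zero-mean Gaussian noise to each transmitted symbol."""
    rng = random.Random(seed)
    return [s + rng.gauss(0.0, noise_std) for s in symbols]

def demodulate(samples):
    """Decide each bit by the sign of the received sample."""
    return [1 if s > 0.0 else 0 for s in samples]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
received = demodulate(channel(modulate(bits)))
```

With a noiseless channel (`noise_std=0.0`) the bits are recovered exactly; raising the noise level eventually produces bit errors, which is where the error detection and ARQ retransmission mentioned above come in.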
2.2 Medium Access Control
Medium access is the management or control process within a networked computer node that determines how multiple computers may access or share a common communication channel [1]. This communication channel may be a copper or other metal wire, it may be a fibre optic cable, or it may be wireless - a radio channel. While the focus of this chapter is mainly on networking in wireless channels, most of the concepts presented in this section are also valid for wireline channels. Medium access control (MAC) protocols for wireless networks can be largely grouped into two categories: scheduling based and contention based medium access control protocols.

Fig. 7.2 Physical layer data transmission chain (source, channel coder, modulator, channel, demodulator and channel decoder, with noise and interference acting on the channel)
2.2.1 Scheduling based medium access control (MACs)
Scheduling based MACs require a central controller or coordinator to coordinate the transmission of each node in the network (or part of the network) by requesting that each node transmit at a particular time. The scheduling of transmissions by the central controller can be done in a round robin fashion or in some other way. Nodes do not have to transmit or receive outside the scheduled time interval, which reduces battery power consumption in wireless networks, as nodes can sleep between transmission/reception periods. Scheduling based medium access control in embedded networks is typically based on a polling mechanism, such as used in the Point Coordination Function (PCF) of the IEEE802.11 (WLAN) MAC [5], or upon the Time Division Multiple Access (TDMA) [2, 4] method, also used in many cellular data networks, to provide access to a common communication channel. Here, users have access to the channel on a time slice basis. Each node has access to the channel for a period of time, called a time slot, which is determined by the central or coordinating entity. N time slots are typically grouped into a time frame, which is repeated over time, providing a reserved time slot i, i = 1 . . . N for each node to use the channel without interference from other nodes. Transmission of data in a time slot usually starts with a preamble that contains synchronisation, address, and possibly error control information. Synchronisation is one of the critical issues in TDMA, as time slots transmitted by different nodes can interfere when synchronisation is lost. Therefore, a controlling entity, a coordinator, needs to reserve time slots for nodes and transmit synchronisation information frequently. Another technique, Frequency Division Multiple Access (FDMA), is the oldest and simplest technique to share a common radio spectrum; individual carriers are created and accessed by each node individually, through a controlling entity.
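The slot and frame structure of TDMA can be made concrete with a small schedule calculator; the 10 ms slot length is an assumed, illustrative value, not one from any standard:

```python
# TDMA sketch: N nodes share a frame of N slots and node i owns slot i, so a
# node only needs its radio on for 1/N of the time and can sleep otherwise.

SLOT_MS = 10  # illustrative slot duration in milliseconds

def slot_schedule(n_nodes, n_frames=1):
    """Return {node: [(start_ms, end_ms), ...]} transmit windows."""
    schedule = {i: [] for i in range(n_nodes)}
    for f in range(n_frames):
        frame_start = f * n_nodes * SLOT_MS
        for i in range(n_nodes):
            start = frame_start + i * SLOT_MS
            schedule[i].append((start, start + SLOT_MS))
    return schedule

def duty_cycle(n_nodes):
    """Fraction of time a node must be awake for its own slot."""
    return 1.0 / n_nodes

schedule = slot_schedule(4, n_frames=2)   # node 0 transmits at 0-10 ms and 40-50 ms
```

The reserved, collision-free slots are what make scheduling based MACs energy efficient, at the cost of the strict synchronisation discussed below.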
Another medium access technique that is used in embedded networks is Code Division Multiple Access (CDMA), which is a technique based on spread spectrum communications [2, 4]. In spread spectrum communications the transmission bandwidth used is much larger than the bandwidth required to transmit the information signal. A spread spectrum system exhibits the following characteristics:
● the spread spectrum signal utilises a much larger bandwidth than necessary to transmit the information signal;
● spectrum spreading is achieved by applying a spreading signal, or so-called spreading code, which is independent of the information signal;
● despreading at the receiver is accomplished by correlating the received spread spectrum signal with a synchronised replica of the spreading code used at the transmitter.
In CDMA, many nodes transmit in the same channel at the same time. The separation of a node's transmission from the others is achieved by applying a different spreading code to each user's information signal [2]. In CDMA, a particular spreading code represents a channel in which the information is transmitted. CDMA also requires a controlling entity, a network coordinator, as in the case of TDMA and FDMA, which assigns spreading codes to nodes in the network. Scheduling based schemes have the advantage that they are very energy efficient, as transmission cycles can be optimised for each node's application and they do not cause collisions when multiple nodes attempt to transmit data at the same time. However, scheduling based schemes require strict time synchronisation between nodes, often coordinated by a central node. This creates inflexibility for loosely coupled networks and where node mobility occurs. Also, they do not scale well, as they do not adapt to changing node density and do not cater for the peer-to-peer communication often desired in embedded networks.
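The spreading and despreading steps listed above can be sketched with length-4 Walsh codes for two users; the codes and bit values are illustrative assumptions for the example:

```python
# CDMA sketch: two users transmit one bit each in the same channel at the
# same time, and the receiver separates them by correlating with each
# user's spreading code.

WALSH = {
    "A": [1, 1, 1, 1],
    "B": [1, -1, 1, -1],
}

def spread(bit, code):
    """Map bit {0,1} to a {-1,+1} symbol and multiply by the spreading code."""
    symbol = 1 if bit else -1
    return [symbol * chip for chip in code]

def despread(signal, code):
    """Correlate with the code; a positive correlation means bit 1."""
    correlation = sum(s * chip for s, chip in zip(signal, code))
    return 1 if correlation > 0 else 0

# Both users transmit simultaneously; the channel simply adds their chips.
tx_a = spread(1, WALSH["A"])
tx_b = spread(0, WALSH["B"])
on_air = [a + b for a, b in zip(tx_a, tx_b)]
```

Despreading with user A's code recovers A's bit (positive correlation) while user B's transmission averages out, which is how a spreading code acts as a logical channel.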
2.2.2 Contention based medium access control (MACs)
Contention based medium access control [6] uses an on-demand contention mechanism. Whenever a node wishes to transmit information, its access to the medium must adhere to the rules of the particular contention based access scheme. During times when no data needs to be transmitted, the node does not use communication resources. Such dynamic access to communication channels is driven by the randomness of data generation at the source and, therefore, these schemes are also often referred to as random access protocols. Many contention based medium access control protocols have been developed for both wireline and wireless networks. The best known families of contention based medium access control protocols are the ALOHA protocol family and the Carrier Sense Multiple Access (CSMA) protocol family [6, 7]. The CSMA protocol family is best known as the basis of medium access control in local area computer networks, in particular Ethernet [7]. Both protocols, as well as their derivatives, can be found in many real-world wireless and mobile systems. Around 1970, the University of Hawaii developed a radio based communication system called the ALOHA system, which included a very simple random access protocol - the ALOHA protocol - to control the access to the single radio channel. The protocol is based on the following modes of operation:
Transmission mode – users transmit data packets at any time they desire.
Listening mode – after a data packet transmission, a station listens for an acknowledgement from the receiver. As there is no coordination among individual stations, different data packet transmissions will occasionally overlap in time, causing reception errors. This overlap of data packets is called a collision. In such cases, errors are detected and stations receive a negative acknowledgement.
Retransmission mode – when a negative acknowledgement is received, data packets are retransmitted. In order to avoid consecutive collisions, stations retransmit after a random time delay.
Timeout mode – if neither a positive nor a negative acknowledgement is received within a specified timeout period, a station will retransmit the data packet.
While the operation of the ALOHA protocol is very simple, throughput is poor due to the limited coordination. Stations also need to listen all of the time in order to make sure they capture a data transmission, which in embedded wireless networks leads to significant power inefficiencies. A simplified estimation of the ALOHA protocol's throughput [4] leads to S = G·e^(−2G), where S is the throughput and G the offered load, with a maximum throughput of S = 0.18. In order to improve channel utilisation, an element of synchronisation in the form of a time slot mechanism can be introduced. Slotted ALOHA operates on a slotted communication channel similar to a TDMA based MAC. The time slot duration is best chosen to be close to the packet transmission time. Medium access is synchronous to the start of a slot, which leads to a significant improvement in throughput and power efficiency, with a maximum throughput twice that of the pure ALOHA case: S = G·e^(−G) and Smax = 0.36. The main reason for the low ALOHA protocol throughput is that stations do not observe the other stations' data transmissions. One way to improve the throughput of random access protocols is by sensing whether another station is transmitting in the common channel. Protocols that sense channel availability before transmission are commonly known as Carrier Sense Multiple Access (CSMA) protocols. In CSMA, a station senses whether a channel is available before attempting to transmit. There are several variations of the protocol that determine what to do next; the following three protocol variants are the most common:
● Non-persistent CSMA: if the channel is sensed idle, start transmitting; otherwise wait a random time and start sensing again for an idle channel.
● 1-Persistent CSMA: if the channel is sensed idle, start transmitting immediately; otherwise wait until it is idle and then start transmitting immediately.
● p-Persistent CSMA: this strategy requires that the channel is divided into time slots in the same way as in slotted ALOHA. If the channel is sensed idle, start sending with probability p, or wait until the next time slot with probability (1 − p). Repeat this until the data packet has been successfully transmitted or another terminal has started sending; in the latter case, wait a random time and start sensing again.
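The throughput expressions quoted above for pure and slotted ALOHA can be checked numerically:

```python
import math

# Numerical check of the ALOHA throughput formulas quoted in the text:
# pure ALOHA S = G*exp(-2G), slotted ALOHA S = G*exp(-G), where G is the
# offered load in packets per packet time.

def pure_aloha(G):
    return G * math.exp(-2 * G)

def slotted_aloha(G):
    return G * math.exp(-G)

# The maxima occur at G = 0.5 (pure) and G = 1 (slotted).
s_pure_max = pure_aloha(0.5)        # 1/(2e), about 0.18
s_slotted_max = slotted_aloha(1.0)  # 1/e, about 0.37, twice the pure value
```

Evaluating the maxima confirms the figures in the text: roughly 18% utilisation for pure ALOHA and exactly twice that for slotted ALOHA.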
Carrier sensing is able to improve on the throughput of the ALOHA type protocols and can reach a throughput close to 100%. However, one of the problems with carrier sense protocols that occurs in wireless environments is the hidden terminal problem [8]. This problem occurs when a station cannot sense the transmission of another station due to the distance between the two, while a station in between, which is the recipient of both transmissions, is able to receive both. Protocol variants of CSMA, such as CSMA with Collision Detection (CSMA/CD), are used in the well known local area network standard Ethernet (IEEE802.3), and CSMA with Collision Avoidance (CSMA/CA) is used in the IEEE802.11 Wireless LAN and the IEEE802.15.4 LP-WPAN standards. CSMA/CA is a protocol variant that has features to overcome the hidden terminal problem. Traditional MAC protocols look towards balancing throughput, delay and fairness, but MAC protocols for embedded wireless networks, while also addressing these concerns, must additionally satisfy energy efficiency. The commonality among energy efficient MAC protocols is duty cycling, where the radio is switched to a low power sleep mode whenever possible to save on power consumption.
Duty cycle based MAC protocols are categorised as synchronised, asynchronous and hybrid techniques. The motivation for duty cycling is to reduce idle listening, as this needlessly consumes energy. Synchronised protocols, such as S-MAC [9] and T-MAC [10], are based on loose synchronisation, where sleep schedules are specified within a frame so that idle listening is reduced. T-MAC improves on S-MAC by reducing the awake period if the channel is idle. Unlike S-MAC, where nodes stay awake for the complete awake time frame, in T-MAC nodes listen to the channel for a short time after synchronisation; if the channel remains idle during this short listening period, then the node reverts to sleep mode. Asynchronous protocols, such as B-MAC [11], WiseMAC [12] and X-MAC [13], rely on low power listening and preamble sampling for implementing asynchronous sleep scheduling. Preamble sampling negates the need for explicit synchronisation. The sending node transmits a preamble that at a minimum matches the duration of the sleep period of the intended receiver node. Consequently, when the receiver switches from sleep mode to awake mode, it listens to the channel, detects the preamble and remains awake to receive the data. B-MAC, developed at the University of California at Berkeley, is a CSMA-based protocol that relies on low power listening and an extended preamble for energy efficiency. The sending node transmits a preamble that extends slightly beyond the sleep period of the receiver, so that the sender is confident that the receiver will be in awake mode at some point during the preamble to detect it. With WiseMAC, intended for infrastructure sensor networks, in addition to preamble sampling the sending node learns the schedule of the receiver's awake period and schedules its transmission so that the extended preamble duration is reduced. Receiver nodes, when acknowledging data frames, place the time of their next awake period in the acknowledgement frame.
This enables a prospective transmitter to begin the preamble just before the receiver awakes and so reduces energy consumption. While low power listening is energy efficient, the long preamble duration leads to overhearing problems, where all non-target receivers must wait for the complete preamble duration to determine whether they are the target of the ensuing data transmission. As receiver nodes must wait for the preamble to terminate before receiving data, over multihop paths the per-hop latency accumulates and can become large. To further reduce energy consumption and lessen per-hop latencies, the X-MAC protocol was developed; it relies on a short preamble in which the address information of the intended receiver is contained. This allows non-intended receivers to return to sleep mode, resolving the overhearing problem. In addition, a strobed preamble is used so that the receiver node can interrupt the preamble once it has identified itself as the intended target. This further reduces energy consumption and reduces latency.
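A back-of-envelope model shows why the strobed, addressed preamble of X-MAC beats a full-length, B-MAC-style preamble; the model and all durations below are illustrative assumptions, not protocol constants:

```python
# Sketch comparing preamble costs: a B-MAC-style sender covers a whole sleep
# period, while an X-MAC-style sender strobes short, addressed preambles and
# stops as soon as the receiver wakes and acknowledges one.

SLEEP_PERIOD_MS = 100   # receiver checks the channel once per period
STROBE_MS = 2           # one short, addressed preamble strobe
ACK_MS = 2              # gap in which the receiver can send an early ACK

def bmac_preamble_ms():
    # The sender cannot know the wake phase, so it covers a full sleep period.
    return SLEEP_PERIOD_MS

def xmac_preamble_ms(wake_offset_ms):
    # The sender strobes until the receiver wakes, hears its address and ACKs.
    strobes = wake_offset_ms // (STROBE_MS + ACK_MS) + 1
    return strobes * STROBE_MS

# Average strobing time over all possible receiver wake offsets.
avg_xmac = sum(xmac_preamble_ms(t) for t in range(SLEEP_PERIOD_MS)) / SLEEP_PERIOD_MS
```

In this toy model the average X-MAC preamble is roughly a quarter of the B-MAC one, and non-target receivers can additionally go back to sleep as soon as they read the address in a strobe.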
2.3 Network Layer and Routing Protocols
The network layer in all communication networks is responsible for establishing communication channels across a network of nodes. In order to do this, switching and routing are at the core of the layer. In embedded networks, data is typically transmitted in packet form; that is, small chunks of data are grouped together and labeled with the address of the source and destination node, as well as some control information that provides the nodes along the route from source to destination with additional information to route the data; for example, routing information, data priority, congestion information, etc. Routing is a key feature of the network layer in embedded wireless networks, as it facilitates network formation and data delivery. A wide range of routing protocols have been developed for embedded wireless networks. Early designs were based upon routing protocols used in fixed networks, such as the Internet [14], but more recently proposed protocols are better suited to the needs of embedded networks. In particular, routing protocols have been developed that are able to adapt to the network and environmental context, notably battery power, the changing network topology - which may occur due to node mobility and failure - and network congestion. Routing protocols for several types of embedded networks have been developed. The main categories of embedded networks that are relevant to augmented materials and smart objects/environments are mobile ad-hoc networks (MANETs) and wireless sensor/actuator networks (WSNs). Both types of networks are ad-hoc in nature; that is, they are networks that do not necessarily have any fixed, wired infrastructure and central configuration capability, apart from a single gateway, or a few gateways, into a fixed network, typically the Internet. Many routing protocols for wireless sensor/actuator networks are also based on MANET protocols.
2.3.1 MANET Routing Protocols
A range of routing protocols for MANETs have been proposed during the last decade, which can be grouped into two main classes: proactive and reactive protocols. Proactive routing protocols discover routes and set up routing tables whether data needs to be transmitted or not. Reactive protocols only start searching for a route, if none is known, when the devices have data packets to transmit. Proactive routing protocols have the advantage of lower initial delay, as the route is already known. They usually perform better if there is low mobility in the network. However, if there is high mobility, the update rate for routing tables has to be increased, leading to increased overhead and battery power consumption, or to stale or broken routes. Reactive protocols have lower overhead and better performance in applications with high mobility - for example, in embedded vehicular communication networks - and few communicating peers [15], [16], [17]. However, at the start of a data transmission there is a potentially large initial delay caused by the route discovery. In addition to proactive and reactive protocols, hybrid protocols have been proposed in the literature, which attempt to combine the benefits of both approaches, typically using a network cluster topology with a proactive approach at the cluster level and a reactive approach at the global network level. Other routing approaches proposed for MANETs consider the geographical node location in order to better adapt to node mobility [18], [19].
Proactive routing protocols, also referred to as table-driven protocols, establish routing tables independent of the need to communicate. The protocols maintain the routing table even in dynamic environments by looking for new routes and detecting link failures. This can either be done by periodic updates or be event-driven. The best known proactive routing protocols are the DSDV (Destination-Sequenced Distance-Vector) protocol and the OLSR (Optimized Link State Routing) protocol. DSDV was one of the earliest routing protocols for mobile computing applications, proposed by Charles Perkins in 1994 [20]. It is based on traditional distance vector protocols used in fixed networks, such as the Internet, where routing tables are kept at each node, containing a number of possible routes, via the node's neighbours, and their associated costs to reach a destination. The routing algorithm selects for each packet the neighbour with minimal cost, typically expressed in terms of the number of hops towards the packet's destination, and then forwards the packet to it. It is well known that distance vector routing schemes can form loops caused by stale or broken routes, due to the distributed nature of the routing algorithms; this is aggravated by mobility in MANETs. DSDV was designed to avoid loops by adding a sequence number, originated by the destination, to each routing table entry and using this to identify the age of a routing entry when periodically exchanging route information with neighbouring nodes. OLSR is the second important proactive routing protocol used in mobile ad-hoc networks. OLSR optimises Link State Routing, known from fixed networks, limiting the flooding by introducing Multipoint Relays. OLSR is documented in RFC (Request For Comments) 3626 [21]. Further improvements have recently been proposed as an Internet Draft, OLSRv2 [22].
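The destination sequence number rule that keeps DSDV loop-free can be sketched as a routing table update function; the table layout here is an assumption made for the sketch:

```python
# DSDV-flavoured update rule: an advertised route is accepted only if it
# carries a newer destination sequence number, or the same sequence number
# with a lower hop count. This prevents stale routes from re-entering the
# table and forming loops.

def update_entry(table, dest, next_hop, hops, seq):
    """table: {dest: (next_hop, hops, seq)}; returns True if the entry changed."""
    current = table.get(dest)
    if current is None or seq > current[2] or (seq == current[2] and hops < current[1]):
        table[dest] = (next_hop, hops, seq)
        return True
    return False

table = {}
update_entry(table, "D", "B", 3, 10)   # first advertisement: accepted
update_entry(table, "D", "C", 5, 8)    # older sequence number: rejected
update_entry(table, "D", "C", 2, 10)   # same sequence, fewer hops: accepted
```

After the three updates, the table holds the freshest, shortest route to D, via C at 2 hops with sequence number 10.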
In contrast to distance vector protocols, where nodes exchange their routing tables, in link state routing the topology is exchanged between the nodes of the network. Knowing the network topology, each node can calculate the best route itself. Unlike proactive protocols, reactive routing protocols discover a route between source and destination only if data needs to be transmitted. The main reactive routing protocols are Dynamic Source Routing (DSR), Ad-hoc On-demand Distance Vector (AODV) and the Dynamic MANET On-demand (DYMO) routing protocol. All three protocols establish routes between source and destination through a route discovery mechanism, broadcasting route request messages containing the destination address to all neighbour nodes. Each node receiving a route request forwards it to its neighbour nodes until it reaches the destination, which then replies to the source. In DSR [23], [17], a node receiving a route request checks if it has a route in its routing tables; otherwise it broadcasts the route request, with its own address attached, to its neighbour nodes. While the route request traverses the network paths, each intermediate node adds its own address to the message header, thus recording the nodes traversed along the path. If an intermediate node has a route entry to the destination, or if the node is the destination itself, it does not forward the packet, but returns a route reply along the path given in the message header. If a route breaks, a route error message is sent back to the source. All packets transmitted from source to destination have the complete routing information in the packet header. The overhead is significant if small packets are transmitted along long
routes. By listening to the communication channel, nodes can also learn routes to different destinations. AODV, introduced by Perkins et al. in 1997 [24] and specified in RFC 3561 [25], reduces this problem of large packet headers by maintaining routing table entries along the route. The route discovery is similar to DSR, flooding the network with route requests when a route to the destination is not known. These route requests contain a sequence number and are forwarded by intermediate nodes if they do not have a routing table entry for the destination themselves. However, instead of attaching the route information and their own node ID to the route request, as in DSR, intermediate nodes keep an entry for the sender of the last route request in their routing table, along with the source ID and the sequence number. If the route request reaches the destination, or a node having a routing table entry to the destination, a route reply is returned. Each intermediate node knows from its routing table entry where to return the route reply messages. As intermediate nodes maintain routing tables, packets can be forwarded hop by hop, without the need for any routing information in the packet header. In both DSR and AODV, routes not utilised for some time expire and are removed from the routing tables, reducing the probability of stale or broken routing table entries. The DYMO [26] protocol is a recent development, combining the benefits of both DSR and AODV. In the route requests, information from the intermediate nodes can be attached. Routing tables are created with the route reply messages, allowing data to be sent without route information in the packet headers, thus reducing the overhead. DYMO additionally supports multiple physical layer interfaces; that is, route requests can be forwarded from one ad hoc network, via a node with several interfaces, to another ad hoc network.
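The flooding route discovery shared by DSR, AODV and DYMO can be sketched as a breadth-first flood that records the traversed path, much as a DSR route request header does; the topology is an illustrative assumption:

```python
from collections import deque

# DSR-style route discovery sketch: the route request floods the network,
# each node forwards it once and the traversed path is recorded; the first
# request to reach the destination yields the route carried by the reply.

def discover_route(neighbours, source, dest):
    """Breadth-first flood; returns the recorded path or None."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dest:
            return path                       # the route reply carries this path
        for nxt in neighbours.get(node, []):
            if nxt not in visited:            # each node forwards a request once
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

topology = {"S": ["A", "B"], "A": ["C"], "B": ["C"], "C": ["D"]}
route = discover_route(topology, "S", "D")
```

In DSR the recorded path travels in every data packet header, whereas AODV keeps the equivalent state hop by hop in the intermediate nodes' routing tables; the discovery flood itself is the same in spirit.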
2.3.2 Sensor Network Routing Protocols
The second class of embedded wireless networks are wireless sensor/actuator networks, which serve a very large application space within smart objects and augmented materials systems, as they provide the embedded infrastructure for interfacing with the physical world. Based on the specific application of an embedded network, the following classification can be used to group routing protocols for wireless sensor/actuator networks [27]: hierarchical routing, data-centric routing, location-aware routing, quality-of-service-aware routing, maintenance-aware routing and cross-layer routing protocols. Hierarchical routing protocols are used in embedded networks to deal with the scale of many such networks; the network is divided into clusters with cluster heads that control the cluster subset of the network and provide a backbone for routing. The protocols are similar to the hybrid MANET routing protocols mentioned above, where clusters are formed to segment the network into smaller parts. Nodes route data via their cluster heads, which often carry out data aggregation and reduction for energy saving while routing data towards the sink. In many implementations, cluster heads are devices of greater complexity with more
battery power than sensor devices, and may even be line powered. These routing protocols are often not optimal, but they are simple, with some control message overhead during cluster formation. On the other hand, hierarchical protocols tend to consume energy uniformly throughout the network and usually guarantee low latency, since their proactive behaviour in building clusters provides the protocol with topological information. Examples of hierarchical routing protocols are Low Energy Adaptive Clustering Hierarchy (LEACH) [28], Power Efficient Gathering in Sensor Information Systems (PEGASIS) [29], and Threshold sensitive Energy Efficient sensor Networks (TEEN) [30]. Data-centric networking puts the focus of routing on the sensor data that embedded devices gather, rather than on the node identity (as opposed to other types of networking, where the identity - address - of the node is the distinguishing aspect for routing). The resource constrained nature of embedded wireless network nodes, in terms of processing power, communication bandwidth, data storage capacity and energy, gives rise to new challenges in information processing and data management in such networks. In many embedded applications, the application may frequently query information in the network, which requires consideration of a trade-off between updates and queries. In-network data processing techniques, from simple reporting to more complicated collective communications, such as data aggregation, broadcast, multicast and gossip, have been developed. In data-centric protocols, sources send data to the sink, but routing nodes look at the content of the data and perform some form of aggregation/consolidation function on the data originating at multiple sources. Many data-centric protocols also have the ability to query a set of sensor nodes, and to use attribute-based naming and data aggregation during relaying.
Well known examples of data-centric routing protocols include Sensor Protocols for Information via Negotiation (SPIN) [31], Directed Diffusion [32] and Rumour Routing [33]. Other data-centric protocols are Gradient-Based Routing (GBR) [34], Constrained Anisotropic Diffusion Routing (CADR) [35], COUGAR [36], TinyDB [37], and Active Query forwarding In sensoR nEtworks (ACQUIRE) [38]. Location-aware routing protocols are used where the geographical location of nodes - source, destination, and intermediate nodes - is important from a routing perspective. Considering node location can also achieve more efficient routing in terms of energy consumption, data aggregation and routing delay. One distinctive routing approach that has gathered some interest recently is so-called geographically aided forwarding. Several techniques have been proposed in the literature where the availability of location information is achieved by means of GPS, or GPS-less, techniques [39–41] and is used for performing packet forwarding without requiring either the exchange of routing tables among network nodes or the explicit establishment of a route from a sender to a destination. Location-based routing protocols have been widely adopted in the design of wireless sensor networks. Most of the existing location-based routing protocols are stateful; that is, they make routing decisions based upon cached geographical information about neighbouring nodes. However, possible node movements, node failures, and energy conservation techniques in sensor networks do result in dynamic networks with frequent topology
7 Embedded Wireless Networking: Principles, Protocols, and Standards
169
transients, and thus pose a major challenge to stateful packet routing algorithms. Examples of geographical routing techniques include Geographic and Energy Aware Routing (GEAR) [42], GeRaF [43], Minimum Energy Communication Network (MECN) [44], Small MECN (SMECN) [45], and Geographic Adaptive Fidelity (GAF) [46]. Quality-of-Service (QoS)-aware routing protocols base routing decisions on the specific quality of service needs of the applications that the embedded network supports, while at the same time trying to minimize energy consumption. Some applications have specific delay requirements - for instance, surveillance applications that require the routing protocols to be cognisant of delay - and other aspects include reliability, where data loss is unacceptable. Many other quality of service attributes are used in embedded networks, leading to a wide variety of these types of protocols, such as Sequential Assignment Routing (SAR) [47], SPEED [48] and the Energy Aware QoS Routing Protocol. Other protocols include Maximum Lifetime Energy Routing [49], Maximum Lifetime Data Gathering [50], and Minimum Cost Forwarding [51]. Protocols developed to provide increased reliability and accuracy of sensor data, such as those presented in [52], [53] and [54], are also important QoS-aware routing protocols in embedded wireless networks. Maintenance aware routing protocols have been proposed recently as a means to acknowledge that, in many circumstances, the nature of embedded wireless networks may not permit access to the nodes for maintenance purposes - that is, battery replacement or repairs - without difficulty, or at all. An example of such a routing protocol can be found in [55]. Cross-layer routing protocols are based on recent approaches to overcome some of the inefficiencies that the original, strictly-layered approach to computer networking creates.
Cross-layer optimisation techniques attempt to use information available in other layers to make routing decisions; for example, if congestion is present at the MAC layer, a node can avoid being included in the current route request, and if the physical connection to a node is lost, the route update can be initiated immediately rather than waiting until data needs to be sent, which creates delays. Examples of cross-layer routing protocols can be found in [56, 57].
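The cross-layer idea can be sketched as follows; the cost function, weight and neighbour figures below are invented for illustration and are not drawn from [56, 57]. A node choosing its next hop blends a network-layer metric (hop count to the sink) with MAC-layer state (the fraction of time the channel was recently sensed busy).

```python
# Hypothetical cross-layer next-hop selection: penalise neighbours
# whose MAC layer reports a congested channel, even if they are
# topologically closer to the sink. Weights are illustrative.

def route_cost(hops_to_sink, mac_busy_fraction, alpha=2.0):
    """Lower is better; alpha trades path length against congestion."""
    return hops_to_sink + alpha * mac_busy_fraction

# Candidates: (name, hops to sink, observed MAC busy fraction)
neighbours = [("n1", 2, 0.9), ("n2", 3, 0.1), ("n3", 2, 0.5)]

best = min(neighbours, key=lambda nb: route_cost(nb[1], nb[2]))
print(best[0])  # n3: same hop count as n1 but far less congested
```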
2.4 Transport Protocols
A second aspect of networking in computer networks is end-to-end data delivery. This is usually the task of the transport layer. The role of the transport layer, and its protocols, is to provide reliable end-to-end data delivery and traffic and congestion control, both in a fair manner. The standard protocol used in most computer networks is the Transmission Control Protocol (TCP) [6, 7], which is used on the Internet. TCP provides reliable end-to-end delivery of data, employing a retransmission mechanism when data is lost or delayed, and it also controls congestion in the network. However, TCP is not the most efficient protocol for embedded
D. Pesch et al.
networks. A number of changes have been proposed to TCP to better adapt it for use over wireless channels [58] and for application in low power embedded networks. Examples are nanoTCP [59], nanoUDP, 6LoWPAN (see below), ZigBee (see below), and event-to-sink reliable transport (ESRT) [60], which has been proposed for specific applications in sensor networks. In the following section, an overview of the key wireless communication standards for embedded networks is presented, followed by some exemplary proprietary technologies.
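The cost of retransmission-based reliability on lossy links, which motivates these adaptations, can be seen in a minimal stop-and-wait sketch; this is a deliberately simplified model, not the mechanism of TCP, ESRT or any protocol named above, and the loss probability and retry limit are invented.

```python
import random

def send_reliable(packets, loss_prob=0.3, max_retries=5):
    """Stop-and-wait sketch: resend each packet until an ACK arrives
    (simulated by a coin flip) or the retry budget is exhausted."""
    transmissions, delivered = 0, []
    for pkt in packets:
        for _ in range(max_retries):
            transmissions += 1
            if random.random() > loss_prob:  # packet and ACK got through
                delivered.append(pkt)
                break
    return delivered, transmissions

random.seed(1)  # reproducible run
delivered, tx = send_reliable(list(range(10)))
print(len(delivered), tx)  # every retransmission is radio energy spent
```

On a link with 30% loss, roughly 1/(1-0.3) ≈ 1.4 transmissions per packet are needed on average; this energy overhead is precisely what makes plain TCP unattractive for battery-powered nodes.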
3 Wireless Network Standards

3.1 Communication Standards for Embedded Devices
Embedded devices are heterogeneous and are adapted to greatly differing application needs. Therefore, varied communication technologies can be used, including cellular communication systems, such as GSM and 3G (e.g. WCDMA/UMTS), WLAN technology as standardized by the IEEE in the 802.11 standard, Wireless Personal Area Network (PAN) communication standards, such as Bluetooth/802.15.1 and ZigBee/802.15.4, Near Field Communication (NFC) and Radio Frequency Identification (RFID) technology. Cellular systems are typically more power hungry than other wireless communication approaches due to the distances they need to cover and the complexity of the protocols. They also require registration and a contract with a network provider and are therefore only suitable for specific application fields. WLAN technology, based on IEEE 802.11, is widely deployed in companies and private homes. It offers relatively high data rates, up to 54 Mbit/s using 802.11g and well above 100 Mbit/s [61] with 802.11n, expected to be published in 2009. The high data rates offered by IEEE802.11 make it more power hungry than many other technologies used for embedded networking, and it is used mainly where these high data rates are needed; for example, video surveillance applications. For Personal Area Networks, that is, the network of devices a person may carry with them (e.g. mobile phones, headsets, PDAs), the Bluetooth standard was introduced. Pushed by industry, and in particular by Ericsson, the standard was developed and the lower layers became an official IEEE standard (802.15.1) in June 2002 [62]. A modified version of Bluetooth, called WiBree, operating over shorter distances and with ultra low power consumption, was specified by Nokia and first published in the fall of 2006. In June 2007, WiBree joined the Bluetooth SIG1 and now serves as a special low power, low range physical layer for Bluetooth type services. The WiBree physical layer is not (yet) part of IEEE 802.15.1.
1 http://www.bluetooth.org
IEEE has standardised another low power communication technology covering physical and MAC layers, the IEEE802.15.4 Low Power Personal Area Network standard. Industry-sponsored standards groups have complemented this to provide other protocol layers to create a complete network standard. The main standards groupings to do this are the ZigBee Alliance2, the HART Foundation3, and the ISA4. In those standards, IEEE 802.15.4 specifies the physical and MAC layers, while the ZigBee Alliance - similar to the Bluetooth Special Interest Group (SIG) - the HART Foundation and the ISA have specified the higher layers, services and application scenarios for their respective system standards. The ZigBee higher layers and services are substantially less complex than the Bluetooth protocol stack and for this reason the standard is particularly suitable for low complexity, energy limited devices, such as sensor nodes and embedded devices. However, market penetration is still low as there are doubts about the energy efficiency of the ZigBee protocol stack [63]. In the following sections, a range of communication systems are introduced in more detail.
3.2 Cellular Mobile Systems Standards
Cellular mobile communication systems are well planned and designed computer networks, deployed by network operators who have a license to operate a certain system in a particular frequency band and region, as granted by regulatory authorities. The most successful mobile communication system is without doubt GSM (Global System for Mobile Communications). GSM started as the second generation mobile phone system in Europe, gradually replacing analogue (first generation) systems, with the first network operational in Finland in 1991. GSM was primarily designed for voice services with some data capabilities. Besides voice, circuit switched data services and short messages (Short Message Service, SMS) have been introduced. SMS has been an unexpected, overwhelming success, with around 1.9 trillion messages sent in 2007, generating revenue of some 52 billion US$ [64]. With the increasing demand for mobile data communication, GSM has evolved and now supports High Speed Circuit Switched Data (HSCSD), with rates up to 57.6 kbit/s achieved by bundling several data channels, and packet switched services (General Packet Radio Service, GPRS), with data rates theoretically up to 171.2 kbit/s and realistically up to 115 kbit/s. GPRS is often referred to as a 2.5G mobile communication system. GSM's data and messaging capabilities are attractive for embedded applications and are used to wirelessly connect to remote embedded systems; this is possible due to the ubiquitous availability of cellular system services across many geographical areas.
2 http://www.zigbee.org
3 http://www.hartcomm.org
4 http://www.isa.org
Third generation cellular mobile systems were internationally standardised and harmonised by the International Telecommunication Union (ITU), with the vision of establishing a single world-wide standard. In the end, two different implementations of IMT-2000 (International Mobile Telecommunications 2000) were realised: W-CDMA/UMTS, standardised by the third generation partnership project (3GPP), and cdma2000 by 3GPP2. The cdma2000 implementation allows backwards compatibility with cdmaOne, popular in the US, whereas WCDMA/UMTS represents an evolutionary path from GSM to 3G. Third generation systems offer higher data rate services, initially at 384 kbit/s, as well as a higher spectral efficiency. Currently, 3G extensions (3.5G) are deployed; for example, HSDPA/HSUPA (High Speed Downlink Packet Access, High Speed Uplink Packet Access), providing theoretical peak data rates of up to 14.4 Mbit/s in the downlink and 5.7 Mbit/s in the uplink. IEEE802.16 (WiMAX) and 4G systems, such as the Long Term Evolution (LTE) and System Architecture Evolution (SAE) of 3G, are under discussion at the moment, leading to substantially higher data rates - above 100 Mbit/s - and all-IP network architectures [65]. Third generation mobile communications systems and their evolution are well described in the literature [66–68]. For embedded wireless networks, mobile communication systems, such as GSM and UMTS, are important if the embedded device itself is mobile or needs to communicate with mobile objects and persons equipped with mobile phones. They are also important to provide wireless and mobile wide area connectivity to embedded wireless monitoring and control systems installed in remote locations.
3.3 IEEE802.11 WLAN
IEEE 802.11 is a set of standards for wireless local area network (WLAN) computer communication, developed by the IEEE LAN/MAN Standards Committee (IEEE 802) in the 5 GHz (802.11a) and 2.4 GHz ISM (Industrial, Scientific, and Medical) public spectrum bands. The 802.11 suite is designed to provide wireless connectivity for laptop and desktop computers and consequently provides much higher data rates than may be necessary for most wireless embedded network applications, apart from image or video based applications. Wireless LANs were originally designed as an alternative to fixed LANs for portable computers. Starting with 1–2 Mbit/s in 1997, WLAN systems today reach approximately 100 Mbit/s, using a draft version of the upcoming IEEE802.11n standard. The IEEE802.11 standard is comparatively complex and has not been designed for high energy efficiency, although recent implementations reach similar energy per bit ratios to other low power technologies, such as IEEE802.15.4 (see below). However, the complexity of the 802.11 protocol stack requires approximately 30 times more memory than the ZigBee/802.15.4 protocol stack (~1 MByte vs. 4–32 kByte). The main objective of the WLAN design was to enable wireless networking of computers with high data rates over short distances within buildings, or up to a few hundred meters in outdoor environments. However, approximately 30% of IEEE802.11
based chipsets are now being used for non-PC systems, such as mobile phones, digital cameras, camcorders and mp3 players. For a detailed technical description of WLAN, refer to the literature [69–71]. A short summary is given in the following section, starting with the physical layer and continuing with higher layers. In comparison to cellular mobile systems, WLAN, and also Wireless Personal Area Network systems, use the ISM frequency bands, which do not require operating licenses in most countries of the world. For WLAN, the 2.4 GHz and 5 GHz bands are being used. The initial standard started with Direct Sequence Spread Spectrum (DSSS) technology using the 11 chip Barker code and DBPSK and DQPSK modulation schemes in the 2.4 GHz band. With this legacy 802.11 standard a relatively robust transmission, with a 1–2 Mbit/s transmission rate, is achieved, utilizing a chip rate of 11 Mchip/s and occupying a bandwidth of approximately 22 MHz. Eleven frequency channels have been defined for the US and thirteen for Europe, with a channel spacing of 5 MHz. Given the 22 MHz required bandwidth per channel, only three non-overlapping channels are available in the 2.4 GHz ISM band. In Europe an EIRP (equivalent isotropically radiated power) of 100 mW is permitted and in the US it may be up to 1 W. In 1999, 802.11b was introduced and brought a commercial breakthrough to WLAN, raising the data rate to 11 Mbit/s within the same bandwidth by applying Complementary Code Keying (CCK) modulation. In 802.11a, also introduced in 1999, OFDM (orthogonal frequency division multiplexing) and different modulation schemes, from BPSK to 64-QAM, are applied and provide data rates from 6 to 54 Mbit/s in the less congested 5 GHz band. Here, 12 non-overlapping channels are available for WLAN traffic, each 20 MHz wide. For Europe, a lower EIRP of 50 mW is permitted.
With transmit power control (TPC) and dynamic frequency selection (DFS), 250 mW is acceptable, as described in the IEEE 802.11h standard. The specific conditions for Japan in the 5 GHz frequency range are addressed in IEEE 802.11j. IEEE 802.11g, introduced in 2003, brought OFDM and data rates of up to 54 Mbit/s to the 2.4 GHz band. A further increase is expected from the 802.11n standard, still under development within the standardisation process; 802.11n will utilize multiple antenna (Multiple Input Multiple Output – MIMO) technology to achieve data rates beyond 100 Mbit/s. Pre-standard products are on the market and are even certified by the WiFi Alliance, an industry consortium certifying WLAN 802.11 products to ensure interoperability between different vendors. Two different operational modes are specified in IEEE802.11: infrastructure and ad-hoc mode. In infrastructure mode the communication between two stations is via an Access Point (AP). Access to the wireless medium is either contention based, referred to as the Distributed Coordination Function (DCF), or contention-free, coordinated by the Access Point and referred to as the Point Coordination Function (PCF). In the distributed coordination function (DCF), which is mandatory for WLAN systems, the CSMA/CA random access scheme is used as described in section 2.2. To avoid the hidden node problem, RTS/CTS (Request to Send/Clear to Send) can optionally be applied. In PCF mode, an optional mode for 802.11 systems, the Access Point can poll different stations and therefore QoS can be better
supported. However, the transition from contention to contention-free periods cannot be scheduled and no QoS classes are defined. Both of these issues have been resolved with the IEEE 802.11e standard, released in 2005. In ad-hoc mode only the distributed coordination function (DCF) is used, accessing the channel via the CSMA/CA scheme. Stations can communicate with each other directly, without the need to authenticate to an access point. Network layer ad-hoc protocols (MANET), as introduced in section 2.3, typically utilize the MAC layer ad-hoc mode, but are not part of the IEEE standard. Other parts of the IEEE802.11 series address interoperability issues (IEEE802.11h/j), mesh networking (IEEE802.11s) and embedded wireless networking for car to car communication (IEEE802.11p).
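The DCF contention mechanism described above can be sketched as follows; the contention-window values follow the common 802.11 pattern of 15 doubling up to 1023, but the snippet is an illustration, not a standard-conformant implementation.

```python
import random

def backoff_slots(retry, cw_min=15, cw_max=1023):
    """Pick a random backoff (in slot times) from a contention window
    that doubles after every failed attempt, DCF-style."""
    cw = min((cw_min + 1) * (2 ** retry) - 1, cw_max)
    return random.randint(0, cw)

# Contention-window upper bound after successive collisions:
cw_bounds = [min((15 + 1) * (2 ** r) - 1, 1023) for r in range(8)]
print(cw_bounds)  # [15, 31, 63, 127, 255, 511, 1023, 1023]

random.seed(0)
samples = [backoff_slots(r) for r in range(4)]
print(samples)  # sampled waits grow (on average) with each retry
```

Doubling the window after each collision spreads competing stations further apart in time, which is what keeps throughput from collapsing as the number of contenders grows.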
3.4 IEEE802.15.1, Bluetooth and WiBree
Bluetooth [72] was developed by the Bluetooth Special Interest Group as a low power, short range communication technology for Wireless Personal Area Networks. Applications target the connectivity of devices such as laptop computers, mobile phones, printers, and also audio devices, such as wireless headsets. For Bluetooth-based communications, Bluetooth profiles are wireless interface specifications that provide configurations to meet application requirements. Bluetooth can be used for sensor network applications, but as with the 802.11 suite it provides data rates - up to 723 kbit/s (asymmetric) or 432 kbit/s (symmetric) in the first Bluetooth version and even up to 3 Mbit/s in Bluetooth 2.0 EDR - that are higher than required for many WSN applications. Moreover, Bluetooth's Frequency Hopping Spread Spectrum (FHSS) wireless communication does not allow for long inactive periods and therefore power consumption is too high for many WSN applications. Additionally, Bluetooth does not scale well to larger networks. Another disadvantage of Bluetooth is the relatively slow pairing time when new nodes enter a Bluetooth network. Furthermore, only eight devices, including the coordinator, can be active members of a Personal Area Network (Piconet). Around 250 parked members can join the network. Neither number is sufficient for many embedded system applications. Piconets can be interconnected to form Scatternets using common gateway nodes. However, this topology approach is rather inflexible. WiBree [73] is a new digital radio technology, developed by Nokia as an extension to Bluetooth and designed for ultra low power consumption, within a short range of approximately 5–10 meters. WiBree is designed to complement rather than replace Bluetooth. It is aimed at interconnecting small devices that do not need full Bluetooth functionality and consumes a fraction of the power of related Bluetooth technology. It operates in the 2.4 GHz ISM band with a physical layer bit rate of 1 Mbit/s.
Targeted applications include sport and wellness, the wireless office and mobile accessories, healthcare monitoring, and entertainment equipment; there is a focus on interconnecting devices such as watches, keyboards and sports sensors to mobile phones, with low power consumption being a key design requirement. WiBree is aimed at creating sensor networks around mobile phones rather than large scale networks.
3.5 IEEE802.15.4, ZigBee, WirelessHART, ISA-SP100.11a
IEEE 802.15.4 is the main IEEE standard specifying the physical layer and medium access control for low cost, low power, low data rate, personal area (short range) wireless networks. IEEE802.15.4 operates in one of three possible ISM bands: 868 MHz, 915 MHz, and 2.4 GHz. Similar to IEEE802.11, IEEE802.15.4 uses a direct sequence spread spectrum (DSSS) based physical layer. In the 868 MHz frequency band, data transmission rates of up to 20 kbit/s are possible using BPSK modulation with a 15-chip spreading code, resulting in a chip rate of 300 kchip/s. Only one channel is available here. In the 902–928 MHz range there are 10 channels available with data rates of 40 kbit/s per channel, using BPSK modulation and the same 15-chip spreading code. In the 2.4 GHz range, data rates of up to 250 kbit/s can be supported using O-QPSK modulation with a 32-chip spreading code, resulting in a chip rate of 2 Mchip/s and 5 MHz channel spacing. IEEE802.15.4 distinguishes two types of network node: the full function device (FFD) - which can operate as the coordinator of a personal area network and is then referred to as the PAN coordinator - and the reduced function device (RFD). RFDs are intended to be extremely simple devices with very lightweight resource and communication requirements, and such nodes can only communicate with FFDs. Due to power and functionality restrictions, RFDs are precluded from acting as coordinators. Networks can be topologically configured as either peer-to-peer or star networks, as per Fig. 7.3, with networks requiring one FFD to act
[Figure: Star Topology; Tree Topology (Peer to Peer) (not recommended in ZigBee). Legend: Full Function Device, Reduced Function Device, Network Co-ordinator.]
Fig. 7.3 IEEE802.15.4/ZigBee Topology Configurations
as the coordinator. Address identifiers are unique 64-bit identifiers, with the possibility to use a short 16-bit identifier within a restricted domain (i.e. individual PANs). As the standard does not specify a network layer, routing is not directly supported, but a subsequent layer can be added to provide support for multi-hop communications. Two operating modes, non-beacon and beacon enabled, are possible, with physical medium channel access being achieved via a CSMA/CA protocol. For non-beacon mode, unslotted channel access is based on listening to the medium for a time window scaled by a random exponential back-off algorithm. In beacon enabled mode the coordinator broadcasts beacons periodically to synchronise the attached devices. A superframe structure is used in beacon enabled mode and its format is determined by the coordinator, with successive beacons acting as the superframe limits. Contention within superframes is resolved by CSMA/CA and each transmission must end before the arrival of the second beacon. The focus of 802.15.4 is the provision of low power communication between nearby devices with little to no reliance on underlying infrastructure, and this has seen the standard adopted as the main wireless communication technology for automation and control applications. ZigBee [74] is a low-cost, low-power, wireless mesh networking standard that defines the network and application layers that sit on top of IEEE 802.15.4. The ZigBee Alliance comprises industrial partners that include Philips, Motorola and Honeywell. The ZigBee standard provides technology that is suitable for wireless control and monitoring applications: the low power usage allows longer life with smaller batteries, and the mesh networking provides high reliability and larger range. Among the main applications being targeted is building automation – light control, meter reading, etc. The current release is ZigBee 2007 and this offers two stack profiles.
A lightweight stack profile 1, referred to as ZigBee, is aimed at home and light commercial use, whereas stack profile 2, called ZigBee Pro, provides additional features, including multicasting, many-to-one routing and high security with Symmetric-Key Key Exchange. Both stack profiles provide full mesh networking and work with all ZigBee application profiles. The ZigBee standard defines three different device types: ZigBee Coordinator (ZC), ZigBee Router (ZR), and ZigBee End Device (ZED). Only one ZC is required per network; it initiates network formation, assigns addresses and acts as the IEEE802.15.4 PAN co-ordinator. The ZR is an optional network component, which associates with the ZC or another ZR. The ZR acts as an IEEE802.15.4 coordinator, is a Full Function Device, provides local address management, participates in multi-hop/mesh routing and maintains routing tables. The ZED is also an optional network component, although essential in most real networks, as it is typically the device that provides sensing and control functionality within the network. The ZED is an IEEE802.15.4 Reduced Function Device (RFD) and therefore needs to associate itself with a ZC or ZR in order to send data towards the PAN coordinator. ZEDs rely on parent devices (FFD) to initiate sleep cycles and do not participate in association or routing.
As in IEEE802.15.4, ZigBee uses both beacon and non-beacon modes. In the non-beacon enabled mode channel access is supported through an unslotted CSMA/CA protocol. This mode typically requires ZR devices to have their receivers always on, and this necessitates a constant (line-powered) power supply. In beacon enabled networks the ZRs schedule periodic beacons to identify their presence in the network. As a consequence of periodic beaconing, other network nodes, in particular ZigBee End Devices, can sleep between beacons, which facilitates a smaller duty cycle and prolonged battery life. ZigBee uses a basic master-slave topology configuration, shown in Fig. 7.3, suited to static star networks of many infrequently used devices. In the star configuration, ZigBee supports a single hop topology, constructed with one coordinator in the centre and end devices. Devices only communicate via the network coordinator; this is necessary for RFDs as they are not capable of routing. The tree topology is a multiple star topology configuration with one central node acting as the ZigBee network coordinator. For mesh configurations the FFDs communicate without the aid of a network coordinator and the FFDs serve as routers, forming a reliable network structure, as shown in Fig. 7.4. The ZigBee protocols aim at minimising power usage by reducing the duration that the radio transceiver is on, but there are deficiencies associated with this approach in that there is no support for energy efficient routing for networks with mesh topologies; ZigBee does not provide beacon scheduling for such topologies.
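The battery-life benefit of sleeping between beacons can be estimated with simple duty-cycle arithmetic; all current and capacity figures below are assumed for illustration and are not taken from any radio's datasheet.

```python
# Back-of-envelope lifetime estimate for a ZED-style sleeping device.
# 1000 mAh cell, 20 mA active, 5 uA asleep: illustrative assumptions.

def lifetime_hours(capacity_mah, active_ma, sleep_ma, duty_cycle):
    """Duty-cycle-weighted average current -> lifetime in hours."""
    avg_ma = duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma
    return capacity_mah / avg_ma

always_on = lifetime_hours(1000, 20.0, 0.005, 1.0)     # receiver never sleeps
one_percent = lifetime_hours(1000, 20.0, 0.005, 0.01)  # 1% duty cycle

print(round(always_on))         # 50 hours: barely two days
print(round(one_percent / 24))  # roughly 200 days on the same cell
```

This arithmetic is why always-on ZigBee routers are assumed to be line powered, while duty-cycled end devices can run for months or years on a small battery.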
[Figure: Mesh with Star Clusters; Mesh Topology. Legend: Network Router (FFD), Network End Device (RFD, FFD), Network Co-ordinator (FFD); mesh links and star links.]
Fig. 7.4 ZigBee Mesh Topology Configurations
For FFDs to act as routers in mesh topologies, they need to be line powered, as they have to be in listening mode all the time, which drains battery power. The routing protocol is relatively static, with route re-discovery occurring as part of route maintenance, and this leads to slow recovery from node failures. Likewise, routing is not scalable as it is based on AODV, and there is no provision for efficient real-time short address allocation algorithms. The WirelessHART standard, part of HART release 7, was approved and released in June 2007. This is a wireless extension of the HART Communication Foundation's HART protocol (IEC 61158), used for networking embedded control devices in industrial automation and control environments. The HART Communication Foundation is an independent, not-for-profit organisation and is the technology owner and standards body for the HART Protocol. The foundation has members that include the major instrumentation manufacturers and users on a global scale: ABB, Adaptive Instruments, Crossbow Technology, Dust Networks, ELPRO Technologies, Emerson Process Management, Endress+Hauser, Flowserve, Honeywell, MACTek, MTL, Omnex Control Systems, Pepperl+Fuchs, Phoenix Contact, Siemens, Smar, Yamatake and Yokogawa. HART claims that its WirelessHART standard is the first open and interoperable wireless communication standard focused on providing reliable, robust and secure wireless communication in real world industrial plant applications. WirelessHART uses IEEE802.15.4 for the physical and MAC layers and adds self-organising, self-healing mesh based networking. WirelessHART is seen as being complementary to wired HART technology rather than a replacement, extending the capabilities of existing wired applications. At present it is estimated that HART technology is used in more than 25 million installed devices worldwide.
The objectives of the WirelessHART standard are: 99% reliability, 3–10 year battery life for wireless devices, mesh, star, and combined networks (rather than just point-to-point) and backward compatibility with all equipment in the field. WirelessHART aims to provide more data in real time, with wireless capability giving easier access to new intelligent device and process information, offering multivariable process data, as well as status, diagnostic, and configuration data. It is claimed that it improves asset management, environmental monitoring, energy management, regulatory compliance, and access to remote or inaccessible equipment (personnel safety). WirelessHART claims to offer more flexibility in that the wireless technology allows attachment of HART-based controllers anywhere in the control loop and offers, through the HART protocol, compatibility with legacy systems. The ISA SP100.11a standard is billed as being the first of a family of standards for multiple industrial applications and is a standardisation effort by ISA, the society for automation and control professionals. ISA-SP100.11a is a new wireless protocol standard based upon IEEE802.15.4 and it is aimed at providing a wireless networking solution for industrial automation equipment. ISA is currently considering 6LoWPAN (see below) as an option for the network layer of the SP100.11a standard. SP100.11a is developed as an open standard and, currently, efforts are underway to align WirelessHART with ISA-SP100.11a.
3.6 6LoWPAN
The 6LoWPAN standard, specified by the IETF in RFC4944, provides IP networking capabilities for IEEE802.15.4 devices and supports Internet connectivity for IEEE802.15.4 networks. The standard proposes an adaptation layer to provide interoperability between IPv6 and 802.15.4 networks, and provides support for mesh topologies, IP header compression, and unicast and multicast routing. The targeted application space for 6LoWPAN is low data rate applications, such as automation in home, office and industrial environments, which require wireless Internet connectivity.
3.7 Z-Wave
Z-Wave is an interoperable wireless communication protocol developed by Zensys and the Z-Wave Alliance that is focused on low power, low bandwidth applications for home automation and sensor networks. The Z-Wave Alliance is a consortium of independent manufacturers that develop wireless home automation products built on the Zensys Z-Wave open standard. Z-Wave provides 40 kbit/s data transmission capability and is fully interoperable, with an open air range of approximately 30 meters, which is reduced for indoor applications depending on the environment. The Z-Wave radio uses the 900 MHz ISM band, and a network can contain up to 232 devices, with the option of bridging networks to support additional devices; routing relies on an intelligent mesh network topology without the need for a master node.
4 Proprietary Technologies
A wide range of proprietary and application specific wireless communication technologies exist that are tailored for specific embedded networking applications. Many of the large RF chip manufacturers now provide their own wireless networking software with their chipsets in order to promote sales. Where not based on standards, this networking software is proprietary and targets a particular application range that is often not well covered by standards based protocols, or where standards based technology is not necessary. A selection of such manufacturers and technologies includes Texas Instruments' Chipcon range, Nordic Semiconductor, Analog Devices, RF Monolithics, and others. While Texas Instruments have IEEE802.15.4 compliant chipsets, they also provide their own proprietary network technology, called SimpliciTI [75], with other low power RF chipsets, such as the CC110x and CC2500. Nordic Semiconductor offers the ANT protocol [76] with a selected range of their low power RF chipsets.
SimpliciTI is a Texas Instruments proprietary low-power RF network protocol, using Texas Instruments CC1XXX/CC25XX chipsets, suitable for use over small (not exceeding 256 devices) RF networks, and aimed at battery operated devices with low data requirements and low duty cycle. SimpliciTI supports peer-to-peer communication with access points and range extenders (max 4 hops) for multihop communications. This is a low cost protocol with a memory footprint of less than 4 kByte of Flash and less than 512 bytes of RAM. SimpliciTI is designed for easy deployment and is compatible with several of Texas Instruments' RF platforms, including the low power MSP430 MCUs (microcontrollers) and the CC1XXX/CC25XX transceivers and SoCs (system-on-a-chip). The target application spaces are alarm & security, smoke detectors, automatic meter reading and active RFID applications. The wireless desktop protocol stack was designed by Nordic for use with their family of ultra low power 2.4 GHz transceivers. The protocol was initially designed and optimised for control of PC peripherals, but it is also suitable for other applications, such as sensor applications for home automation, industrial control and monitoring, and game controllers. Networking is based on the ANT protocol and is centred on a single host having multiple control devices, with native protocol support for up to 5 control devices, configured in a star topology for bidirectional and unidirectional communication. The protocol's 'ReverseBurst' feature enables high-throughput data streaming from the host to the device, making it suitable for bidirectional communications. RadioWire's MicrelNet is a low power wireless networking technology that uses an FHSS radio and provides support for a flexible multi-level star networking topology. This can be extended to support a multi-level, multi-cluster based network topology with repeater functionality. MicrelNet is available as a free stack from Micrel Inc.
for use in conjunction with Micrel’s RadioWire range of RF transceivers. For lower power consumption, the RadioWire transceiver is designed to operate in the 868MHz and 902–908MHz ISM bands in preference to 2.4GHz.
7 Embedded Wireless Networking: Principles, Protocols, and Standards

5 Summary and Conclusions

An overview of wireless networking principles has been presented, with a focus upon wireless embedded networking. Following the introduction of protocol principles, the current state of the art in MAC and routing protocols for embedded wireless networks, together with selected other protocols, was summarised. The most important communication standards for wireless embedded networks were also presented, followed by selected proprietary communication technologies for embedded networks. The range of wireless communication and networking techniques and standards is large, a consequence of the enormous variety of applications for networked embedded systems, which span every aspect of modern life. The main research foci for protocol development in embedded wireless networks have been the reduction of battery power consumption for increased node and network lifetime, network self-configuration, and network scalability.

Power consumption is one of the key challenges for wireless embedded systems, as they tend to be battery powered. Many applications demand long battery lifetimes, extending to years rather than a few days. Thus, approaches that are acceptable for mobile phone or laptop-based WLAN applications are not acceptable for embedded applications. Increased node lifetime is achieved through extended sleep cycles for the wireless modules, and many of the protocol developments presented in this chapter focus on low duty cycle operation, often coupled with efficient routing. Improving power consumption is an open-ended problem and will remain a key research issue for embedded networking in the smart object, wireless sensing and augmented materials/spaces areas for the foreseeable future. Self-configuration is another important issue, as embedded network deployment is often ad-hoc and carried out by non-experts. This requires that network formation, as well as node and network configuration, be executed in a self-configuring manner. Essentially all protocols for embedded wireless networks achieve this with more or less success. Robustness and scalability are open issues for self-configuration, and more work is needed in this area. Scalability, in particular, is an increasing concern for embedded networks, as many applications foresee the use of thousands of embedded network nodes. It is still unclear whether existing protocols will scale to networks of this size or whether new protocols will have to be developed. It is already clear that the existing wireless communication and network standards discussed in this chapter are unlikely to scale to thousands of devices. More research in this direction is required, which may lead to new, more scalable standards for embedded wireless networks.
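As a back-of-the-envelope illustration of why low duty cycles dominate this design space, the sketch below estimates node lifetime from a battery capacity and the time-averaged radio current. The figures are invented for illustration and are not taken from any chipset datasheet.

```python
def lifetime_days(capacity_mah, active_ma, sleep_ma, duty_cycle):
    """Estimate battery lifetime from the time-averaged current drawn
    by a radio that is active for `duty_cycle` fraction of the time."""
    avg_ma = duty_cycle * active_ma + (1.0 - duty_cycle) * sleep_ma
    return capacity_mah / avg_ma / 24.0   # hours -> days

# Hypothetical node: 2400 mAh battery, 20 mA active, 0.005 mA asleep.
always_on = lifetime_days(2400, 20.0, 0.005, duty_cycle=1.0)
low_duty = lifetime_days(2400, 20.0, 0.005, duty_cycle=0.001)

print(round(always_on, 1))  # 5.0 days when the radio never sleeps
print(round(low_duty))      # about 4000 days (roughly 11 years) at 0.1% duty cycle
```

The three orders of magnitude between the two results is why protocol research concentrates on keeping the radio asleep rather than on marginal gains in active-mode efficiency.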
References

1. A. S. Tanenbaum, "Computer Networks", 4th ed., Prentice Hall, 2001.
2. J. Proakis, "Digital Communications", 4th ed., McGraw-Hill, 2000.
3. A. Goldsmith, "Wireless Communications", Cambridge University Press, 2005.
4. B. Sklar, "Digital Communications: Fundamentals and Applications", 2nd ed., Prentice Hall, 2001.
5. Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, ANSI/IEEE Std 802.11, 1999 Edition.
6. J. Irvine, D. Harle, "Data Communications and Networks: An Engineering Approach", John Wiley & Sons, 2001.
7. W. Stallings, "Data and Computer Communications", 8th ed., Prentice Hall, 2006.
8. F. Tobagi, L. Kleinrock, "Packet Switching in Radio Channels: Part II – The Hidden Terminal Problem in Carrier Sense Multiple-Access and the Busy-Tone Solution", IEEE Transactions on Communications, vol. 23, no. 12, Dec. 1975, pp. 1417–1433.
9. W. Ye, J. Heidemann, D. Estrin, "An Energy-Efficient MAC Protocol for Wireless Sensor Networks", Proc. 21st Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM '02), New York, NY, USA, 2002.
10. T. van Dam, K. Langendoen, "An Adaptive Energy-Efficient MAC Protocol for Wireless Sensor Networks", Proc. 1st ACM Conference on Embedded Networked Sensor Systems (SenSys), 2003.
11. J. Polastre, J. Hill, D. Culler, "Versatile Low Power Media Access for Wireless Sensor Networks", Proc. 2nd ACM Conference on Embedded Networked Sensor Systems (SenSys), November 2004.
12. A. El-Hoiydi, J.-D. Decotignie, "Low Power Downlink MAC Protocols for Infrastructure Wireless Sensor Networks", ACM Mobile Networks and Applications, 10(5):675–690, 2005.
13. M. Buettner, G. Yee, E. Anderson, R. Han, "X-MAC: A Short Preamble MAC Protocol for Duty-Cycled Wireless Sensor Networks", Proc. 4th ACM Conference on Embedded Networked Sensor Systems (SenSys), 2006.
14. C. E. Perkins, "Ad Hoc Networking", Addison-Wesley Professional, 2001, ISBN-10: 0201309769, ISBN-13: 978-0201309768.
15. T. Clausen, P. Jacquet, L. Viennot, "Comparative Study of Routing Protocols for Mobile Ad-hoc Networks", First Annual Mediterranean Ad Hoc Networking Workshop, 2002.
16. S. R. Das, R. Castañeda, J. Yan, "Simulation-based Performance Evaluation of Routing Protocols for Mobile Ad Hoc Networks", Mobile Networks and Applications, Kluwer Academic Publishers, 2000, 5, pp. 179–189.
17. C. E. Perkins, E. M. Royer, S. R. Das, "Performance Comparison of Two On-demand Routing Protocols for Ad Hoc Networks", IEEE Personal Communications, February 2001.
18. R. Jain, A. Puri, R. Sengupta, "Geographical Routing Using Partial Information for Wireless Ad Hoc Networks", IEEE Personal Communications, vol. 8, no. 1, February 2001.
19. Y.-B. Ko, N. H. Vaidya, "Location-Aided Routing (LAR) in Mobile Ad Hoc Networks", Proc. 4th Annual ACM/IEEE International Conference on Mobile Computing and Networking (MOBICOM '98), Dallas, Texas, 1998.
20. C. Perkins, P. Bhagwat, "Highly Dynamic Destination-Sequenced Distance-Vector Routing (DSDV) for Mobile Computers", Proc. ACM SIGCOMM '94 Conference on Communications Architectures, Protocols and Applications, 1994.
21. T. Clausen, P. Jacquet, "Optimized Link State Routing Protocol (OLSR)", IETF Network Working Group, RFC 3626, 2003.
22. T. Clausen, C. Dearlove, P. Jacquet, "The Optimized Link State Routing Protocol Version 2", IETF Internet Draft, 2008.
23. D. B. Johnson, D. A. Maltz, Y. Hu, "The Dynamic Source Routing Protocol for Mobile Ad Hoc Networks (DSR)", IETF Internet Draft, 2004.
24. C. Perkins, "Ad-hoc On-demand Distance Vector Routing", MILCOM '97 Panel on Ad Hoc Networks, 1997.
25. C. E. Perkins, E. M. Royer, S. R. Das, "Ad-hoc On-demand Distance Vector (AODV) Routing", IETF RFC 3561, 2003.
26. I. Chakeres, C. Perkins, "Dynamic MANET On-demand (DYMO) Routing", IETF Internet Draft, 2006.
27. K. Akkaya, M. Younis, "A Survey on Routing Protocols for Wireless Sensor Networks", Elsevier Ad Hoc Networks, vol. 3, no. 3, May 2005, pp. 325–349.
28. W. B. Heinzelman, A. P. Chandrakasan, H. Balakrishnan, "An Application-Specific Protocol Architecture for Wireless Microsensor Networks", IEEE Transactions on Wireless Communications, 1(4):660–670, October 2002.
29. S. Lindsey, C. S. Raghavendra, "PEGASIS: Power-Efficient Gathering in Sensor Information Systems", Proc. IEEE Aerospace Conference, Big Sky, Montana, March 2002.
30. A. Manjeshwar, D. P. Agrawal, "TEEN: A Protocol for Enhanced Efficiency in Wireless Sensor Networks", Proc. 1st International Workshop on Parallel and Distributed Computing Issues in Wireless Networks and Mobile Computing, San Francisco, CA, April 2001.
31. J. Kulik, W. Heinzelman, H. Balakrishnan, "Negotiation-based Protocols for Disseminating Information in Wireless Sensor Networks", ACM Wireless Networks, vol. 8, Mar–May 2002, pp. 169–185.
32. C. Intanagonwiwat, R. Govindan, D. Estrin, J. Heidemann, F. Silva, "Directed Diffusion for Wireless Sensor Networking", IEEE/ACM Transactions on Networking, 11, February 2003.
33. D. Braginsky, D. Estrin, "Rumor Routing Algorithm for Sensor Networks", Proc. First Workshop on Sensor Networks and Applications (WSNA), Atlanta, GA, October 2002.
34. C. Schurgers, M. B. Srivastava, "Energy Efficient Routing in Wireless Sensor Networks", MILCOM Proceedings on Communications for Network-Centric Operations: Creating the Information Force, 2001.
35. M. Chu, H. Haussecker, F. Zhao, "Scalable Information-Driven Sensor Querying and Routing for Ad Hoc Heterogeneous Sensor Networks", International Journal of High Performance Computing Applications, 16(3), August 2002.
36. Y. Yao, J. Gehrke, "The Cougar Approach to In-network Query Processing in Sensor Networks", SIGMOD Record, September 2002.
37. S. Madden, M. Franklin, J. Hellerstein, W. Hong, "TinyDB: An Acquisitional Query Processing System for Sensor Networks", ACM Transactions on Database Systems, vol. 30, no. 1, 2005, pp. 122–173.
38. N. Sadagopan, B. Krishnamachari, A. Helmy, "The ACQUIRE Mechanism for Efficient Querying in Sensor Networks", Proc. 1st IEEE International Workshop on Sensor Network Protocols and Applications (SNPA), Anchorage, AK, May 2003.
39. P. Bahl, V. N. Padmanabhan, "RADAR: An In-building RF-based User Location and Tracking System", Proc. IEEE INFOCOM 2000, Tel-Aviv, Israel, March 2000.
40. N. Bulusu, J. Heidemann, D. Estrin, "GPS-less Low Cost Outdoor Localization for Very Small Devices", IEEE Personal Communications Magazine, vol. 7, no. 5, pp. 28–34, October 2000.
41. S. Capkun, M. Hamdi, J. P. Hubaux, "GPS-free Positioning in Mobile Ad-Hoc Networks", Proc. HICSS 2001, Maui, Hawaii, January 2001.
42. Y. Yu, D. Estrin, R. Govindan, "Geographical and Energy-Aware Routing: A Recursive Data Dissemination Protocol for Wireless Sensor Networks", UCLA Computer Science Department Technical Report UCLA-CSD TR-01-0023, May 2001.
43. M. Zorzi, R. R. Rao, "Geographic Random Forwarding (GeRaF) for Ad Hoc and Sensor Networks: Multihop Performance", IEEE Transactions on Mobile Computing, vol. 2, no. 4, October–December 2003.
44. V. Rodoplu, T. H. Meng, "Minimum Energy Mobile Wireless Networks", IEEE Journal on Selected Areas in Communications, 17(8):1333–1344, August 1999.
45. L. Li, J. Y. Halpern, "Minimum Energy Mobile Wireless Networks Revisited", Proc. IEEE International Conference on Communications (ICC), June 2001.
46. Y. Xu, J. Heidemann, D. Estrin, "Geography-informed Energy Conservation for Ad Hoc Routing", Proc. ACM International Conference on Mobile Computing and Networking, pp. 70–84, Rome, Italy, July 2001.
47. K. Sohrabi, J. Gao, V. Ailawadhi, G. J. Pottie, "Protocols for Self-organization of a Wireless Sensor Network", IEEE Personal Communications, vol. 7, pp. 16–27, October 2000.
48. T. He, J. A. Stankovic, C. Lu, T. Abdelzaher, "SPEED: A Stateless Protocol for Real-time Communication in Sensor Networks", Proc. IEEE ICDCS '03, pp. 46–55, May 2003.
49. C. Pandana, K. J. R. Liu, "Maximum Connectivity and Maximum Lifetime Energy-aware Routing for Wireless Sensor Networks", IEEE Global Telecommunications Conference (GLOBECOM), 2005.
50. K. Kalpakis, K. Dasgupta, P. Namjoshi, "Maximum Lifetime Data Gathering and Aggregation in Wireless Sensor Networks", Proc. 2002 IEEE International Conference on Networking (ICN '02), Atlanta, Georgia, August 26–29, 2002, pp. 685–696.
51. F. Ye, A. Chen, S. Liu, L. Zhang, "A Scalable Solution to Minimum Cost Forwarding in Large Sensor Networks", Proc. Tenth International Conference on Computer Communications and Networks, pp. 304–309, 2001.
52. S. Mukhopadhyay, D. Panigrahi, S. Dey, "Data Aware, Low Cost Error Correction for Wireless Sensor Networks", Proc. IEEE Wireless Communications and Networking Conference, 2004, pp. 2492–2497.
53. B. Yu, K. Sycara, "Learning the Quality of Sensor Data in Distributed Decision Fusion", Proc. International Conference on Information Fusion, 2006, pp. 1–8.
54. Q. Han, I. Lazaridis, S. Mehrotra, N. Venkatasubramanian, "Sensor Data Collection with Expected Guarantees", Proc. IEEE International Conference on Pervasive Computing and Communications Workshops, 2005, pp. 374–378.
55. A. Barroso, U. Roedig, C. Sreenan, "Maintenance Efficient Routing in Wireless Sensor Networks", Proc. EmNetS-II, Sydney, Australia, May 2005.
56. I. F. Akyildiz, M. C. Vuran, O. B. Akan, "A Cross-Layer Protocol for Wireless Sensor Networks", Proc. Conference on Information Sciences and Systems (CISS '06), Princeton, NJ, March 2006.
57. L. van Hoesel, T. Nieberg, J. Wu, P. J. M. Havinga, "Prolonging the Lifetime of Wireless Sensor Networks by Cross-Layer Interaction", IEEE Wireless Communications, vol. 11, no. 6, pp. 78–86, December 2004.
58. H. Balakrishnan, V. N. Padmanabhan, S. Seshan, R. H. Katz, "A Comparison of Mechanisms for Improving TCP Performance over Wireless Links", IEEE/ACM Transactions on Networking, vol. 5, no. 6, December 1997, pp. 756–769.
59. Z. Shelby, P. Mähönen, J. Riihijärvi, O. Raivio, P. Huuskonen, "NanoIP: The Zen of Embedded Networking", Proc. IEEE International Conference on Communications, May 2003.
60. Y. Sankarasubramaniam, O. B. Akan, I. Akyildiz, "ESRT: Event-to-Sink Reliable Transport in Wireless Sensor Networks", Proc. ACM MobiHoc '03, June 2003.
61. IEEE Draft Std P802.11n/D2.00, February 2007.
62. IEEE 802.15 WPAN Task Group 1 (TG1), 802.15.1. [Online] http://www.ieee802.org/15/pub/TG1.html.
63. P. Muthukumaran, R. Spinar, K. Murray, D. Pesch, "Enabling Low Power Multi-hop Personal Area Sensor Networks", Proc. 10th International Symposium on Wireless Personal Multimedia Communications, Jaipur, India, December 2007.
64. Gartner Group, "Gartner Says Mobile Messages to Surpass 2 Trillion Messages in Major Markets in 2008", 17 December 2007. [Online] http://www.gartner.com/it/page.jsp?id=565124.
65. E. Dahlman et al., "3G Evolution: HSPA and LTE for Mobile Broadband", Elsevier, 2007. ISBN: 978-0123725332.
66. B. H. Walke, P. Seidenberger, M. P. Althoff, "UMTS: The Fundamentals", John Wiley & Sons, 2003. ISBN: 978-0470845578.
67. H. Holma, A. Toskala, "WCDMA for UMTS: Radio Access for Third Generation Mobile Communications", 3rd ed., John Wiley & Sons, 2004. ISBN: 978-0470870969.
68. H. Holma, A. Toskala, "HSDPA/HSUPA for UMTS", John Wiley & Sons, 2006. ISBN: 978-0-470-01884-2.
69. B. H. Walke, "Mobile Radio Networks: Networking, Protocols and Traffic Performance", 2nd ed., John Wiley & Sons, 2001. ISBN: 978-0-471-49902-2.
70. B. H. Walke, S. Mangold, L. Berlemann, "IEEE 802 Wireless Systems: Protocols, Multi-Hop Mesh/Relaying, Performance and Spectrum Coexistence", John Wiley & Sons, 2007. ISBN: 978-0470014394.
71. J. Schiller, "Mobile Communications", Addison-Wesley, 2003. ISBN: 978-0321123817.
72. Bluetooth SIG / IEEE 802.15.1, "Specification of the Bluetooth System V2.0 + EDR", 4 November 2004.
73. WiBree. [Online, accessed 25 July 2007] http://www.wibree.com/technology/Wibree_2Pager.pdf.
74. ZigBee Alliance, "ZigBee Specification", 2006. ZigBee Document 053474r13.
75. Texas Instruments, SimpliciTI. [Online] http://focus.ti.com/docs/toolsw/folders/print/simpliciti.html
76. ANT protocol. [Online] http://www.thisisant.com/
Part V
Systems Technologies: Context, Smart Behaviour and Interactivity

1.1 Summary
A priority at the systems level is a complete understanding of the required functionality of the system in question. For Ambient Intelligence, and in particular for the creation of smart (or cooperating) objects, this represents a significant challenge, since we must design, at least to a certain extent, for unpredictable future functionality. In (very) simplistic terms, this means harnessing in concert the flexibility of the hardware, network and software to facilitate seamless and unobtrusive performance that offers an acceptable level of proactivity. In other words, the system operates within the context of what a user needs now and in the near future, offering services that add to the current task (or experience) and avoiding distractions. Context awareness for the development of Pervasive Computing and Ambient Intelligence is a research topic in itself. In fact, as it attempts to frame the performance of future systems in a manner that fits what we would find intuitive as human beings, it is an important and growing area of study. This part focuses the systems discussion, at a technology level, upon the issue of context awareness. The first chapter discusses context and context-awareness in some detail, providing a perspective on its evolution within the application areas of pervasive systems and services. The second chapter investigates a particular approach to developing smart artifacts that can be controlled by users to cooperate in a manner that is context-aware. It investigates the conceptual abstractions and formal definitions used to model collections of artifacts and discusses engineering guidelines for building applications. The methodology is broadly analogous to the process of designing software systems using components.
1.2 Relevance to Microsystems
On one level, the issue of microsystems could be considered to be somewhat decoupled from this area of research into Ambient Intelligence, since the specification of context can be highly qualitative. This has, perhaps, more to do with the
fact that context awareness is in its early stages as an area of research than with any real discontinuity. There is a deep repository of information currently available that can be tapped to create the kinds of contextual states that will be needed for pervasive systems. This includes items like e-calendars, as well as user profiles and stated user preferences, all of which can be integrated into context statements without an explicit need for microsystems devices. However, microsystems are essential, if for no other reason than to validate qualitative data (e.g. a calendar meeting validated by location sensing, confirming that the user has arrived, can provide a trigger for a number of pre-defined system actions that would be expected by that user). In reality, there will be a series of trade-offs at the system and context level between the required infrastructure (i.e. the level of local computation, sensing and actuation) and the available context information (that is, the data from which a trustworthy context scenario can be inferred). This is, in a sense, an area where an aspect of the 'technology-push' of microsystems will see resistance in favour of maximizing the output of what is currently available. However, this should be seen as an effect that will channel innovation in microsystems into areas where real gaps exist.
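The calendar-validation example above can be sketched as a simple fusion rule: a qualitative source (a calendar entry) is trusted only once quantitative sensing (a location fix) confirms it. The data structures, coordinates and threshold below are hypothetical, chosen only for illustration.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class CalendarEvent:
    title: str
    room: str             # where the meeting is scheduled
    room_xy: tuple        # known coordinates of that room (metres)

def meeting_confirmed(event: CalendarEvent, user_xy: tuple,
                      radius_m: float = 5.0) -> bool:
    """Validate the qualitative calendar entry with a location fix:
    the meeting context is trusted only if the user is within
    `radius_m` of the scheduled room."""
    dx = user_xy[0] - event.room_xy[0]
    dy = user_xy[1] - event.room_xy[1]
    return hypot(dx, dy) <= radius_m

event = CalendarEvent("design review", "B2.14", (12.0, 4.0))
if meeting_confirmed(event, user_xy=(10.5, 5.0)):
    actions = ["mute phone", "route calls to voicemail"]  # pre-defined actions
else:
    actions = []
```

Until the location reading confirms arrival, the calendar entry alone triggers nothing; this is the validation role that microsystems sensing plays in the trade-off described above.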
1.3 Recommended References
Context awareness research is, in a sense, itself becoming pervasive as an approach that is being adopted, or at least acknowledged, by many different forms of systems-level research. In the area of pervasive computing, there is a more specific origin, namely the research activities of Xerox PARC and the development of the PARCTAB. An aspect of this research was focused upon location, and it would be true to state that the topic of context-awareness has been dominated by the collection of effective location information, though this is changing. The following references offer useful insights into the origins and progress of context-aware computing. For the interested reader, the publications of Dr. Roy Want and Professor Anind K. Dey are recommended.

1. G. D. Abowd, A. K. Dey, P. J. Brown, N. Davies, M. Smith, P. Steggles, "Towards a Better Understanding of Context and Context-Awareness", Proc. 1st International Symposium on Handheld and Ubiquitous Computing, Karlsruhe, Germany, 1999.
2. A. K. Dey, D. Salber, G. D. Abowd, "A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-Aware Applications", anchor article of a special issue on context-aware computing, Human-Computer Interaction (HCI) Journal, vol. 16 (2–4), 2001, pp. 97–166.
3. B. N. Schilit, N. I. Adams, R. Want, "Context-Aware Computing Applications", Proc. Workshop on Mobile Computing Systems and Applications, Santa Cruz, CA, December 1994, pp. 85–90, IEEE Computer Society.
4. A. Schmidt, M. Beigl, H. W. Gellersen, "There is More to Context than Location", Computers & Graphics, 23(6), December 1999, pp. 893–901.
5. J. Coutaz, J. Crowley, S. Dobson, D. Garlan, "Context is Key", Communications of the ACM, 48(3), March 2005.
Chapter 8
Context in Pervasive Environments

Donna Griffin, Dirk Pesch
Abstract We as humans are very successful at conveying ideas to each other and reacting to them. This is down to a number of factors, such as the wealth of our vocabulary, our common viewpoint on the world and our ability to implicitly understand or derive meaning from situational changes in our environment [1]. These factors combined are often referred to as context¹, and this context helps us as humans to gain a precise viewpoint on our world. Unfortunately, this ability to convey ideas and meaning about our world does not translate well to the world of computing environments. This is partly because of the preciseness of computational devices, which constrains them from looking at a situation in an abstract manner and deriving a proper contextual picture. If computing devices in pervasive environments were able to successfully obtain and reason about context, then this would increase the richness of communication by providing users, as well as computing and communication services, with relevant information. This would make it possible to create value-added services for smart environments and augmented artefacts, making the system effectively context-aware² [2]. This chapter will explore both the concepts of context and context-awareness in more detail, particularly as described in the application areas of pervasive systems and services.
Keywords Context, context-awareness, acquisition, context modeling and reasoning, middleware, sensors, programming methodologies.
Centre of Adaptive Wireless Systems, Cork Institute of Technology, Ireland
¹ Context is formally defined by Dey [2] as "any information that can be used to characterise the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and the applications themselves".
² According to Dey [2], a "system is context aware if it uses context to provide relevant information and/or services to the user, where relevancy depends on the user's task".
K. Delaney, Ambient Intelligence with Microsystems, © Springer 2008
1 Introduction
The first chapter of this book presented a number of open research challenges that must be addressed to realise the vision of pervasive systems proposed by Weiser [3, 4]. Amongst these open challenges, context awareness was highlighted as an intrinsic characteristic of intelligent environments and will be examined further in this chapter. Ubiquitous computing devices within Weiser's vision need to weave themselves into the fabric of our everyday existence so that they can be accessed by anyone, anywhere, anytime or anything [5]. This vision places a number of requirements on pervasive computing applications: to integrate themselves gracefully with human users and to operate in highly dynamic environments while placing minimal overhead on users. These requirements are important, as users are continually exposed to distractions in what is a highly mobile environment, where walking, driving, and other real-world activities preoccupy the user [6]. To minimize such negative effects, pervasive systems applications must be aware of, and adapt to, the situation in which they are running, including the state of the physical space, the users, and the available resources. Here, a context-aware system is defined as one that can respond intelligently, in either a reactive or a proactive fashion, to the combined information acquired by its applications, whether explicitly from the physical environment (i.e. through sensors) or implicitly (i.e. by reasoning about that environment). This context-awareness in turn enhances the services provided to users by taking into account their personal tastes; it makes the environment more knowledgeable, and it promotes autonomy and adaptability in the applications themselves, liberating users from being concerned with distractions. The smart home scenario provides a motivating example of how context-awareness can be used to enable the vision of pervasive computing.
In smart homes, the environment is augmented with sensors and positioning systems that can determine various context states of its inhabitants and of the environment itself, including, for example, lighting, location or sound level. In such a smart home, the environment uses this information to provide context-sensitive home automation services to the inhabitants. For example, if the environment senses that the user is in a bedroom, that the user is lying down on the bed and that the light level is low, then the home should infer that the person is sleeping and would prefer not to be disturbed. Consequently, the environment would put the user's phone on silent and redirect the home phone to a mailbox, without making any noise. Within the area of augmented materials and smart objects, several projects have incorporated context as a key ingredient, such as: MediaCup [7]; AwareOffice [8]; Technologies for Enabling Awareness (TEA) [9]; MemoClip [10]; Smart-Its applications [11, 12]; Aware Goods [13]; eSeal [14]; DigiClip [15]; ContextAsKey [16]; UbicompBrowser [17]; Point&Click [18]; Augment-able Reality [19]; IPerG [20]; Cooperative Artifacts [21]; and Daidalos [22]. One of the better known projects in the augmented objects domain is Smart-Its, and its spin-off project 'Friends', which were part of the EU-funded Disappearing Computer Initiative. The idea behind the
project was to develop smart, small-scale embedded devices that are able to sense, actuate, compute and communicate. These devices were introduced to help develop and study collective context awareness and to increase the widespread development of ubiquitous computing systems. While there have already been a large number of projects investigating context awareness in pervasive systems, a problem with such systems, according to Nelson [23], is that there exists very little separation of concerns; architectures and context-aware applications have been built in an ad hoc manner, heavily influenced by the underlying technologies used to obtain the context. This results in a lack of generality, requiring each new application to be built from the ground up. To enable designers to easily build context-aware applications, there needs to be architectural support for the general mechanisms that context requires. General context-aware systems and architectures require mechanisms to support context sensing and context reasoning. In context sensing, mechanisms are provided to deal with the acquisition of information from the physical environment, while context reasoning deals with the interpretation of the acquired information. Due to the lack of reusable context sensing and reasoning mechanisms, many existing context-aware systems are difficult and costly to build. In the following sections, the main approaches presented in the literature for creating context-based architectures, and the methods used for context acquisition and reasoning, are presented.
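The smart home example described earlier amounts to a simple rule over sensed context states. The sketch below is illustrative only; the state names and light threshold are hypothetical, not taken from any of the cited projects.

```python
def infer_activity(location: str, posture: str, light_lux: float) -> str:
    """Infer a coarse user activity from three sensed context states.
    The rule mirrors the bedroom example: in bed with low light -> sleeping."""
    if location == "bedroom" and posture == "lying" and light_lux < 10.0:
        return "sleeping"
    return "active"

def home_actions(activity: str) -> list:
    """Map an inferred activity to pre-defined home-automation actions."""
    if activity == "sleeping":
        return ["set phone to silent", "redirect home phone to mailbox"]
    return []

print(home_actions(infer_activity("bedroom", "lying", 3.0)))
```

Real systems replace such hand-written rules with the reusable sensing and reasoning mechanisms discussed above, precisely because rules like this one are tightly coupled to a particular deployment.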
2 Context Acquisition
In order to gather a system's context, certain aspects of the internal and external state of the system need to be discovered. The internal state typically refers to the internal operating variables of the system, which include parameters such as the availability of particular communication links in computer networks, or hardware conditions such as the temperature of the processor. If the 'system' includes a human being, then the internal state may refer to body temperature, blood pressure, heart rate, etc. Similarly, the external state of a system refers to the state of the environment in which the system operates; this may include factors such as the ambient temperature, humidity and light conditions. In order to provide this state information it must first be discovered. Software is often used as a tool in computer networks and systems to discover the conditions of the network or its processors, whereas sensors (or probes) are used to derive the environmental state. Sensor functionality can be simple - physical sensors for temperature, light or humidity - or more complex - meta-sensors that sense certain properties of the system, such as the received radio signal strength of a transmitted beacon, the round-trip node-to-node propagation delay, or location using a device such as a Global Positioning System (GPS) receiver. It is important to note, however, that deriving context from a single sensor tends to be insufficient, as a single node is just one piece of an overall information puzzle; thus a network of sensors is often required to obtain viable context states.
To provide a wide range of diverse services, these sensors also have to be dispersed geographically across the user's area of concern, providing different readings and state information about the user's environment. To minimize the overhead of collecting and computing context in such networks, it is often proposed that intelligent and energy-efficient in-network aggregation is needed to generate data and state information of high informational utility, whose timely delivery to the user is valued. Data aggregation and fusion techniques in Wireless Sensor Networks (WSNs) are mechanisms for in-network, or in-situ, processing of sensor data. They aim to reduce the number of redundant transmissions along the path from source to sink, and to infer a particular physical state from the data of a set of sensors where no individual sensor can determine that state alone. The actual benefit of such aggregation depends upon the location of the data sources relative to the data sink. Intuitively, when the data sources are spread out, as shown in Fig. 8.1, the paths to the sink do not intersect and there is little opportunity to aggregate data at intermediate nodes. If, on the other hand, the sources are all proximally clustered and located far away from the sink, then their paths to the sink merge early, as shown in Fig. 8.2, and the expected benefits of aggregation are significant. Data fusion, on the other hand, is employed to ensure that no vital information is lost during aggregation, while minimizing network load and data volume. For a more comprehensive analysis of these subjects, see [24, 25] for data aggregation and [26] for data fusion.
Fig. 8.1 Data Aggregation - non-intersecting data paths limit the scope for any aggregation
Fig. 8.2 Data Aggregation - data paths merging to support useful aggregation for sources located far from the sink
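The intuition behind the two figures can be made concrete with a toy model that counts packet transmissions for a convergecast with and without in-network aggregation on a hypothetical routing tree (node 0 is the sink). The topology and the one-packet-per-reading cost model are illustrative assumptions only.

```python
def count_transmissions(parent, sources, aggregate):
    """Count link transmissions needed to deliver one reading from each
    source to the sink (node 0) along the routing tree `parent`.
    Without aggregation every reading is forwarded hop by hop; with
    aggregation each tree link carries at most one (merged) packet."""
    links = set()       # tree links used at least once (aggregated case)
    hops = 0            # per-reading hop count (unaggregated case)
    for src in sources:
        node = src
        while node != 0:
            links.add((node, parent[node]))
            hops += 1
            node = parent[node]
    return len(links) if aggregate else hops

# Clustered sources far from the sink: 0 <- 1 <- 2 <- {3, 4, 5}
parent = {1: 0, 2: 1, 3: 2, 4: 2, 5: 2}
sources = [3, 4, 5]
print(count_transmissions(parent, sources, aggregate=False))  # 9 transmissions
print(count_transmissions(parent, sources, aggregate=True))   # 5 transmissions
```

Because the three clustered sources share most of their path to the sink, aggregation merges their readings at node 2 and nearly halves the traffic; for sources on disjoint paths (Fig. 8.1) the two counts coincide and aggregation gains nothing.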
There are a number of approaches employed to acquire context-based information from sensors, including direct sensor access and middleware approaches [26]. The following sections provide an overview of these methods.
2.1 Direct Sensor Access

In direct sensor access methods of context acquisition, the client software gathers the desired information directly from the sensors; there is no additional layer for refining and processing the sensor data obtained. For example, the Forget-me-not devices [27] directly access Active Badge sensors [28] to determine the location of their users and of the people those users have encountered. Using Active Badges in the call forwarding system described by Want et al. [28], and in the teleporting system described by Bennett et al. [29], context-aware agents also directly access these sensors to acquire up-to-date location information about the users. Similarly, Langheinrich et al. [30] describe an RFID chef application that recommends cooking recipes for the food available on the kitchen table; the agents directly access the RFID sensors to identify the type of food that is available. In such systems the raw data is provided by the low-level sensors and the individual agents apply their own interpretation of context.
192
D. Griffin, D. Pesch
However, such direct sensor access presents a number of problems with regard to extensibility and reusability. Because sensor data acquisition is tightly coupled with the application, it is difficult to extend the application that gathers the data, and the reusability of the code is limited.
2.2 Middleware Approaches
The term middleware can sometimes be used as a buzzword, taking on different meanings depending on the specific field of computing and the perspective of the person using it. In distributed systems, Bernstein [31] defined a middleware service “as a general purpose service that sits between platforms and applications. By platform, we mean a set of low level services and processing elements defined by the processor architecture and Operating Systems (OSs) API”. Generally, middleware is expected to hide the internal workings and the heterogeneity of the system, providing standard interfaces, abstractions and a set of services that depends largely on the application. Another definition, provided in [32], characterizes middleware as supporting the development, maintenance, deployment and execution of sensor network applications. For context acquisition, middleware provides a generic interface between the physical and virtual (computing) worlds, converting physical measurements into quantities that can then be harnessed by a wide spectrum of computing applications. The Context Toolkit [33] and Context Fabric [34] are examples of such middleware approaches. The Context Toolkit shields the context sensing details from agents and applications; its design builds on the widget concept, where context widgets are responsible for collecting information from the environment through software- and hardware-based sensors [33]. They are named after Graphical User Interface (GUI) widgets and share many similarities with them. Just as GUI widgets mediate between the application and the user, context widgets mediate between the application and its operating environment. Both types of widget provide an abstraction that allows events to be placed in, and taken out of, the event queue without applications and widgets needing to know each other’s details.
Where GUI widgets are owned by the application that creates them, context widgets are not owned by any single application and are shared among all executing applications. The benefits of context widgets are that they provide:
● Separation of concerns, by hiding the complexity of the actual sensors used from the application
● Easy access to context data through querying and notification mechanisms
● Reusable and customizable building blocks of context sensing, where a widget that tracks the location of a user can be used by a variety of applications, and can be tailored and combined in ways similar to GUI widgets
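The query and notification mechanisms listed above can be sketched in a few lines. This is a hedged illustration of the widget idea only: the class and method names are invented, not the Context Toolkit's actual API.

```python
# Sketch of a context widget: the sensor is hidden behind a polling
# interface (query) and a callback interface (subscribe/notify), so
# applications never touch the sensor directly. Names are illustrative.

class PresenceWidget:
    """Hides a presence sensor behind query and notification interfaces."""
    def __init__(self):
        self._present = set()
        self._subscribers = []

    def subscribe(self, callback):
        # Applications register for arrival/departure notifications.
        self._subscribers.append(callback)

    def query(self):
        # Polling interface: who is currently present?
        return set(self._present)

    def _sensor_event(self, person, arrived):
        # In a real widget this would be driven by the sensor hardware;
        # here it is called directly to simulate sensor readings.
        if arrived:
            self._present.add(person)
        else:
            self._present.discard(person)
        for callback in self._subscribers:
            callback(person, arrived)

log = []
widget = PresenceWidget()
widget.subscribe(lambda person, arrived: log.append((person, arrived)))
widget._sensor_event("alice", True)
widget._sensor_event("alice", False)
print(log)             # [('alice', True), ('alice', False)]
print(widget.query())  # set()
```

Several applications can share one such widget, which is precisely the reuse argument made above.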
The Context Toolkit defines a specific collection of widgets, called context widgets, to deal with low-level sensing details. For example, the IdentityPresence widget provides callbacks for applications to be notified about the arrival and departure of people. A number of context-aware applications have been developed using the widgets provided by the Context Toolkit; for example, the In/Out Board, which tracks the presence of people in an office, and the DUMMBO Meeting Board [33], which is an instrumented digitized whiteboard that supports the capture of, and access to, meeting minutes. Similarly, Arizona State University [35] presented the Reconfigurable Context-Sensitive Middleware (RCSM), which makes use of the contextual data of a device and its surrounding environment to initiate and manage ad hoc communication with other devices. RCSM provides core middleware services by using dedicated reconfigurable Field Programmable Gate Arrays (FPGAs), a context-based reflection and adaptation triggering mechanism, and a context-sensitive object request broker that invokes remote objects based on contextual and environmental factors, thereby facilitating autonomous exchange of information. Within middleware there are numerous methodologies for classifying the various approaches. Hadim et al [36] classified middleware into four types: classic, data-centric, virtual machine, and adaptive; Molla et al [37] classified middleware according to the relationships among its different forms. Regardless of the classification methodology, all middleware systems aim to hide low-level sensing details, which, compared to direct sensor access, eases extensibility and simplifies the reuse of hardware-dependent sensing code. For a more complete survey of middleware see Molla et al [37].
3 Context Modeling and Reasoning
The previous section provided an overview of how context is acquired by a system. This section examines techniques for context modeling and reasoning, which relate to the task of using context data in an intelligent way and are perhaps amongst the most challenging of contemporary research tasks in creating context awareness. According to Nurmi et al [38], a more precise definition of context reasoning is: deducing new information relevant to the use of application(s) and user(s) from the various sources of context data. Service behavior adaptation is an output of context reasoning, where services adapt their functionality and behavior to a context by applying either rule- or learning-based logic [39]. With regard to rule-based logic, reasoning in artificial intelligence has traditionally been viewed as a process of drawing conclusions by the sequential application of formal rules from the commencement of problem solving. There are several approaches within this category, including First Order Logic (FOL), fuzzy logic, description logic and temporal logic. In these approaches, actions are triggered by a set of rules whenever the current context changes. However, these approaches do not allow actions to be performed when the user’s context has not been envisioned
beforehand. This represents the inherent limitation of the approach; while it may be possible to characterize and describe certain user situations for a particular service behavior, it is arduous and exhaustively time-consuming to define all user contexts in which the user expects a service behavior. This in turn has resulted in a focus on learning-based reasoning approaches [39, 40]. Within learning-based reasoning, several approaches have been proposed: case-based reasoning, neural networks and Bayesian approaches. In case-based reasoning, problem solving is based on remembering specific experiences that might be useful for the problem being solved. The origins of case-based reasoning lie in both cognitive science and artificial intelligence; it models human problem solving from the perspective that people often solve problems by remembering the way in which similar problems were dealt with in the past. Neural networks, traditionally viewed as simplified models of neural processing in the brain, have also been used in context reasoning as a method of learning and adapting to context. Bayesian networks provide a probability-based reasoning method that is very useful for modeling the uncertain nature of context and for reasoning about its probabilistic occurrence. These learning methods essentially provide a more sophisticated approach than rule-based approaches, which lack robustness and flexibility, are confined to narrow problem domains, and are difficult to maintain and update throughout their life-cycle. Learning approaches make use of specific knowledge in both problem solving and learning; the methodology aims at enhancing knowledge acquisition, knowledge maintenance, the efficiency of problem solving, the quality of solutions and user acceptance [39, 40]. Context modeling is a topic very closely related to context reasoning; a well-designed system model is key to accessing context in context-aware systems.
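The rule-based style described earlier, and its blind spot for unenvisioned contexts, can be made concrete with a minimal sketch; the rules and context attributes here are invented for illustration.

```python
# Minimal rule-based context reasoning: actions fire when the current
# context matches a rule's condition. The final fallback illustrates
# the limitation discussed above: a context the designer never
# envisioned matches no rule, so no action can be taken.

rules = [
    (lambda ctx: ctx.get("activity") == "meeting", "silence phone"),
    (lambda ctx: ctx.get("location") == "car",     "read messages aloud"),
]

def decide(ctx):
    """Return the action of the first matching rule, or None."""
    for condition, action in rules:
        if condition(ctx):
            return action
    return None  # unenvisioned context: no rule fires

print(decide({"activity": "meeting"}))               # silence phone
print(decide({"location": "beach", "noise": "hi"}))  # None
```

A learning-based reasoner would instead infer such context-to-action mappings from observed user behavior, which is why it can cover situations no hand-written rule anticipates.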
Such modeling is important for enabling context-aware adaptation and reasoning: context information must be gathered and eventually presented to the application performing the adaptation. As there will be a wide range of heterogeneous context information in pervasive systems, modeling of contextual information is highly necessary [41]. The following scenario is presented to illustrate the importance of context modeling in pervasive systems [42]: Bob has finished reviewing a paper for Alice, and wishes to share his comments with her. He instructs his communication agent to initiate a discussion with Alice. Alice is in a meeting with a student, so her agent determines on her behalf that she should not be interrupted. The agent recommends that Bob contact Alice via email or meet her directly in half an hour. Bob’s agent consults his schedule and, realizing that he is not available at the suggested time, composes an email on the workstation he is currently using and dispatches it according to the instructions of Alice’s agent. A few minutes later, Alice’s supervisor, Charles, wants to know if the report that he requested is ready. Alice’s agent decides that the query needs to be answered immediately and suggests that Charles telephone her on her office number. Charles’s agent establishes the call using the mobile phone that he is carrying with him.
In order for the agents in the above scenario to reason about context, they rely on information about the participants and their communication devices and channels. The agents must be able to understand the relationships between people, such as
who is a supervisor and who is a peer. This type of information can be collected by sensors, as discussed; other pieces of information can be collected explicitly through user input, and yet others can be gathered from calendars or (partly) derived from related context. All of this information needs to be appropriately modeled for the agents to understand the context of the information held in the data. However, according to [43], ubiquitous computing systems make high demands on context modeling approaches for the following reasons:
1. Distributed composition – pervasive systems are essentially distributed systems that lack a central instance responsible for the creation, deployment and maintenance of data and services, and in particular of context descriptions. As a result, the composition and administration of a context model and its data vary with time, network topology and source.
2. Richness and quality of information – the quality of information delivered by sensors varies over time, and the richness of the information provided by different kinds of sensors (characterizing an entity in a ubiquitous computing environment) may differ.
3. Incompleteness and ambiguity – both sensed and interpreted context is often ambiguous. A challenge facing the development of realistic and deployable context-aware services, therefore, is the ability to handle ambiguous context. Henricksen et al [44] determined that context information is characteristically ‘imperfect’: “Information may be incorrect if it fails to reflect the true state of the world it models, inconsistent if it contains contradictory information, or incomplete if some aspects of the context are not known”.
4. Formality and interrelated context – it is always a challenge to describe contextual facts and interrelationships in a precise and traceable manner. For instance, to perform the task “print a document on the printer near to me”, a precise definition of the terms used in the task is required.
Similarly, in the scenario described above, a person’s current activity may be partially derived from other context information, such as the person’s location and the history of past activities. Strang et al [43] refer to this type of relationship – where the characteristics of the derived information are intimately linked to the properties of the information from which it is derived – as a dependency.
5. Context has many alternatives [44] – in pervasive systems there can be a gap between the gathered sensor output and the level of information useful to applications. For example, a location sensor may supply raw coordinates, whereas the application might be interested in the identity of the building or room that the user is in.
Based upon these features of context information in pervasive environments, it is clear that there is a need to represent and manage this information through appropriate modeling techniques. Strang et al [43] presented a survey of six context modeling approaches: (1) key-value modeling, (2) markup scheme modeling, (3) graphical modeling (UML, ER, etc.), (4) object-oriented modeling, (5) logic-based modeling and (6) ontology-based modeling. The key findings of this survey are outlined below in more detail:
● Key-value modeling techniques [43] – these use key-value pairs to model context by providing the value of context information to an application as an environmental variable. Key-value modeling techniques are easy to manage but lack the capability for sophisticated structuring to enable efficient context retrieval algorithms. In particular, they do not resolve the problems associated with distributed composition; partial validation is very difficult and there is little support for quality meta-information. Linear search methods are normally used with this approach, providing a very simplistic form of context reasoning.
● Markup scheme modeling techniques [43] – common to all markup scheme modeling approaches is a hierarchical data structure consisting of markup tags with attributes and content. Within this classification a number of approaches have been proposed. Composite Capability/Preference Profiles (CC/PP) [45] is a W3C proposed standard which, according to Held et al [46], is able to fulfill most of the requirements for profile presentation, but its vocabulary is not rich enough and needs to be extended to represent complex relationships and constraints. Held et al [46] therefore proposed the Comprehensive Structured Context Profile (CSCP), which overcomes the shortcomings of CC/PP regarding structuring and user preferences but still does not address the representation of complex relationships and constraints. A key strength of this approach, however, is its applicability to existing markup-centric infrastructures in pervasive computing environments using Web services.
● Graphical models – the Unified Modeling Language (UML), according to Bauer [47], is appropriate for modeling context. Consequently, Henricksen et al [42] provided an extension to Object-Role Modeling (ORM) to include context. While the work presented in [42] provides a comprehensive model, which includes quality and dependency relations, it fails to represent the dependency relations accurately. The main point to note regarding graphical models is that their level of computer formality is relatively low; they are typically used for human structuring purposes.
● Object-oriented modeling – this allows context to be modeled using object-oriented techniques, offering the full power of object orientation, including encapsulation, reusability and inheritance. The approach was used in numerous projects, such as TEA [48] and GUIDE [49]. These projects showed that applicability to existing object-oriented approaches is possible, but such methods usually make strong additional demands on the resources of the computing devices – demands that cannot always be fulfilled in resource-constrained pervasive computing systems.
● Logic-based models – these models have a high degree of formality, where (typically) facts, expressions and rules are used to define the context model. Several works fall within this area, including McCarthy’s [50] work on formalizing context. According to [43], logic-based context models may be composed in a distributed manner, but partial validation is difficult to maintain; incompleteness and ambiguity are further problems. Applicability to existing ubiquitous computing environments also appears to be a major issue, as full logic reasoners are usually not available on pervasive computing devices.
● Ontology-based models – ontologies describe concepts and their relationships and provide a very promising instrument for modeling contextual information, due to their high and formal expressiveness and the possibility of applying ontology reasoning techniques. According to Strang [43], ontologies are the most expressive models and fulfill most requirements for ubiquitous computing environments, as they are simple, flexible, extensible, generic and expressive.
Based on the requirements set out above and the analysis of modeling techniques, it is evident that the most promising context modeling approaches for pervasive computing systems to date are ontologies. However, this analysis also indicates that alternative context modeling approaches should not be discarded; rather it highlights the advantages and disadvantages of each approach.
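The contrast between the two ends of this spectrum can be illustrated briefly. A flat key-value model is easy to manage but structureless, while even a minimal triple store (the core idea underlying ontology-based models, radically simplified here) can represent relationships that can then be queried. The vocabulary below is invented for the example.

```python
# Illustrative contrast between two modeling styles surveyed above.

# Key-value: one flat environment of attributes.
kv_context = {"user": "alice", "location": "room-204", "activity": "meeting"}

# Triples: (subject, predicate, object) facts, including relationships
# between entities - something a flat key-value model cannot express.
triples = {
    ("alice",    "locatedIn",    "room-204"),
    ("room-204", "partOf",       "building-A"),
    ("charles",  "supervisorOf", "alice"),
}

def query(facts, s=None, p=None, o=None):
    """Return all facts matching a pattern (None acts as a wildcard)."""
    return {t for t in facts
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

# The relational question "who supervises alice?" has no natural
# key-value form, but is a one-line pattern match over triples:
print(query(triples, p="supervisorOf", o="alice"))
# {('charles', 'supervisorOf', 'alice')}
```

Real ontology languages add typing, class hierarchies and reasoning on top of this triple structure, which is the expressiveness the survey findings point to.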
4 Programming Context
Over the past few years a proliferation of context-aware applications and systems has been developed. Each of these systems uses diverse sensing capabilities to achieve its higher-level functionality. These include:
● Cyberguide [51] and GUIDE [49], two tour guide applications that use events generated by location sensors to update the user’s screen according to physical location.
● The Stick-e Document [52] framework, which allows users to create notes that are triggered when a user encounters the associated context.
● Active Badge [28] and PARCTab [53], which both provide a framework on top of which developers can build context-aware applications; small devices moving about a research complex provide location events to the context-aware infrastructure via infrared communication links.
● The FieldNotes [54] application, which incorporates context information, such as time, weather and user information, by allowing researchers in the field to attach varied contextual information to their notes.
It is clear from the above examples that a large number of diverse context information frameworks have been developed. However, the proliferation of context-aware applications is inhibited by the absence of a generic approach to such framework development and by a lack of programming support to develop them rapidly. Currently, to build a context-aware application, developers need either to design and implement their own applications from scratch, requiring them to write code that directly interacts with devices, or to use a toolkit that hides many of the device details from them. The problem with this approach lies in the fact that program behaviors in pervasive systems are difficult to predict and understand. This is due to the large numbers of different devices involved, the complexity and spontaneity of interactions, the fluidity of contexts and the overhead of configuring systems. To reduce this complexity and to enable programmers to easily develop
and deploy context-aware applications, a programming framework needs to be used. The issue of programming for pervasive systems is not well understood. Existing approaches, such as PIMA [55] and Aura [56], build on the idea that the application model for pervasive computing must decouple the application logic from details specific to the run-time environment; the Aura project created a programming model for task-based pervasive computing. In addition, programming languages such as Telescript, Obliq and Java have been investigated to address the practical concerns of building distributed pervasive systems, but none of these approaches has specifically considered context as a central modeling concept. As a result, a number of programming methodologies have been devised, including: Object-Oriented (OOP), Aspect-Oriented (AOP), Feature-Oriented (FOP) and Context-Oriented Programming (COP). Using these programming methodologies and frameworks, programmers can focus on modeling and using context information and functionality specific to their application, while relying on a basic infrastructure to handle the actual management and distribution of this information. With regard to object-oriented (OO) programming [57], the approach uses hardwired sensor drivers and spreads if-statements across the entire application to achieve context-dependent behavior. This approach was deemed impracticable [58], as context acquisition and adaptation concerns are very often distributed, leading to a complex distributed OO design that limits the reusability and maintainability of the code. To overcome these concerns, a new paradigm motivated by the OO programming methodology, called Context-Oriented Programming (COP), has been proposed. According to Rakotonirainy [59], this approach addresses the inherent constraints of pervasive environments by easing the task of programming through the definition of a special construct, called open terms, to express pervasive applications.
The use of open terms has also been proposed by Cardelli et al [60]. To illustrate its use, consider the following example: a programmer writes a program P that “outputs” a message v. Each pervasive device has a different function type and name that outputs messages. The programmer could not possibly know the type and name of the output function at every device a priori, but using COP he or she can use an open-term expression such as output(v) to state in the abstract that a value v must be output. COP facilitates the execution of the program P on different devices, where the targeted device dynamically substitutes the open term output(v) with the closest matching function that will output v.
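The open-term idea can be sketched as late, per-device binding; this is a hedged illustration only, with an invented registry and invented device functions, not the actual semantics of any COP language.

```python
# Sketch of the open-term idea in Context-Oriented Programming: the
# program states abstractly that a value must be output, and each host
# binds output(v) to its closest matching local function at run time.
# All names are invented for illustration.

class Device:
    def __init__(self, bindings):
        self._bindings = bindings          # abstract term -> local function

    def resolve(self, open_term):
        # Late, per-device binding of the abstract term.
        return self._bindings[open_term]

def program(device, v):
    # The programmer writes against the open term "output" without
    # knowing any device's concrete output function a priori.
    output = device.resolve("output")
    return output(v)

phone = Device({"output": lambda v: f"SMS: {v}"})
kiosk = Device({"output": lambda v: f"SCREEN[{v}]"})
print(program(phone, "hello"))   # SMS: hello
print(program(kiosk, "hello"))   # SCREEN[hello]
```

The same program text thus completes differently on each host, which is the behavior the open-term construct is meant to capture.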
By allowing application programmers to make use of special constructs such as open-term expressions, the context of the executing host provides the information needed to intelligently complete the partially described behavior by refining the open terms at runtime. Implementations of COP include ContextL [61], which is an extension of the Common Lisp Object System, ContextS [62], ContextR, ContextPy, ContextJ [63] and Java Revolution (also known as Javolution) [64], which uses COP to program real-time embedded systems. According to Kiczales [65], AOP “aims to separate concerns that crosscut a particular decomposition strategy, such as object orientation, and traditionally uses
weaving techniques to recombine them”. An interesting contribution of aspect-oriented programming is its ability to decrease code scattering, a highlighted limitation of OO programming approaches [57]. AOP [57] has been proposed as a means to modularize several parts of a context-aware application, such as context acquisition, location/proximity context and context-dependent behavior. This approach creates aspects capable of performing context fusion/fission and of adapting modeled context into non-modeled contextual information, which may be required by the application to perform context-dependent behavior adaptation. To support the composition of concerns in distributed environments, many distributed AOP strategies have emerged, such as EJB, JBoss AOP and Spring AOP. Another programming technique often discussed in the implementation of context-aware systems is Feature-Oriented Programming (FOP). Introduced by Batory et al [66], FOP is a design methodology for software product lines. The variation points in FOP are units of functionality (i.e. features) that distinguish product variations within a family of related programs; the variation points of context-aware systems, by contrast, are places where program behavior can be adapted according to runtime context information. However, according to Desmet et al [67], FOP is not very suitable for context-enabled frameworks, since context-aware systems require a more sophisticated set of relationships than FOP can offer. In addition, feature compositions in FOP cannot change at runtime, which makes FOP unsuited to the dynamic nature of context-aware systems, which must adapt continuously to changes in their environment.
5 Summary and Conclusions
“Context is what surrounds”, and in pervasive systems the term is used to refer to the physical and virtual world observations that surround us in our everyday lives. Observations relating to our surroundings are currently being leveraged by various research domains, including augmented materials, wearable computing and smart spaces, to name but a few. In these domains context serves as an abstraction that makes the environment accessible and enables context-aware applications that can adapt to our surroundings and personal preferences. The utility of context-aware applications has been demonstrated in a diverse range of application areas, such as intelligent calendars, chef/recipe applications and tourism. The heterogeneity of the underlying technological approaches used to facilitate context awareness has brought with it a number of challenges; specifically, there is no common or de facto method of acquiring, modelling and reasoning about context. This has inhibited the leveraging of output from various projects, because of a deficit in reusable approaches and methodologies to support context and context awareness. The purpose of this chapter was to discuss the architectural methods currently proposed in the literature to facilitate context, highlighting what is distinctive about each approach.
Following the introduction and definition of scope in the first sections, section 2 dealt with context acquisition. Context can either be acquired directly from sensors or via a middleware layer. Direct sensor access is not advocated, as it presents problems of extensibility and reusability due to the vast number of heterogeneous sources of context. Middleware presents a more scalable approach, providing a generic interface between the physical and virtual worlds through which physical quantities and data acquired by devices/sources can be harnessed in a generic manner across a much wider range of applications. Section 3 dealt with context reasoning and modelling. These two related subjects are important because system and service behaviour adaptation is an output of context reasoning, and modelling is necessary to enable computer systems to properly understand the terms used within the domain of discourse. Learning-based approaches to context reasoning have been advocated, as they give the context framework more flexibility and robustness and spare system developers arduous sequences of input in maintaining the system. The section presented the various context modelling approaches available, including a summary of key-value, markup scheme, graphical, object-oriented, logic-based and ontology-based approaches. Ontologies have the greatest potential for context-aware systems, as they are simple, extensible, generic and expressive. These characteristics support their use on pervasive computing devices, where constraints in power and computation are primary concerns. Section 4 discussed the complexity that application developers face when actually programming context-aware systems – a result of the complexity of pervasive computing devices, the fluidity of context, the range of interactions, and the overhead in transactions.
To help resolve these complexities, a number of programming methodologies have been proposed, including Object-, Context-, Aspect- and Feature-Oriented Programming. Context-Oriented Programming in particular is a very interesting methodology, as it makes use of special constructs to help express the nature of pervasive systems. While there are a number of challenges in developing context-aware frameworks for pervasive environments, steps can be taken to reduce the complexity and increase the reusability of the system architecture so as to fully realise Weiser’s vision of disappearing computers.
References
1. G. A. Abowd, A. K. Dey, P. J. Brown, N. Davies, M. Smith, P. Steggles, Towards a Better Understanding of Context and Context-Awareness, Proceedings of the 1st International Symposium on Handheld and Ubiquitous Computing, Karlsruhe, Germany, 1999
2. A. K. Dey, Understanding and Using Context, Personal and Ubiquitous Computing, Vol. 5, No. 1, February 2001, pp. 4–7
3. M. Weiser, “The Computer for the 21st Century,” Scientific American, vol. 265, no. 3, Sept. 1991, pp. 66–75
4. M. Weiser, “The World Is Not a Desktop,” ACM Interactions, vol. 1, no. 1, Jan. 1994, pp. 7–8
5. ITU Internet Report - The Internet of Things, Executive Summary,
6. J. Anhalt, A. Smailagic, D. P. Siewiorek, F. Gemperle, D. Salber, S. Weber, J. Beck, J. Jennings, Toward Context-Aware Computing: Experiences and Lessons, IEEE Intelligent Systems, Vol. 16, No. 3, May–Jun 2001
7. M. Beigl, H.-W. Gellersen, A. Schmidt, Mediacups: Experience with Design and Use of Computer-Augmented Everyday Artifacts, Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 35, No. 4, March 2001
8. Aware Office Initiative, http://www.teco.edu/awareoffice/
9. A. Schmidt, K. A. Adoo, A. Takaluoma, U. Tuomela, K. Van Laerhoven, W. Van de Velde, Advanced Interaction in Context, in H. Gellersen (ed.), Proc. of Intl. Workshop on Handheld and Ubiquitous Computing (HUC99), number 1707 in LNCS, Heidelberg, Germany
10. M. Beigl, MemoClip: A Location-Based Remembrance Appliance, Personal and Ubiquitous Computing, Vol. 4, No. 4, pp. 230–233
11. M. Beigl, A. Krohn, C. Decker, P. Robinson, T. Zimmer, H. Gellersen, A. Schmidt, Context Nuggets: A Smart-Its Game, Ubicomp Conference, 2003
12. M. Beigl, H. Gellersen, Smart-Its: An Embedded Platform for Smart Objects, in Proc. Smart Objects Conference (SOC 2003), Grenoble, France, May 2003
13. A. Thede, A. Schmidt, C. Merz, Integration of Goods Delivery Supervision into E-commerce Supply Chain, Proceedings of the Second International Workshop on Electronic Commerce, 2001
14. C. Decker, M. Beigl, A. Krohn, P. Robinson, U. Kubach, eSeal - A System for Enhanced Electronic Assertion of Authenticity and Integrity, Pervasive 2004, Wien
15. C. Decker, M. Beigl, A. Eames, U. Kubach, DigiClip: Activating Physical Documents, Proceedings, 24th International Conference on Distributed Computing Systems Workshops, March 2004
16. P. Robinson, M. Beigl, Trust Context Spaces: An Infrastructure for Pervasive Security in Context-Aware Environments, in First International Conference on Security in Pervasive Computing, 2003
17. M. Beigl, A. Schmidt, M. Lauff, H.-W. Gellersen, The UbicompBrowser, ERCIM Workshop on User Interfaces for All, 1998
18. M. Beigl, Point & Click - Interaction in Smart Environments, Proceedings of the 1st International Symposium on Handheld and Ubiquitous Computing, Germany, 1999
19. J. Rekimoto, Y. Ayatsuka, K. Hayashi, Augment-able Reality: Situated Communication through Physical and Digital Spaces, Second International Symposium on Wearable Computers, October 1998
20. Integrated Project on Pervasive Gaming, http://iperg.sics.se/
21. M. Strohbach, H.-W. Gellersen, G. Kortuem, C. Kray, Cooperative Artefacts: Assessing Real World Situations with Embedded Technology, in Proceedings of the Sixth International Conference on Ubiquitous Computing (UbiComp), 2004
22. Designing Advanced network Interfaces for the Delivery and Administration of Location independent, Optimised personal Services (DAIDALOS), http://www.ist-daidalos.org/default.htm
23. G. J. Nelson, Context-Aware and Location Systems, Ph.D. Dissertation, University of Cambridge, Jan. 1998
24. R. Rajagopalan, P. K. Varshney, Data Aggregation Techniques in Sensor Networks: A Survey, IEEE Communications Surveys and Tutorials, Vol. 8, No. 4, 2006
25. H. Karl, A. Willig, Protocols and Architectures for Wireless Sensor Networks, Wiley, London, 2006
26. M. Baldauf, S. Dustdar, A Survey on Context-Aware Systems, International Journal of Ad Hoc and Ubiquitous Computing, Vol. 2, No. 4, 2007
27. M. Lamming, M. Flynn, “Forget-me-not”: Intimate Computing in Support of Human Memory, Proceedings of the FRIEND21 Symposium on Next Generation Human Interfaces, Tokyo, Japan, 1994
28. R. Want, A. Hopper, V. Falcao, J. Gibbons, “The Active Badge Location System,” ACM Transactions on Information Systems, vol. 10, pp. 91–102, Jan. 1992
29. F. Bennett, T. Richardson, A. Harter, “Teleporting - Making Applications Mobile”, 1994 Workshop on Mobile Computing Systems and Applications, December 1994
30. M. Langheinrich, F. Mattern, K. Römer, H. Vogt, First Steps Towards an Event-Based Infrastructure for Smart Things, Wireless Networks, Vol. 10, No. 6, Special Issue: Pervasive Computing and Communications, 2004
31. P. A. Bernstein, Middleware: A Model for Distributed System Services, Communications of the ACM, Vol. 39, No. 2, Feb 1996
32. I. Chatzigiannakis, G. Mylonas, S. Nikoletseas, 50 Ways to Build Your Application: A Survey of Middleware and Systems for Wireless Sensor Networks
33. D. Salber, A. K. Dey, G. D. Abowd, The Context Toolkit: Aiding the Development of Context-Aware Applications, in Proceedings of Human Factors in Computing Systems, Pittsburgh, 1999
34. J. I. Hong, Context Fabric: Infrastructure Support for Context-Aware Systems, Qualifying Exam Proposal, 2001
35. S. S. Yau, F. Karim, Y. Wang, B. Wang, S. Gupta, Reconfigurable Context-Sensitive Middleware for Pervasive Computing, IEEE Pervasive Computing, Vol. 1, No. 3, 2002
36. S. Hadim, N. Mohamed, Middleware: Middleware Challenges and Approaches for Wireless Sensor Networks, IEEE Computer Society, Vol. 7, No. 3, pp. 1–19, 2006
37. M. M. Molla, S. I. Ahamed, A Survey of Middleware for Sensor Network and Challenges, 2006 International Conference on Parallel Processing Workshops (ICPPW’06), Columbus, Ohio, USA, pp. 223–228, 2006
38. P. Nurmi, P. Floréen, Reasoning in Context-Aware Systems, Position Paper, Helsinki Institute for Information Technology
39. S. Jie, W. ZhaoHui, Context Reasoning Technologies in Ubiquitous Computing Environment, Lecture Notes in Computer Science, Embedded and Ubiquitous Computing, 2006
40. A. T. Binh, Y.-K. Lee, S.-Y. Lee, Modeling and Reasoning about Uncertainty in Context-Aware Systems, IEEE International Conference on e-Business Engineering, 2005
41. M. A. Razzaque, S. Dobson, P. Nixon, “Categorization and Modelling of Quality in Context Information”, 2005, http://csiweb.ucd.ie/UserFiles/publications/1124274826156.pdf
42. K. Henricksen, J. Indulska, A. Rakotonirainy, Modeling Context Information in Pervasive Computing Systems, in Proc. 1st International Conference on Pervasive Computing, Zurich, Switzerland, Springer, 2002
43. T. Strang, C. Linnhoff-Popien, A Context Modeling Survey, in UbiComp 1st International Workshop on Advanced Context Modelling, Reasoning and Management, Nottingham, 2004
44. K. Henricksen, J. Indulska, A. Rakotonirainy, “Modeling Context Information in Pervasive Computing Systems,” Proceedings Pervasive 2002, 2002
45. G. Klyne, F. Reynolds, C. Woodrow, H. Ohto, “Composite Capability/Preference Profiles (CC/PP): Structure and Vocabularies”, W3C Working Draft, Mar 15, 2001
46. A. Held, S. Buchholz, A. Schill, “Modeling of Context Information for Pervasive Computing Applications”, Proc. of the 6th World Multiconference on Systemics, Cybernetics and Informatics (SCI2002), Orlando, FL, Jul 2002
47. J. Bauer, Identification and Modeling of Contexts for Different Information Scenarios in Air Traffic, PhD Thesis
48. Esprit Project 26900: Technology for Enabled Awareness (TEA), 1998
49. K. Cheverst, K. Mitchell, N. Davies, Design of an Object Model for a Context Sensitive Tourist GUIDE, Proceedings of the International Workshop on Interactive Applications of Mobile Computing (IMC98), Rostock, Germany, November 1998
50. J. McCarthy, Notes on Formalizing Context, in Proc. of the 13th International Joint Conference on Artificial Intelligence, California, 1993
51. CyberGuide Project, http://www.cc.gatech.edu/fce/cyberguide/
52. P. J. Brown, The Stick-e Document: A Framework for Creating Context-Aware Applications, in Proceedings of EP’96, Palo Alto, 1996
53. The PARCTAB Project, http://www.ubiq.com/parctab/
Razzaque, Simon Dobson, and Paddy Nixon, “Categorization and Modelling of Quality in Context Information”, 2005, http://csiweb.ucd.ie/UserFiles/publications/1124274826156.pdf 42. K. Henriksen, J. Indulska, A. Rakotonirainy, Modeling Context Information in Pervasive Computing Systems, In Proc 1st International Conference on Pervasive Computing, Zurich, Switzerland, Springer (2002) 43. T. Strang, C Linnhoff-Popien, A context modeling survey. In: UbiComp 1st International Workshop on Advanced Context Modelling, Reasoning and Management, Nottingham (2004) 44. K. Henricksen, J. Indulska, A. Rakotonirainy. “Modeling Context Information in Pervasive Computing Systems,” Proceedings Pervasive 2002, 2002. 45. G. Klyne, F. Reynolds, C. Woodrow, H. Ohto, “Composite Capability/Preference Profiles (CC/PP): Structure and Vocabularies”, W3C Working Draft, Mar 15, 2001. 46. Held, A., Buchholz, S., Schill, A. “Modeling of Context Information for Pervasive Computing Applications” Proc. of the 6th World Multiconference on Systemics, Cybernetics and Informatics (SCI2002), Orlando, FL, Jul 2002 47. J. Bauer, Identification and Modeling of Contexts for Different Information Scenarios in Air Traffic, PhD Thesis 48. Esprit Project 26900: Technology for enabled awareness (tea), 1998 49. Cheverst, K., Mitchell, K., Davies, N. (1998). Design of an Object Model for a Context Sensitive Tourist GUIDE. Proceedings of the International Workshop on Interactive Applications of Mobile Computing (IMC98), Rostock, Germany, November (1998) 50. J. McCarthy, Notes on formalizing contexts, in Proc of the 13th International Joint Conference on Artificial Intelligence, California, 1993 51. CyberGuide Project, http://www.cc.gatech.edu/fce/cyberguide/ 52. P.J. Brown, The stick-e document: a framework for creating context-aware applications, in In Proceedings of EP’96, Palo Alto, 1996 53. The PARCTAB Project, http://www.ubiq.com/parctab/
8 Context in Pervasive Environments
203
54. R. B. Yeh and S. Klemmer. Field Notes on Field Notes: Informing Technology Support for Biologists, 55. Banavar, G., Beck, J., Gluzberg, E., Munson, J., Sussman, J., and Zukowski, D. 2000. Challenges: an application model for pervasive computing. In Proceedings of the 6th Annual international Conference on Mobile Computing and Networking (Boston, Massachusetts, United States, August 06–11, 2000). MobiCom ‘00. ACM, New York, NY 56. Garlan, D.; Siewiorek, D.P.; Smailagic, A.; Steenkiste, P., “Project Aura: toward distractionfree pervasive computing,” Pervasive Computing, IEEE, vol.1, no.2, pp. 22–31, Apr–Jun 2002 57. Dantas, F.; Batista, T.; Cacho, N., “Towards Aspect-Oriented Programming for ContextAware Systems: A Comparative Study,” Software Engineering for Pervasive Computing Applications, Systems, and Environments, 2007. SEPCASE ‘07. First International Workshop on, vol., no., pp.4-4, 20–26 May 2007 58. P. Costanza, R. Hirschfeld, Language Constructs for Context Oriented Programming, An Overview of ContextL, Proceedings of the 2005 symposium on Dynamic languages, California, 2005 59. Rakotonirainy, Andry (2003) How to Program Pervasive Systems. In Proceedings 14th International Workshop on Database and Expert Systems Applications, pages pp. 947–948, Prague, Czech Republic 60. L. Cardelli, A.D. Gordon, Mobile Ambients, Theoretical Computer Science, 240/1, 2000 61. Pascal Costanza and Robert Hirschfeld. Language Constructs for Context-oriented Programming - An Overview of ContextL. In Proceedings of the Dynamic Languages Symposium (DLS), colocated with the Conference on Object Oriented Programming Systems Languages and Applications (OOPSLA), San Diego, California, USA, October 18, 2005 62. Robert Hirschfeld, Pascal Costanza, and Michael Haupt. An Introduction to Context-oriented Programming with ContextS. 
In Proceedings of the 2nd Summer School on Generative and Transformational Techniques in Software Engineering (GTTSE 2007), Braga, Portugal, July 2–7, 2007 63. Context J, ContextPh http://www.swa.hpi.uni-potsdam.de/cop/ 64. http://javolution.org 65. G. Kiczales, J. Lamping, A. Mendhekar, C. Maeda, C. Lopes, J.-M. Loingtier, and J. Irwin. Aspect-oriented programming. In M. Ak_sit and S. Matsuoka, editors, 11th Europeen Conference Object-Oriented Programming, 1997. 66. D. Batory. Feature models, grammars, and propositional formulas. In SPLC ‘05: Proceedings of 9th International Software Product Line Conference, September 2005. 67. Brecht Desmet, Jorge Vallejos, Pascal Costanza, Robert Hirschfeld, Layered design approach for context-aware systems
Chapter 9
Achieving Co-Operation and Developing Smart Behavior in Collections of Context-Aware Artifacts

Christos Goumopoulos, Achilles Kameas
Abstract One of the most exciting and important recent developments in ubiquitous computing (UbiComp) is to make everyday appliances, devices, and objects context aware. A context-aware artifact uses sensors to perceive the context of humans or other artifacts and to respond sensibly to it. Adding context awareness to artifacts can increase their usability and enable new interactions and user experiences. The aim of the research and development work discussed here is to examine how artifact collections (or ambient ecologies, a metaphor introduced for modelling UbiComp applications) can be made to work together and provide functionality that exceeds the sum of their parts. The underlying hypothesis is that even if an individual artifact has limited functionality, it can give rise to more advanced behaviour when grouped with others. The realization of this hypothesis is possible by providing appropriate abstractions and a new affordance (composeability) that objects acquire. Specifically, our contribution is, firstly, to discuss the conceptual abstractions and formal definitions used to model such artifact collections, which are inherent to ubiquitous computing, and, secondly, to discuss engineering guidelines for building ubiquitous computing applications based on well-known design principles and methods of analysis. It is then argued that the process where people configure and use complex collections of interacting artifacts can be viewed as having much in common with the process where system builders design software systems out of components. The design space consists, in this view, of a multitude of artifacts, which people (re)combine in dynamic, ad-hoc ways. Artifacts are treated as reusable "components" of a dynamically changing physical/digital environment, which involves people. 
In a nutshell, in this work we have attempted to define ambient ecologies, specify design patterns and programming principles, and develop infrastructure and tools to support ambient ecology designers, developers and end-users. Keywords ubiquitous computing; ambient ecologies; formal model; context-aware artifacts; design patterns; ontology; composeability; middleware; intelligent systems Computer Technology Institute, Distributed Ambient Information Systems Group, Patras, Hellas
K. Delaney, Ambient Intelligence with Microsystems, © Springer 2008
1 Introduction
An important characteristic of Ubiquitous Computing (UbiComp) environments is the merging of physical and digital space (i.e. tangible objects and physical environments are acquiring a digital representation). As the computer disappears into the environments surrounding our activities, the objects therein become augmented with Information and Communication Technology (ICT) components (i.e. sensors, actuators, processors, memory, wireless communication modules) and can receive, store, process and transmit information; in the following, we shall use the term "artifacts" for this type of augmented object. Individually, artifacts may have a small range of capabilities, but together they can exhibit a much broader range of behaviours. Consequently, the true potential of all of these disappearing computers is realised once they are interconnected in digital space to form combinations of artifacts or services that accomplish the goals of their user(s). Because such units can be re-configured or recombined, either by people or by another supervisory authority, their collective behaviour is neither static nor random, and collections of artifacts can evolve to produce new behaviours. Smart behavior, then, at either the individual or the collective level, is possible because of the artifacts' abilities to perceive and interpret their environment (peer artifacts being themselves a part of an artifact's environment). Artifacts, that is, everyday appliances, devices, and objects, become context aware [1]. The aim of the research and development work to be presented in this chapter is to examine how artifact collections (or ambient ecologies, a term to be introduced in Section 2) can be made to work together and provide functionalities that exceed the sum of their parts. 
Specifically, our contribution is, firstly, to discuss the conceptual abstractions and formal definitions used to model such artifact collections, which are inherent to ubiquitous computing, and, secondly, to discuss engineering guidelines for building UbiComp applications based on well-known design principles and methods of analysis. The realization of our main hypothesis (even if an individual artifact has limited functionality, it can cause the emergence of more advanced behaviour when grouped with others) is possible by providing appropriate abstractions and a new affordance (composeability) that the objects acquire. Composeability can give rise to new collective functionality as a result of a dynamically changing number of well-defined interactions among artifacts. Composeability is perceived by users through the presentation - via the object's digital self - of the object's connectable capabilities, thus giving users the possibility to establish connections and compose applications out of two or more objects. In implementation terms, this is achieved via a communication unit that artifacts possess and the provision of semantic descriptions of their services. It is then argued that the process where people configure and use complex collections of interacting artifacts can be viewed as having much in common with the process where system builders design software systems from components. The design space consists, in this case, of a multitude of artifacts that people
(re)combine in dynamic, ad-hoc ways. Artifacts are treated as reusable “components” of a dynamically changing physical/digital environment, which involves people. Naturally, the idea of building UbiComp applications out of components is possible only in the context of a supporting component framework which acts as a UbiComp middleware.
1.1 A Motivating Scenario
In order to investigate the previously discussed research direction, we define an illustrative scenario that we have started to develop. In the following sections we highlight the parts of this scenario that strongly suggest the use of a component-based approach and explain why composeability is a key factor in this vision. It's 7:30 in the morning. Catherine, an administrative officer in Brussels, is still sleeping when the alarm clock rings to wake her up. In the meantime the venetian blinds in the bedroom open automatically, as do the blinds in the kitchen, and the mp3 player turns on, playing a random song from her favourite music list. As she leaves her bed the coffee machine and the toaster start automatically and the bath heater is activated. After taking her morning shower, while she prepares her breakfast, she decides that she would like to change her usual menu for that day. She calls up the recipe inventory on the refrigerator's screen and selects her favourite Chinese recipe through the touch-screen interface. This new selection initiates a check of the supplies inventory, which indicates that several ingredients are missing. Those ingredients, and possibly others that are nearly exhausted, are ordered automatically through the web ordering service of the nearest supermarket, while an SMS (Short Message Service) message informs the housekeeper so that she can collect the shopping on her way to the house. Catherine is ready to leave her home when her smart plant alerts her that it needs water. The notification she receives depends upon her location context: when she is inside the house, she gets the message through the nearest display object that has been set up for that purpose; when she is outside the house, she gets an SMS. She then waters the plant, and when she leaves her house a taxi is waiting for her outside in the street. 
The taxi was called for her a few minutes earlier through a conversation between her smart calendar and the appointment Web service of the taxi center. Later in the afternoon Catherine decides to use her smart office to read a book. Objects like books, the chair, the desk, and the lights are instrumented with appropriate sensors that can capture the intent of the user and cooperate to provide an appropriate service, e.g., turning on the desk lamp. While she is reading her book Catherine receives a notification. It is her friend Amanda inviting her to go for a walk in the nearby park. An awareness visualization object (for example, a personal item like an electronic bunny moving its ears) near her desk is used as a means for their informal social communication.
1.2 Outline
The remainder of this work is organized as follows. In Section 2 the concept of ambient ecologies is introduced, describing a space populated by connected devices and services that are interrelated with each other, the environment and the people. Aspects of programming ambient ecologies inspired by the component software engineering paradigm are discussed. Section 3 provides, in a formal manner, the basic elements of our conceptual model for programming ambient ecologies, specifying terms such as artifact, ambient ecology, state, transition and behavior modeling. In the following section the Gadgetware Architectural Style (GAS) is briefly presented as the consistent conceptual and technical referent among artifact designers and application designers. Section 5 puts the conceptual framework in perspective and refines the theoretical work into an application-engineering paradigm. An example is used to demonstrate the features of our definitions. Section 6 discusses the supporting framework for achieving co-operation and developing smart behavior in collections of context-aware artifacts. The framework provides a runtime environment for building applications from artifact components, as well as tools and programming principles in the form of a design pattern to support application designers and developers. Related work is presented in Section 7. Finally, we conclude with a discussion of the approach presented and lessons learned, as well as of issues and research problems that may arise regarding the adoption of such assembled systems.
2 The Emergence of Ambient Ecologies
Thanks to developments in electronic hardware, miniaturization and cost reduction, it is possible nowadays to populate everyday environments (e.g., home, office, car, etc.) with "smart" devices for controlling and automating various tasks in our daily lives. At the dawn of the ubiquitous computing era, an even larger number of everyday objects will become computationally enabled, while micro/nano sensors will be embedded in most engineered artifacts, from the clothes we wear to the roads we drive on. All of these devices will be networked using wireless technologies evolved from Bluetooth [2], Zigbee [3] or IEEE 802.11 [4] for short-range connectivity. Furthermore, the omnipresence of the Internet via phone lines, wireless channels and power lines facilitates ubiquitous networks of smart devices that will significantly change the way we interact with (information) appliances and can open enormous possibilities for innovative applications, as advocated by interaction design experts [5, 6]. The Merriam-Webster Online dictionary defines the word ecology as "the interrelationship of organisms and their environments" and the word ambient as "existing in the surrounding area". We use the ambient ecology metaphor to conceptualize a space populated by connected devices and services that are interrelated with each
other, the environment and the people, supporting the users' everyday activities in a meaningful way. Everyday appliances, devices, and context-aware artifacts are part of ambient ecologies. A context-aware artifact uses sensors to perceive humans or other artifacts and to respond sensibly. Adding context awareness to artifacts can increase their usability and enable new user interactions and experiences. Given this fundamental capability, single artifacts have the opportunity to participate in artifact-based service orchestration, ranging from simple co-operation to developing smart behavior. Integrating such systems will not only enable ambient ecologies; the ecology can also partially drive its members' interactions. For example, given a collocated set of grocery items, an application might search for recipes and display them on a kitchen screen; once the user confirms the recipe, various appliances could be preset according to the cooking instructions. So, the same ecology enables actions on appliances and artifacts based on the contexts of appliances, artifacts, and users. Through the ecology, appliances and artifacts become aware of each other. In general, the context information that the system uses in reasoning can concern a particular device, appliance, or user, or a collection of such entities. In turn, an entity might be aware only of its own context or of that of a specific group of entities. Furthermore, an entity can respond individually to its perceived context or to the group's, or a higher-level application might coordinate a response among various devices. In this work we address the need for a high level of abstraction to describe how context-aware artifacts can work together and to manage their interaction for building ambient ecologies. 
The presented approach is based on an ontological model of components in which artifacts represent everyday objects; hence (a) their services are affected by their physical properties, (b) their context of operation is defined by the existence/availability of objects and (c) their collective functionality emerges from a set of interactions among them. Regarding the evolution of ambient ecologies at the level of artifacts, we may benefit from borrowing the notion of agents' properties [7]. The concept of an intelligent agent refers to a software entity endowed with specific properties such as persistence, context awareness, proactivity, continuity of operation and interactivity with one or more users or other similar entities in a shared environment. These properties require reasoning capabilities and control mechanisms for ensuring agent autonomy.
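As a concrete illustration of ecology-driven behaviour, the grocery/recipe example above can be reduced to a rule over the ecology's aggregated context. The following is a minimal Python sketch; the item names, the recipe index and the dictionary-based artifact representation are invented for illustration and are not part of the framework described in this chapter.

```python
# Toy sketch: the ecology aggregates each member's context, and a rule
# fires on the *collective* state rather than on any single artifact.

RECIPES = {                       # hypothetical recipe index
    "tomato soup": {"tomato", "onion", "stock"},
    "stir fry":    {"noodles", "soy sauce", "pepper"},
}

def sensed_items(ecology):
    """Union of the items each context-aware artifact currently perceives."""
    items = set()
    for artifact in ecology:
        items |= artifact.get("sensed", set())
    return items

def matching_recipes(ecology):
    """Recipes whose required ingredients are all present somewhere in the ecology."""
    available = sensed_items(ecology)
    return sorted(name for name, needed in RECIPES.items() if needed <= available)

ecology = [
    {"id": "kitchen-counter", "sensed": {"tomato", "onion"}},
    {"id": "fridge",          "sensed": {"stock", "pepper"}},
]
print(matching_recipes(ecology))  # -> ['tomato soup']
```

Note that no single artifact perceives a complete recipe; the match only exists at the level of the ecology's combined context, which is exactly the "exceeds the sum of its parts" point made above.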
2.1 Programming Ambient Ecologies
Since distributed and concurrent systems have become the norm, some researchers are putting forward theoretical models that portray computing as primarily a process of interaction. A challenge related to the programming of ambient ecologies is to establish a common language so that artifacts can actually interact with each other and function in a collaborative manner. The main reason for these semantic interoperability difficulties is the heterogeneity of devices and the large variety of their
embedding contexts. Devices take part in several activities of our daily lives, including environmental control, lighting, alarm systems and security, telecommunication, cooking, cleaning and entertainment. There exists a vast number of potential scenarios for integrating such devices. It is not possible to foresee all possible applications and equip devices with functionality that enables collaboration with every other device a customer would like to integrate. Consequently, there is a need for customization mechanisms that can be used for integrating different artifacts into a common process exemplified within the ambient ecology. Such customization mechanisms can be seen as the "programming language" for ambient ecologies. Primary requirements for such a programming language are ease of use and rapid deployment. Effective programming mechanisms for ambient ecologies require innovative paradigms that lift programming to a level of abstraction similar to plugging in a new stereo or TV set. We propose a model which provides a convenient abstraction for the development of small to medium-sized ubiquitous computing applications. These systems are powerful enough to support the everyday activities of people (such as home control, shopping, entertainment, etc.); thus, we expect that most user-developed systems will fall into these categories. When more complex systems must be developed (i.e. involving over a dozen interacting artifacts or more than one user), the direct management of interactions becomes difficult, as several issues will then become important and demand the user's attention. These include how goals and tasks can be distributed over artifacts, how the distributed control can be coordinated in order to ensure that the overall system requirements are addressed, how the system can be configured with minimum user intervention, etc. 
Although in principle such issues can be addressed via direct manipulation, the cognitive load imposed on the user and the extended learning curve may affect the adoption and utilization of the system. We attempt to address this problem by developing end-user tools which would provide abstractions of the applications and support semantically rich interaction. For example, an agent that could learn how users act in their environment could receive user requirements and propose sets of connections to realize desired behaviours.
2.2 The Enabling Paradigm – Component Software
Component-based software systems are assembled from a number of pre-existing software modules called "software components". Thus, software components should be made (re)usable in many different application contexts. Using the component paradigm has various benefits: it increases the degree of abstraction during programming, provides proven (error-free) solutions for certain aspects of the application domain, increases productivity, and facilitates the maintenance and evolution of software systems. Component-based
software development has become an important part of modern software engineering methods [8]. For example, lightweight components (i.e., fairly small in size) have become part of modern programming languages (e.g., the Swing library within Java). Our approach uses the principles of software component technology as an enabling paradigm for describing the process whereby people configure and use complex collections of interacting artifacts [9]. According to this paradigm, a component in the UbiComp domain is an artifact, physical or digital, which is independently built and delivered as an autonomous functional unit. It offers interfaces by which it can be connected with other components to compose a larger system, without compromising its shape or functionality. This definition emphasizes the fact that a component provides functionality in terms of services via well-defined interfaces, by sending messages to, and receiving messages from, other components and performing its computation in response to the receipt of triggering events. It also emphasizes the black-box nature of components, which represents the encapsulation of their implementation details. An interface is a description of a set of operations related to the external specification of a component. An interface consists of the artifact's properties and capabilities, a set of operations that a component needs to access in its surrounding environment (required interface) and a set of operations that the surrounding environment can access on the given component (provided interface). An operation is a unit of functionality implemented by a component, which may map to a method, a function or a procedure. Although our approach for composing UbiComp applications builds on the foundations of established software development approaches such as object-oriented design and component frameworks, it extends these concepts by exposing them to the end-user, to be used and configured in dynamic and ad hoc ways. 
In contrast to the majority of component-based models, which have focused on software components with an emphasis on supporting the programmer, our component model embraces a heterogeneous collection of artifacts in a way that is easily comprehensible to end-users. To achieve this, composition is kept as simple as possible, although some reduction in expressiveness follows. The analogy with software components, upon which the notion of artifact components relies, leads naturally to a visualization of component-based ubiquitous applications as a network of boxes communicating with each other via connecting wires. The component-based architectural abstraction is common in several engineering disciplines (e.g. software, buildings, etc.). Due to the properties of the digital self of artifacts, users can conceptualize their tasks in a variety of ways, such as stimulus-desired response, rules, sequences and constraints between entities, etc. Consequently, there will always be an initial gap between their intentions and the resulting functionality of an artifact composition, which they will have to bridge based on the experience they develop through a trial-and-error process.
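The component view of an artifact described above — a black box exposing provided operations and declaring required ones, wired to other components by the end-user — can be sketched in a few lines. This is an illustrative Python sketch, not the GAS implementation; the class name, the chair/desk-lamp example (echoing the reading scenario of Section 1.1) and the operation names are our own.

```python
# Sketch: an artifact as a component with provided and required interfaces;
# composition wires one component's requirement to another's provision.

class ArtifactComponent:
    def __init__(self, name, provided=None, required=None):
        self.name = name
        self.provided = dict(provided or {})           # operation name -> callable
        self.required = dict.fromkeys(required or [])  # operation name -> bound callable (or None)

    def bind(self, op_name, provider, provider_op):
        """Wire this component's required operation to another's provided one."""
        self.required[op_name] = provider.provided[provider_op]

    def call(self, op_name, *args):
        """Invoke a required operation through its binding (the 'connecting wire')."""
        op = self.required.get(op_name)
        if op is None:
            raise RuntimeError(f"{self.name}: required operation '{op_name}' is unbound")
        return op(*args)

lamp = ArtifactComponent("desk-lamp",
                         provided={"switch": lambda on: "lamp on" if on else "lamp off"})
chair = ArtifactComponent("chair", required=["seat-occupied"])
chair.bind("seat-occupied", lamp, "switch")   # end-user composition step
print(chair.call("seat-occupied", True))      # -> lamp on
```

The chair knows nothing about lamps: it only declares that it needs a `seat-occupied` reaction, and the binding step — the end-user's "plugging in" — decides which provided operation satisfies it.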
3 A Conceptual Model for Programming Ambient Ecologies
In practical terms, conceptual modeling is at the core of systems analysis and design. One category of approaches towards developing the theoretical foundations of conceptual modeling draws on ontology. Our approach is based on the so-called Bunge-Wand-Weber (BWW) ontology, which describes a set of models for modeling information systems [10]. The BWW ontology is based on the scientific and dialectical-materialist ontology developed by Mario Bunge [11, 12]. Basic constructs of the BWW ontology, which have been used as a starting point for our work, include:

● Thing: "The world is made of things that have properties".
● Composite Thing: "A composite thing may be made up of other things (composite or primitive)".
● Conceivable State: "The set of all states that the thing may ever assume".
● Transformation of a Thing: "A mapping from a domain comprising states to a co-domain comprising states".
● Property: "We know about things in the world via their properties".
● Mutual Property: "A property that is meaningful only in the context of two or more things".
● System: "A set of things will be called a system, if, for any partitioning of the set, interactions exist among things in any two subsets".
In the following section, we elaborate on those concepts, taking into consideration the requirements of the UbiComp application domain. We extend the concept of Thing to that of eEntity and the concept of Composite Thing to that of Ambient Ecology. An artifact is defined as a special case of eEntity. The dynamic behavior of artifacts is modeled with statecharts, which incorporate states and events. New concepts, such as the plug and the synapse, are introduced in order to provide a detailed representation of the interaction among artifacts.
3.1 Basic Elements
Our model defines the logical elements necessary to support a variety of applications in smart spaces. Its basic definitions are given below. A graphical representation of the concepts and the relations between them is given as a UML class diagram in Fig. 9.1. eEntity: An eEntity is the programmatic bearer of an entity (i.e. a person, place, object, another biological being or a composition of them). An eEntity constitutes the basic component of an Ambient Ecology. ‘e’ stands here for extrovert. Extroversion is a central dimension of human personality, but in our case the term is borrowed to denote the (acquired through technology) competence of an entity to interact with other entities in an augmented way for the purpose of meaningfully
[Fig. 9.1 UML model of the ambient ecology concept — a class diagram relating eEntity, Ambient Ecology, Artifact, Service, Plug, Synapse and Property]
supporting the users' everyday activities. This interaction is mainly related either to the provision or to the consumption of context and services between the participating entities. A coffee maker, for instance, publishes its service to boil coffee, while context for a person may denote her activity and location. An augmented interaction between the coffee maker and the person is the activation of the coffee machine when the person wakes in the morning. For this to happen we will probably need a bed instrumented with pressure sensors (an artifact) and a reasoning function for the person's process of waking, which may not be trivial to describe. An eEntity's properties may be distinguished as structural, which refer to the entity itself; relational, which relate the entity to other entities; and behavioral, which determine possible changes to the values of structural and relational properties. Artifacts: An artifact is a tangible object (biological elements like plants and animals are also possible here; see [13]) which bears digitally expressed properties. Usually, it is an object or device augmented with sensors, actuators, processing and networking, or a computational device that already has some of the required hardware components embedded. Software applications running on computational devices are also considered to be artifacts. Examples of artifacts include furniture, clothes, air conditioners, coffee makers, a software digital clock, a software music player, a plant, etc. Services: Services are resources capable of performing tasks that form a coherent functionality from the point of view of provider entities and requester entities. Services communicate only through their exposed interfaces. Services are self-contained, can be discovered and are accessible through signatures. Any functionality expressed by a service descriptor (a signature and accessor interface that describes what the service offers, what it requires and how it can be accessed) is available within the service itself. 
Ambient Ecology: Two or more eEntities can be combined in an eEntity synthesis. Such syntheses are the programmatic bearers of Ambient Ecologies and can be regarded as service compositions; their realization can be assisted by end-user tools. Since the same eEntity may participate in many Ambient Ecologies, the whole-part relationship is not exclusive. In the UML class diagram (see Fig. 9.1) this is implied by using the aggregation symbol (hollow diamond) instead of the composition symbol (filled diamond). Ambient Ecologies are synthesizable, since an Ambient Ecology is an eEntity itself and can participate in another Ecology.

Properties: Entities have properties, which collectively represent their physical characteristics, capabilities and services. A property is modeled as a function that either evaluates an entity’s state variable into a single value or triggers a reaction, typically involving an actuator. Some properties (e.g. physical characteristics, a unique identifier) are entity-specific, while others (e.g. services) are not. For example, attributes like color, shape and weight represent properties that all physical objects possess, whereas the ‘light’ service may be offered by different objects. A property of an entity composition is called an emergent property. All of an entity’s properties are encapsulated in a property schema, which can be sent on request to other entities or tools (e.g. during entity discovery).

Functional Schemas: An entity is modeled in terms of a functional schema: F = {f1, f2 … fn}, where each function fi gives the value of an observed property i at
time t. Functions in a functional schema can be as simple or as complex as required to define the property. They may range from single sensor readings, through rule-based formulas involving multiple properties, to first-order logic, so that we can quantify over sets of artifacts and their properties.

State: The values of all property functions of an entity at a given time represent the state of the entity. For an entity E, the set P(E) = {(p1, p2 … pn) | pi = fi(t)} represents the state space of the entity. Each member of the state vector represents a state variable. The concept of state is useful for reasoning about how things may change. Restrictions on the value domain of a state variable are then possible.

Transformation: A transformation is a transition from one state to another. A transformation happens either as a result of an internal event (e.g. a change in the state of a sensor) or after a change in the entity’s functional context (as it is propagated through the synapses of the entity).

Plugs: Plugs represent the interface of an entity. An interface consists of a set of operations that an entity needs to access in its surrounding environment and a set of operations that the surrounding environment can access on the given entity. Thus, plugs are characterized by their direction and data type. Plugs may be output (O), where they manifest their corresponding property (e.g. as a provided service), input (I), where they associate their property with data from other artifacts (e.g. as service consumers), or I/O, when both happen. Plugs also have a certain data type, which can be either a semantically primitive one (e.g. integer, boolean, etc.) or a semantically rich one (e.g. image, sound, etc.). From the user’s perspective, plugs make visible the entities’ properties, capabilities and services to people and to other entities.

Synapses: Synapses are associations between two compatible plugs.
In practice, synapses relate the functional schemas of two different entities. When a property of a source entity changes, the new value is propagated through the synapse to the target entity. The initial change of value caused by a state transition of the source entity causes a state transition in the target entity. In that way, synapses are a realization of the functional context of the entity.
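As a rough illustration of this propagation mechanism, the following Python sketch models entities whose property changes flow across synapses to target entities; the class and method names are assumptions for illustration, not the actual middleware API.

```python
# Illustrative sketch of plug/synapse propagation between entities.
# All names (Entity, connect, set_property) are assumptions.

class Entity:
    def __init__(self, name):
        self.name = name
        self.properties = {}   # property name -> current value
        self.synapses = []     # (local property, target entity, target property)

    def connect(self, prop, target, target_prop):
        """Create a synapse from a local output plug to a remote input plug."""
        self.synapses.append((prop, target, target_prop))

    def set_property(self, prop, value):
        """A state transition: update a property and propagate it via synapses."""
        self.properties[prop] = value
        for src, target, dst in self.synapses:
            if src == prop:
                target.set_property(dst, value)  # causes a transition in the target

# Wake-up scenario: the instrumented bed drives the coffee maker.
bed = Entity("eBed")
coffee_maker = Entity("eCoffeeMaker")
bed.connect("PersonAwake", coffee_maker, "BrewRequested")
bed.set_property("PersonAwake", True)
assert coffee_maker.properties["BrewRequested"] is True
```

The recursive call in `set_property` mirrors how the initial state transition of the source entity eventually causes a state transition in the target.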
3.2
Formal Definitions
To formally define the artifact and ambient ecology constructs we first introduce three auxiliary concepts: the domain D is a set which does not include the empty element; P is an arbitrary finite set, called the set of properties or property schema, each element p of which is associated with a subset of D, denoted τ(p) and called the type of p; τ is thus a function that defines the set of all elements of D that can be values of a property. The domain D may include values from any primitive data type, such as integers, strings and enumerations, or semantically rich ones, such as light, sound and image.
3.2.1
Artifact
An artifact is a 4-tuple A of the form (P, F, IP, OP) where:
● P is the artifact’s property schema;
● F is the artifact’s functional schema;
● IP is a set of properties (ip1, ip2, …, ipn), for some integer n ≥ 0, that are imported from other artifacts (corresponding to input plugs);
● OP is a set of properties (op1, op2, …, opm), for some integer m ≥ 0, that are exported to other artifacts (corresponding to output plugs).
The role of artifacts in an ambient ecology can be seen as analogous to that of primitive components in a component-based system. In that sense they provide services implemented using any formalism or language. Plugs (input and output) provide the interface through which the artifact interacts with other artifacts. The functionality of an artifact is implemented through its functional schema F. In general an artifact produces data on its OP set in response to the arrival of data at its IP set. There are two special cases of artifacts:
● a source artifact is one that has an empty IP set;
● a sink artifact is one that has an empty OP set.
From the point of view of the application in which it is embedded, a source artifact generates data. For example, an eClock generates an alarm event to be consumed by other artifacts. On the other hand, a sink artifact receives input data through its input plugs but produces no data. For example, the eBlinds artifact receives the awake event from the eClock and opens the blinds without producing any new data.
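The 4-tuple and its source/sink special cases can be encoded directly; the sketch below is illustrative (the `Artifact` class and its fields are assumptions based on the definitions above, not an existing implementation).

```python
# Hypothetical encoding of the artifact 4-tuple A = (P, F, IP, OP).
from dataclasses import dataclass, field

@dataclass
class Artifact:
    P: set                                  # property schema
    F: dict                                 # functional schema: property -> function
    IP: set = field(default_factory=set)    # imported properties (input plugs)
    OP: set = field(default_factory=set)    # exported properties (output plugs)

    def is_source(self) -> bool:
        return not self.IP                  # empty IP set: only generates data

    def is_sink(self) -> bool:
        return not self.OP                  # empty OP set: only consumes data

# The eClock/eBlinds pair from the text: the clock exports an Alarm property,
# the blinds import it and export nothing.
eClock = Artifact(P={"Alarm"},
                  F={"Alarm": lambda s: s.get("time") == "07:00"},
                  OP={"Alarm"})
eBlinds = Artifact(P={"Open"}, F={}, IP={"Alarm"})
assert eClock.is_source() and not eClock.is_sink()
assert eBlinds.is_sink()
```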
3.2.2
Ambient Ecology as a Composite Artifact
Ambient ecologies are synthesizable, since an ambient ecology is an entity itself and can participate in another ecology. We can therefore formally define an ambient ecology as a 5-tuple σ of the form (C, E, S, IP, OP), where:
● C is the set of constituent artifacts (see the previous section for the artifact definition), not including σ, at time t. It follows that the composition of σ at time t is: Θ(σ, t) = {x | x ∈ C}
● E is the surrounding environment, i.e. the set of entities that do not belong to C but interact with artifacts that belong to C at time t. It follows that the surrounding environment of σ at time t is: Π(σ, t) = {x | x ∉ Θ(σ, t) ∧ ∃y ∈ Θ(σ, t) ∧ ∃d(x, y) ∈ S}, where d(x, y) denotes the existence of a synapse between x and y.
● S is a set of synapses, i.e. a set of pairs of the form (source, target), such that if d is a synapse, then: source(d) is either an input plug of σ or an output plug of an element of C; target(d) is a set of properties of σ not containing source(d); and, for each target r of d, τ(source(d)) ⊆ τ(r). It follows that the interconnection structure of σ at time t is: ∆(σ, t) = {d(x, y) | x, y ∈ Θ(σ, t)} ∪ {d(x, y) | x ∈ Θ(σ, t) ∧ y ∈ Π(σ, t)}
● IP is a set (possibly empty) of distinct properties that are imported from the surrounding environment (corresponding to input plugs);
● OP is a set (possibly empty) of distinct properties, called emergent properties, that are exported to the surrounding environment (corresponding to output plugs).
Auxiliary to the above, the property schema of σ is defined as the set: IP ∪ OP ∪ {p | ∃x ∈ C ∧ p ∈ P(x)}, where P(x) is the property schema of constituent artifact x; ∀x, y ∈ C, the sets IP, OP, P(x) and P(y) are pairwise disjoint.

A composite artifact is thus a set of artifacts interconnected through synapses. A synapse associates an output plug of one artifact (the source of the synapse) with the input plugs of one or more other artifacts (the targets of the synapse). A synapse reflects the flow of data from source to targets. Each target should be able to accept any value it receives from the source, so its type must be a subset of the type of the source. Synapses cause the interaction among artifacts and the coupling of their execution: when a property of a source artifact changes, the new value is propagated via the synapse to the target artifact. The initial change of value, caused by a state transition of the source artifact, eventually causes a state transition of the target artifact, and thus their execution is coupled.
3.2.3
States, Transitions and Behavior Modeling

A state over a property schema P is a function f: P → D such that f(p) ∈ τ(p) ∀p ∈ P. A state is an assignment of values to all properties. The dynamics of an artifact are described in terms of its changes of state. When an artifact a undergoes a state change, the value of at least one of its properties will alter. A change of state constitutes an event. Thus an event may be defined as an ordered pair 〈k, k′〉, where k, k′ are states in the state space of a. If a is an artifact (it can be a composite one) and k1 is a state, then an execution of a from k1 is a sequence of the form k1 → k2 → k3 → … → kn. For each i > 1, three kinds of transitions are identified:
1. ki is a propagation of ki-1;
2. otherwise ki is a derivation of ki-1;
3. otherwise ki is an evaluation of ki-1.
The propagation is the simplest transition, as it simply copies values that have been generated by an artifact along the synapses from the artifact’s output plug to the other artifacts. These values may arrive at input plugs of some artifacts, which can accordingly trigger an evaluation of the artifact’s function(s). The derivation is a composite transition, which incorporates the propagation and evaluation of a relational property at the synapse level. The derivation logically associates (using logical operators) the properties that are found at the end-points of the synapse, essentially deriving a new relational property, which serves as an input plug to subsequent evaluation. The evaluation transition refers to a situation where the input plugs of an artifact have been defined through propagation or derivation transitions and the function(s) of the artifact can be executed, so that the results are passed to its output plugs. Based on the above discussion it emerges that a natural way to model the behavior of artifacts, and the behavior of ambient ecologies viewed as assemblies of artifacts, is to use statechart diagrams. Statecharts are a familiar technique for describing the behavior of a system. They describe all of the possible states that a particular object can have and how the object’s state changes as a result of events that reach the object. In principle, a statechart is a finite-state machine whose visual appearance is greatly enhanced by a specialized graphical notation. Statecharts allow nesting of states (hierarchical statecharts). The expressive power of statecharts is enhanced by using the Object Constraint Language (OCL) for conditional triggering of communication events. Statecharts play a central role in object-oriented software engineering methodologies (e.g., the Unified Process) and are one of the diagram types supported by the UML standard [14]. The UML style is based on David Harel’s statechart notation [15].
Statecharts represent states by using rounded rectangles. Input and output control ports are attached to states, representing the states’ entry and exit points, respectively. Transitions between states are represented by arrows linking control ports of states. Statecharts may contain ports not attached to any state. These control ports refer to the entry/exit points of superstates. The states of a statechart define the states of the artifact and the links between the states define the events of an artifact.
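To make the statechart idea concrete, here is a minimal sketch of an event-driven state machine for a book artifact; nested states, OCL guards and entry/exit actions are omitted, and the state/event names are assumptions.

```python
# A miniature finite-state machine in the spirit of the statechart
# description: named states and event-driven transitions.

TRANSITIONS = {
    ("Closed", "book_opened"): "Opened",
    ("Opened", "book_closed"): "Closed",
}

def step(state: str, event: str) -> str:
    """Return the next state; events with no matching transition are ignored."""
    return TRANSITIONS.get((state, event), state)

s = "Closed"
s = step(s, "book_opened")
assert s == "Opened"
s = step(s, "page_turned")   # no transition defined: state is unchanged
assert s == "Opened"
```

A full statechart would add hierarchy (superstates) on top of this flat transition table, as the chapter describes for the ambient ecology as a whole.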
4
GAS Architectural Style
The ways that we can use an ordinary object are a direct consequence of the anticipated uses that object designers “embed” into the object’s physical properties. This association is in fact bi-directional: objects have been designed to be suitable for certain tasks, but it is also their physical properties that constrain the tasks people use them for. According to Norman [16] affordances “refer to the perceived and actual properties of the thing, primarily those fundamental properties that determine just how the thing could possibly be used”. Due to their “digital self”, artifacts can now publicize their abilities in digital space. These include properties (what the object is), capabilities (what the object
can do) and services (what the object can offer to others). At the same time, they acquire extra capabilities, which, during the formation of UbiComp applications (ambient ecologies), can be combined with the capabilities of other artifacts or adapted to the context of operation. Thus, artifacts offer two new affordances to their users:
● Composeability: artifacts can be used as building blocks of larger and more complex systems.
● Changeability: artifacts that possess or have access to digital storage can change or adapt their functionality. For example, an artifact can aggregate service information into its repository on behalf of artifacts that are less well equipped, facilitating in that way a service discovery process.
Both these affordances are a result of the ability to produce descriptions of properties, abilities and services, which carry information about the artifact in digital space. This ability improves object/service independence, as an artifact that acts as a service consumer may seek a service producer based only on a service description. For example, consider the analogy of someone wanting to drive a nail and asking not for the hammer, but for any object that could offer a hammering service (it could be a large flat stone). In order to be consistent with the physical world, the functional autonomy of UbiComp objects must also be preserved; thus, they must be capable of functioning without any dependencies on other objects or infrastructure. As a consequence, artifacts are characterized by the following basic principles:
● Self-representation: the digital representation of an artifact’s physical properties is tightly associated with its tangible self.
● Functional autonomy: artifacts function independently of the existence of other artifacts.
We have designed GAS (the Gadgetware Architectural Style) as a conceptual and technological framework for describing and manipulating UbiComp applications [9]. It consists of a set of architecture descriptions (syntactic domain) and a set of guidelines for their interpretation (semantic domain). GAS extends component-based architectures to the realm of tangible objects and combines a software architectural style with guidelines on how to physically design and manipulate artifacts. For the end-user, this model can serve as a high-level task interface; for the developer, it can serve as a domain model and a methodology. In both cases, it can be used as a communication medium, which people can understand and through which they can manipulate the “invisible computers” within their environment. GAS defines a vocabulary of entities and functions (e.g. plugs, synapses, etc.), a set of configuration rules (for interactively establishing associations between artifacts), and a technical infrastructure (the GAS middleware). Parts of GAS lie with the artifact manufacturers in the form of design guidelines and APIs, with people-composers in the form of configuration rules and constraints for composing artifact societies, and with the collaboration logic of artifacts in the form of communication protocol semantics and algorithms.
5
Application Engineering Paradigm
To achieve the desired collective functionality, based on the GAS architectural style, one forms synapses by associating compatible plugs, thus composing applications using entities as components. Two levels of plug compatibility exist: direction compatibility and data-type compatibility. According to direction compatibility, output or I/O plugs can only be connected to input or I/O plugs. According to data-type compatibility, plugs must have the same data type to be connected via a synapse; however, this restriction can be bypassed using value mapping in a synapse. No other limitation exists in making a synapse. Although this means that meaningless synapses are allowed, it has the advantage of letting the user create associations and cause the emergence of new behaviors that the artifact manufacturer may never have considered. Meaningless synapses can be seen as having much in common with runtime errors in a program: the program may compile correctly but does not manifest the behavior desired by the programmer. The idea of building UbiComp applications out of components is possible only in the context of a supporting component framework that acts as middleware. The kernel of such a middleware is designed to support basic functionality such as accepting and dispatching messages, managing local hardware resources (sensors/actuators), plug/synapse interoperability and a semantic service discovery protocol.
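The two compatibility checks can be expressed as a simple predicate; the plug representation below is a hypothetical sketch for illustration, not the GAS data model.

```python
# Sketch of the two plug-compatibility levels (direction and data type),
# with an optional value mapping that bridges differing data types.

def compatible(out_plug: dict, in_plug: dict, value_map: dict = None) -> bool:
    # Direction compatibility: output or I/O plugs connect only to input or I/O plugs.
    if out_plug["direction"] not in ("O", "I/O"):
        return False
    if in_plug["direction"] not in ("I", "I/O"):
        return False
    # Data-type compatibility: same type, unless a value mapping bridges them.
    return out_plug["type"] == in_plug["type"] or value_map is not None

alarm = {"direction": "O", "type": "boolean"}
brew = {"direction": "I", "type": "boolean"}
volume = {"direction": "I", "type": "integer"}

assert compatible(alarm, brew)                                  # types match
assert not compatible(alarm, volume)                            # types differ
assert compatible(alarm, volume, value_map={True: 10, False: 0})  # bridged
```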
5.1
Synapse-Based Programming
The introduction of synapse-based programming has been driven mainly by the previously discussed enabling paradigm, component software. Traditional software programs have followed the procedure-call paradigm, where the procedure is the central abstraction called by a client to accomplish a specific service. Programming in this paradigm requires that the client has intimate knowledge of the procedures (services) provided by the server. However, this kind of knowledge is not possible in an ambient ecology, because an ecology is based on artifacts that may come from different vendors and were developed separately. That is why ambient ecology programming requires a new programming paradigm, which we have called synapse-based programming. In synapse-based programming, synapses between artifacts are not implicitly defined by procedure calls but are explicitly programmed. Synapses represent the glue that binds together the interfaces of different artifacts. The basis for synapse-based programming is typically the so-called Observer design pattern [18]. The Observer pattern defines a one-to-many dependency between a subject object and any number of observer objects, so that when the subject changes state, all of its observers are notified and updated automatically. This kind of interaction is also known as publish/subscribe: the subject is the publisher of notifications, and it sends out these notifications without having to know who its observers are.
Fig. 9.2 Publish/subscribe model for implementing synapses
The strength of this event-based interaction style lies in the full decoupling in time, space and synchronization between publishers and subscribers [19]. Thus the relationship between subject and observer can be established at run time and this gives a lot more programming flexibility. In a UbiComp space (see for example the scenario outlined in Section 1.1), the Observer pattern can be applied as in the following diagram (see Fig. 9.2). The Coffee Maker, Blinds, and MP3 player are the observer objects. The Alarm Clock is the subject object. The Alarm Clock object notifies its observers whenever an awake event occurs to initiate the appropriate service. The observer pattern works like a subscription mechanism that handles callbacks upon the occurrence of events. Artifacts interested in an event that could occur in another artifact can register a callback procedure with this artifact. This procedure is called every time the event of interest occurs. The typical interfaces of software components have to be tailored for synapse-based programming — they have to provide subscription functions for all internal events that might be of interest to external artifacts. This part of the interface is often called the outgoing interface (associated with output plugs) of an artifact, as opposed to its incoming interface (associated with input plugs) that consists of all callable service procedures.
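A minimal Python rendering of this publish/subscribe interaction, using the wake-up scenario’s artifact names; the `Subject` API below is an assumption for illustration, not the chapter’s actual interface.

```python
# Observer pattern underlying synapse-based programming: a subject
# (publisher) notifies registered observers without knowing who they are.

class Subject:
    def __init__(self):
        self._callbacks = []            # the outgoing interface: subscriptions

    def subscribe(self, callback):
        """Register a callback for this subject's internal events."""
        self._callbacks.append(callback)

    def publish(self, event):
        """Notify every observer; the subject stays decoupled from them."""
        for cb in self._callbacks:
            cb(event)

log = []
alarm_clock = Subject()
alarm_clock.subscribe(lambda e: log.append(f"CoffeeMaker handles {e}"))
alarm_clock.subscribe(lambda e: log.append(f"Blinds handles {e}"))
alarm_clock.subscribe(lambda e: log.append(f"MP3Player handles {e}"))

alarm_clock.publish("awake")
assert log == ["CoffeeMaker handles awake",
               "Blinds handles awake",
               "MP3Player handles awake"]
```

Because subscriptions are established at run time, the relationship between subject and observers can be formed and dissolved dynamically, which is the flexibility the text attributes to this interaction style.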
5.2
An Example
The following example refers to the motivating scenario discussed earlier in Section 1.1. Fig. 9.3 depicts the internal structure of a composite artifact with the constituent artifacts, their properties and the established synapses. The composition uses two source artifacts (eBook, eChair), one sink artifact (eDeskLamp) and one simple artifact (eDesk). The interconnection is accomplished with three synapses between properties of the constituent artifacts. For example, the ReadingActivity property associated with the eDesk artifact depends on the input properties defined as
Fig. 9.3 An artifact composition implementing a UbiComp application
BookOnTop and ChairInFront; the latter have been derived as relational properties between eDesk and the pair of eBook and eChair artifacts, respectively (see Fig. 9.3). This example illustrates the definition of a simple UbiComp application that we may call the eStudy application. The scenario that is implemented is as follows: when the particular chair is near the desk, someone is sitting on it, a book is on the desk and the book is open, we may infer that a reading activity is taking place and adjust the lamp intensity according to the luminosity on the book surface. The properties and plugs of these artifacts are manifested to a user via the UbiComp Application editor tool [20], an end-user tool that acts as the mediator between the plug/synapse conceptual model and the actual system. Using this tool the user can combine the most appropriate plugs into functioning synapses, as shown in Fig. 9.3. Fig. 9.4 depicts the statechart diagram modelling the behavior of the participating artifacts in the eStudy. States and transitions for each artifact are shown, as well as the use of superstates for modelling the behavior of the ambient ecology as a whole.

Fig. 9.4 Statechart diagram modelling the behavior of eStudy participating artifacts

Note that the modelling of the behavior of the artifact/ambient ecology helps us to decide upon the distribution of properties to artifacts and the establishment of
synapses. For example, the states that refer to a relational property, like the ChairInFront property, identify the end-point plugs of a synapse. An example of an execution scenario for the above application may have the following sequence of states (the most recently defined property in each state is the last one listed):

k0: {eChair.Occupancy = TRUE; all other properties undefined}
Propagation applies for the Occupancy property.
k1: {eChair.Occupancy = TRUE; eBook.Opened = TRUE; all other properties undefined}
Propagation applies for the Opened property.
k2: {eChair.Occupancy = TRUE; eBook.Opened = TRUE; eDesk.ChairInFront = TRUE; all other properties undefined}
Derivation applies for the ChairInFront property, based on the propagated Occupancy property.
k3: {eChair.Occupancy = TRUE; eBook.Opened = TRUE; eDesk.ChairInFront = TRUE; eDesk.BookOnTop = TRUE; all other properties undefined}
Derivation applies for the BookOnTop property, based on the propagated Opened property.
k4: {eChair.Occupancy = TRUE; eBook.Opened = TRUE; eDesk.ChairInFront = TRUE; eDesk.BookOnTop = TRUE; eDesk.ReadingActivity = TRUE}
Evaluation applies for the ReadingActivity property, based on a simple rule-based formula.
k5: {eChair.Occupancy = TRUE; eBook.Opened = TRUE; eDesk.ChairInFront = TRUE; eDesk.BookOnTop = TRUE; eDesk.ReadingActivity = TRUE; eDeskLamp.Light = On}
Derivation applies for the Light property, based on the propagated ReadingActivity property.

Although the above example is rather simple, it does demonstrate many of the features of our definitions. From the example, we see that composite artifacts provide an abstraction mechanism for dealing with the complexity of a component-based application. In a sense a composite artifact realises the notion of a “program”; that is, we can build a UbiComp application by constructing a composite artifact.
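The eStudy execution sequence can be replayed in a short sketch that separates the three transition kinds; the property names come from the example, while the control flow is a deliberate simplification of the middleware’s behavior.

```python
# Replaying the eStudy execution: propagation copies plug values, derivation
# builds relational properties, evaluation applies the reading-activity rule.

state = {}

def propagate(prop, value):
    """Propagation: copy a source-artifact value into the shared state."""
    state[prop] = value

def derive():
    """Derivation: relational properties built from propagated properties
    (simplified: the real derivation also uses proximity/placement sensing)."""
    state["eDesk.ChairInFront"] = state.get("eChair.Occupancy", False)
    state["eDesk.BookOnTop"] = state.get("eBook.Opened", False)

def evaluate():
    """Evaluation: the rule-based formula for the composite property."""
    state["eDesk.ReadingActivity"] = (
        state.get("eChair.Occupancy", False)
        and state.get("eBook.Opened", False)
        and state.get("eDesk.ChairInFront", False)
        and state.get("eDesk.BookOnTop", False)
    )
    state["eDeskLamp.Light"] = "On" if state["eDesk.ReadingActivity"] else "Off"

propagate("eChair.Occupancy", True)   # k0, k1: propagation
propagate("eBook.Opened", True)       # k2 (precondition for derivation)
derive()                              # k2, k3: derivation
evaluate()                            # k4, k5: evaluation and lamp derivation
assert state["eDesk.ReadingActivity"] is True
assert state["eDeskLamp.Light"] == "On"
```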
6
The Supporting Framework

6.1
GAS-OS Middleware
To implement and test the concepts presented in the previous sections we have introduced the GAS-OS middleware, which provides UbiComp application designers and developers with a runtime environment for building applications from artifact components. We assume that a process for turning an object into an artifact has been followed [17]. Broadly, it consists of two phases: a) embedding the hardware
modules into the object and b) installing the software modules that will determine its functionality. The outline of the GAS-OS architecture is shown in Fig. 9.5 (adopted from [21], where it is presented in more detail). The GAS-OS kernel is designed to support accepting and dispatching of messages, managing local hardware resources (sensors/ actuators), and implementing the plug/synapse interaction mechanism. The kernel is also capable of managing service and artifact discovery messages in order to facilitate the formation of the proper synapses. The GAS-OS kernel encompasses a P2P Communication Module, a Process Manager, a State Variable Manager, and a Property Evaluator module which are briefly explained in Table 9.1. Extending the functionality of the GAS-OS kernel can be achieved through plug-ins, which can be easily incorporated into an artifact running GAS-OS, via the plug-in manager. Using ontologies, for example, and the ontology manager plug-in all artifacts can use a commonly understood vocabulary of services and capabilities in order to mask heterogeneity in context understanding and real-world models [22]. In that way, high-level descriptions of services and resources are possible independent of the context of a specific application, facilitating the exchange of information between heterogeneous artifacts as well as the discovery of services. GAS-OS can be considered as a component framework, which determines the interfaces that components may have and the rules governing their composition. GAS-OS manages resources shared by artifacts and provides the underlying mechanisms that enable communication (interaction) between artifacts. For example, the proposed concept supports encapsulation of the internal structure of an artifact and provides the means for composition of an application, without having to access any of the code that implements the interface. 
Thus, our approach provides a clear separation between computational and compositional aspects of an application, leaving
Fig. 9.5 GAS-OS modular architecture
Table 9.1 Modules in the GAS-OS Kernel

Communication Module (CM): The P2P Communication Module is responsible for application-level communication between the various GAS-OS nodes.

Process Manager (PM): The Process Manager is the coordinator module of GAS-OS; its main function is to monitor and execute the reaction rules defined by the supported applications. These rules define how and when the infrastructure should react to changes in the environment. Furthermore, it is responsible for handling the memory resources of an artifact and for caching information from other artifacts to improve communication performance when service discovery is required.

State Variable Manager (SVM): The State Variable Manager handles the runtime storage of the artifact’s state variable values, reflecting both the hardware environment (sensors/actuators) at each particular moment (primitive properties) and properties that are evaluated based on sensory data and P2P-communicated data (composite properties).

Property Evaluator (PE): The Property Evaluator is responsible for the evaluation of the artifact’s composite properties according to its functional schema. In its typical form the Property Evaluator is based on a set of rules that govern artifact transition from one state to another. The rule management can be separated from the evaluation logic by using a high-level rule language and a translator that translates high-level rule specifications to XML, which can then be exploited by the evaluation logic.
the second task to ordinary people, while the first can be undertaken by experienced designers or engineers. The benefit of this approach is that, to a large extent, the system’s design is already done, because the domain and system concepts are specified in the generic architecture; all people have to do is realize specific instances of the system. Composition achieves adaptability and evolution: a component-based application can be reconfigured at low cost to meet new requirements. The possibility of reusing devices for numerous purposes (not all accounted for during their design) provides opportunities for emergent uses of ubiquitous devices, where this emergence results from actual use.
6.2
ECA Rule Modeling Pattern
Event-Condition-Action (ECA) rules have been used to describe the behavior of active databases [23]. An active database is a database system that carries out prescribed actions in response to events generated inside or outside the database. An ECA rule consists of the following three parts:
● Event (E): the occurring event;
● Condition (C): the conditions for executing actions;
● Action (A): the operations to be carried out.
An ECA rule modeling pattern is employed to support autonomous interaction between artifacts that are represented as components in a UbiComp environment. The rules are embedded in the artifacts, which invoke appropriate services in the environment when the rules are triggered by some internal or external event. Following this design pattern, the applications hold the logic that specifies the conditions under which actions are to be triggered. The conditions are specified in terms of the correlation of events. Events are specified up front and types of events are defined in the ontology. The Process Manager (PM) subscribes to events (specified in the application logic), and the Property Evaluator (PE) generates events based on data supplied by the State Variable Manager (SVM) and notifies the Process Manager when the subscribed events occur. When the conditions hold, the Process Manager performs the specified actions, which could consist of, for example, sending messages through the P2P Communication Module (CM) and/or requesting an external service (e.g., toggling irrigation, calling a Web service, etc.). Consider, as an example, the smart plant application discussed in Section 1.1, which enables interactions similar to communication between plants and people. The main artifact is the ePlant. The ePlant decides whether it needs water or not using its sensor readings (e.g. thermistors and a soil moisture probe) and the appropriate application logic incorporated in it. A second artifact is a set of keys that is “aware” of whether it is in the house or not. If we assume that the user always carries her keys when leaving home, then the keys can give us information about whether the user is at home or not. User presence at home can be determined by using the Crossbow MICA2Dot mote [24] placed in the user’s key fob. When the user is at home, any signal from the mote can be detected by a base station and interpreted as presence. Fig. 9.6 depicts the flow of information between the middleware components applying the ECA pattern. The ECA rule defined for the ePlant artifact in the above application is:

E: PlantDryEvent
C1: location = HOME; A1: SendNotifyRequest(DRY_PLANT)
C2: location != HOME; A2: SendSMSRequest(DRY_PLANT)
The Location Plug actor in Fig. 9.6 represents the user location context supplied by the key artifact. The application requires interaction with a couple of artifacts that respond to the requests produced by the ePlant artifact, such as a notification device (e.g. TV, MP3 player) and a mobile phone for sending/receiving SMS messages corresponding to the DRY_PLANT code. By employing an ECA rule modeling pattern we can program applications easily and intuitively through a visual rule-editing tool. We can also modify the application logic dynamically, since the logic is described as a set of ECA rules and each rule is stored independently in an artifact.
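The dispatch logic described above can be sketched as follows. This is a minimal illustration in Python, not the GAS-OS implementation; the event and action names follow the rule shown, while `ProcessManager` and the callback signatures are assumptions made for the sketch.

```python
# Minimal sketch of the ECA pattern used by the ePlant artifact:
# the Process Manager holds subscribed rules; when the Property
# Evaluator raises an event, each rule's conditions pick an action.

def make_eplant_rule(location_provider, notify, send_sms):
    """Builds the ePlant rule: on PlantDryEvent, notify the user at
    home, otherwise send an SMS (names follow the rule in the text)."""
    def on_event(event):
        if event != "PlantDryEvent":       # E: subscribed event only
            return None
        if location_provider() == "HOME":  # C1 -> A1
            return notify("DRY_PLANT")
        return send_sms("DRY_PLANT")       # C2 -> A2
    return on_event

class ProcessManager:
    """Toy stand-in for the PM: maps raised events onto rules."""
    def __init__(self):
        self.rules = []

    def subscribe(self, rule):
        self.rules.append(rule)

    def handle(self, event):
        # Called when the Property Evaluator reports an event.
        return [rule(event) for rule in self.rules]

pm = ProcessManager()
actions = []
pm.subscribe(make_eplant_rule(
    lambda: "AWAY",                                   # user not at home
    notify=lambda code: actions.append(("notify", code)),
    send_sms=lambda code: actions.append(("sms", code))))
pm.handle("PlantDryEvent")
# With the user away, the rule routes the alert to SMS (action A2).
```

Because the conditions and actions are supplied as callbacks, the same rule shape can be rebound to different notification devices without changing the rule itself.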
Fig. 9.6 Interaction sequence in the smart plant application (sequence diagram: sensor devices supply temperature and soil moisture measurements to ePlant_SVM; ePlant_PE raises a Plant Dry Event to ePlant_PM, which handles the event and, given the user location from the Location Plug, sends a notify request if location = HOME or an SMS request otherwise via ePlant_CM)
228 C. Goumopoulos, A. Kameas
6.3 Tools
A toolbox complements this framework and facilitates the management and monitoring of artifacts, as well as other identified eEntities, which, when operating collectively, define UbiComp applications. The following tools have been implemented:
● The Interaction Editor, which administers the flexible configuration and reconfiguration of UbiComp applications by graphically managing the composition of artifacts into ambient ecologies, the interactions between them (in the form of logical communication channels) and the initiation of the applications (see Fig. 9.3);
● The Supervisor Logic and Data Acquisition Tool (SLADA), which can be used to view the knowledge represented in the Ontology, monitor artifact/ecology parameters and dynamically manage the rules taking part in the decision-making process, in co-operation with the rule editor;
● The Rule Editor, which provides a graphical design interface for managing rules, based on a user-friendly node-connection model. The advantage of this approach is that rules can be changed dynamically, and at a high level, without disturbing the operation of the rest of the system.
In Fig. 9.7 we show, as an example, the design of the NotifyUserThroughNabaztag rule for the wish-for-walk awareness application defined as part of our motivating scenario (see Section 1.1). The rule consists of two conditions combined with an AND gate. The first condition checks the incoming 'wish-for-walk' awareness state. The second condition checks whether the user to be notified is in the living room (a state inferred by an artifact, an instrumented couch). The rule, as designed, states that when both conditions are met the user is presented with the awareness information through an artifact called Nabaztag, as this object will probably be in his or her field of vision. Using a rule editor to define application business rules emphasizes system flexibility and run-time adaptability. In that sense, our system architecture can be regarded as a reflective architecture that can be adapted dynamically to new requirements. The decision-making rules can be configured by users external to the execution of the system. End-users may change the rules without writing new code. This can reduce the time-to-production of new ideas and applications to a few minutes. The power to customize the system is therefore placed in the hands of those who have the knowledge and need to do it effectively.
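Because rules of this kind are data rather than code, a rule editor can add, remove or rewire them at run time. A minimal sketch in Python of the NotifyUserThroughNabaztag rule as an editable data structure; the condition names, the state dictionary and the rule format are illustrative assumptions, not the editor's actual output format.

```python
# Sketch of a rule represented as data (as a rule editor might emit),
# so it can be swapped at run time without redeploying code.
# Condition names and the state dictionary are illustrative.

RULE = {
    "conditions": ["wish_for_walk", "user_in_living_room"],  # AND gate
    "action": "NotifyUserThroughNabaztag",
}

def evaluate(rule, state):
    """Returns the rule's action when every condition holds in `state`."""
    if all(state.get(cond, False) for cond in rule["conditions"]):
        return rule["action"]
    return None

state = {"wish_for_walk": True, "user_in_living_room": True}
assert evaluate(RULE, state) == "NotifyUserThroughNabaztag"

# Editing the rule is a data change, not a code change:
RULE["conditions"].append("user_is_awake")
assert evaluate(RULE, state) is None   # the new condition is not yet met
```

This is the sense in which the architecture is reflective: the decision logic that the system executes is itself an artifact the system (and the end-user) can inspect and modify.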
6.4 Implementation
The prototype of GAS-OS has been implemented in J2ME (Java 2 Micro Edition) CLDC¹ (Connected Limited Device Configuration), which is a very low-footprint Java runtime environment. The proliferation of end-systems, as well as typical computers, capable of executing Java makes Java a suitable underlying layer that provides a uniform abstraction for our middleware. The use of Java as the platform for the middleware decouples GAS-OS from typical operations such as memory management, networking, and so forth. Furthermore, it facilitates deployment on a wide range of devices, from mobile phones and PDAs to specialized Java processors. To date, GAS-OS has been tested on laptops, iPAQs, the EJC (Embedded Java Controller) board² and a SNAP board³. Both the EJC and SNAP boards are network-ready, Java-powered, plug-and-play computing platforms designed for embedded computing applications. The EJC system is based on a 32-bit ARM720T processor running at 74 MHz and has up to 64 Mb of SDRAM. The SNAP device has a Cjip microprocessor, developed by Imsys for networked, Java-based control; it runs at 80 MHz and has 8 Mb of SDRAM. The main purpose of porting our middleware to these boards was to demonstrate that the system can run on small embedded-Internet devices. The artifacts communicate using wired/wireless Ethernet, overlaid with TCP/IP and UPnP (Universal Plug and Play) middleware programmed in Java. The inference engine of the Property Evaluator is similar to a simple Prolog interpreter that operates on rules and facts and uses backward chaining with depth-first search as its inference algorithm.
We have implemented a lightweight Resource Discovery Protocol for eEntities (eRDP), where the term resource is used as a generalization of the term service. eRDP is a protocol for the advertisement and location of network/device resources. Three actors are involved in the eRDP:
1. the Resource Consumer (RC): an artifact that needs a resource, possibly with specific attributes, and initiates a resource discovery process for that purpose;
2. the Resource Provider (RP): an artifact that provides a resource and advertises the location and attributes of the resource to the Resource Directory, provided there is one;
3. the Resource Directory (RD): an artifact that aggregates resource information into a repository on behalf of less well-equipped artifacts.
The Resource Directory is an optional component of the discovery protocol; its aim is to improve the protocol's performance. In the absence of an RD, the Resource Consumers and Resource Providers implement all of the functions of the RD with multicast/broadcast messages, with the optional, non-deterministic use of a resource cache within each artifact. When one or more RDs are present (see Fig. 9.8), the protocol is more efficient, as an RC or RP uses unicast messages to the RDs.
¹ java.sun.com/products/cldc
² www.embedded-web.com/
³ www.imsys.se/documentation/manuals/snap_spec.pdf
Fig. 9.7 Designing the 'NotifyUserThroughNabaztag' rule for the wish-for-walk awareness application
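The backward-chaining inference attributed to the Property Evaluator can be illustrated with a tiny depth-first interpreter over propositional rules and facts. This is a generic sketch, assuming made-up fact and goal names; it is not the GAS-OS engine and omits Prolog features such as variables and unification.

```python
# A tiny backward-chaining interpreter with depth-first search over
# propositional rules and facts, illustrating the inference style
# described for the Property Evaluator (generic sketch, not GAS-OS).

FACTS = {"soil_dry", "temp_high"}
RULES = {
    # goal: list of alternative bodies; each body is a conjunction
    "plant_thirsty":   [["soil_dry", "temp_high"]],
    "raise_dry_event": [["plant_thirsty"]],
}

def prove(goal, facts=FACTS, rules=RULES):
    """True if `goal` is a known fact or some rule body proves it."""
    if goal in facts:
        return True
    for body in rules.get(goal, []):
        # Depth-first: recursively prove every subgoal of this body.
        if all(prove(subgoal, facts, rules) for subgoal in body):
            return True
    return False

assert prove("raise_dry_event")   # derived via plant_thirsty
assert not prove("plant_happy")   # an unprovable goal simply fails
```

Starting from the goal and working back towards the facts keeps the search focused on what the artifact actually needs to decide, which suits a resource-constrained runtime.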
Fig. 9.8 eRDP with an RD facility (message exchange among the Resource Provider (RP), Resource Directory (RD) and Resource Consumer (RC), comprising REQUEST(RD), RD_ADVERTISE(RD_spec), PUBLISH(res_spec), REQUEST(res_class, attr), REPLY(res_spec) and ACKNOWLEDGE(status) messages, carried over unicast or multicast/broadcast as appropriate)
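The directory-mediated part of the exchange in Fig. 9.8 can be sketched as follows. The message names follow the figure, but the in-memory directory and resource-spec dictionaries are illustrative stand-ins for the real RD artifact and its wire protocol.

```python
# Sketch of the eRDP exchange with a Resource Directory: providers
# PUBLISH resource specifications to the RD, and a consumer's
# unicast REQUEST is answered with a REPLY listing matching specs.
# In-memory stand-in, not the actual wire protocol.

class ResourceDirectory:
    def __init__(self):
        self.repo = []   # aggregated resource specifications

    def publish(self, res_spec):
        """PUBLISH(res_spec) from a Resource Provider."""
        self.repo.append(res_spec)
        return "ACKNOWLEDGE(ok)"

    def request(self, res_class):
        """REQUEST(res_class) from a Resource Consumer; the REPLY
        carries every spec whose class matches."""
        return [spec for spec in self.repo if spec["class"] == res_class]

rd = ResourceDirectory()
rd.publish({"class": "light", "plug": "eRDP:PLUG:CTI-eDLamp-ONOFF_PLUG"})
rd.publish({"class": "audio", "plug": "eRDP:PLUG:MP3-PLAYER"})

matches = rd.request("light")
assert matches[0]["plug"] == "eRDP:PLUG:CTI-eDLamp-ONOFF_PLUG"
```

Without an RD, the same `request` would have to be flooded over multicast/broadcast to every artifact; centralizing the repository is what lets consumers fall back to cheap unicast queries.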
Fig. 9.9 XML description of the eDeskLamp resource specification (fields include the resource name "light", the plug identifier eRDP:PLUG:CTI-eDLamp-ONOFF_PLUG, the address 150.140.30.5, the value 4758693030 and the lifetime "Never")
This service discovery protocol makes use of typed messages codified in XML. Each message contains a header part with common control information, including the local IP address, message sequence number, message acknowledgement number, destination IP address(es) and message type identification. kXML is used for parsing XML messages. Assume, for example, that the synapse between the eDesk and eDeskLamp artifacts is broken. When this happens, the system will attempt to find a new artifact having a plug that provides the service classified as "light". The eDesk system software is responsible for initiating this process by sending a service discovery message to other artifacts (whether or not an RD is present) that participate in the same application or are present in the surrounding environment. This type of message is predefined and contains the type of the requested service and the service's attributes. A description of the eDeskLamp resource specification is shown in Fig. 9.9.
When the system software of an artifact receives a service discovery message, it queries its local service repository to determine whether the artifact has a plug that provides the service "light". If the answer is positive, it returns the description of this service as a resource specification in reply. If the artifact itself does not provide such a service, the repository is checked to find whether another artifact, with which the queried artifact has previously collaborated, provides it. If not, the service discovery query message may be passed on to another artifact.
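The lookup chain just described (own plugs first, then previously collaborated artifacts, then forwarding the query) can be sketched as a small recursive search. The artifact dictionaries and field names are illustrative assumptions, not the actual GAS-OS system software.

```python
# Sketch of the lookup chain an artifact follows on receiving a
# service discovery message: 1) its own plugs, 2) artifacts it has
# previously collaborated with, 3) forwarding the query onwards.
# Illustrative data model, not the actual GAS-OS implementation.

def discover(artifact, service, visited=None):
    visited = visited if visited is not None else set()
    if artifact["name"] in visited:
        return None                       # avoid cycles when forwarding
    visited.add(artifact["name"])
    if service in artifact["plugs"]:      # 1. local repository hit
        return (artifact["name"], service)
    for peer in artifact["collaborators"]:  # 2./3. known peers, then on
        found = discover(peer, service, visited)
        if found:
            return found
    return None

lamp = {"name": "eDeskLamp", "plugs": {"light"}, "collaborators": []}
desk = {"name": "eDesk", "plugs": set(), "collaborators": [lamp]}

assert discover(desk, "light") == ("eDeskLamp", "light")
```

The `visited` set plays the role the real protocol needs in any flooding or forwarding scheme: without it, a broken synapse could send the same query around a loop of collaborating artifacts indefinitely.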
7 Related Work
Service composition in ubiquitous computing environments has been investigated mainly through the automatic or user-assisted composition of semantically annotated web services [25–27]. Two other important cases in this area are the approach developed by Fujitsu Laboratories and the University of Maryland, called Task Computing [28], and the approach developed by Xerox PARC, called Recombinant Computing [29]. In the former, the functionality of the environment is exposed as semantic web services, which the user can in turn discover and arbitrarily compose. In the latter, a model is used where each computational entity in the network is treated as a separate component. Central to this approach is the notion that users must be the final arbiters of the semantics of the entities they interact with, because applications cannot have a priori knowledge of all the devices they may encounter. There are systems that permit users to aggregate and compose networked devices for particular tasks [30]. However, those devices are not context-aware, acting more as service providers, e.g. web services, usually in the UPnP (Universal Plug and Play) style. Approaches to modelling and programming such devices for the home have been investigated, where devices have been modelled as collections of objects [31], as web services [32] and as agents [33]. However, there has been little work on specifying, at a high level of abstraction, how such devices would work together at the application level. A palpable assembly, a dynamic combination of devices and services with a programmatic representation that includes both a component and a connector, has been proposed [34]. This is similar to our conceptual model for programming ambient ecologies; however, our approach offers a complete framework consisting of an architectural style, a programming model, a supporting middleware and a toolset, as opposed to the architectural prototype of the palpable assembly concept.
Other research efforts emphasize the design of ubiquitous computing architectures. In the Disappearing Computer initiative, the project "Smart-Its" [35] aimed to develop small devices which, when attached to objects, enable their
association based on the concept of "context proximity". The objects are usually everyday devices (e.g. cups, tables, chairs) equipped with various sensors as well as a wireless communication module, such as RF or Bluetooth. The goal is to add smartness to real-world objects in a post-hoc fashion by attaching small, unobtrusive computing devices to them. While a single Smart-It is able to perceive context information from its integrated sensors, a federation of ad hoc connected Smart-Its can gain collective awareness by sharing this information. However, the "augmentation" of physical objects is not related in any way to their "nature"; thus, the objects tend to become mere physical containers for the computational modules they host.
8 Conclusions and Discussion
The ultimate goal of ambient ecologies is to serve people; this undoubtedly entails interaction with, and control by, users (dealing with errors, customizing settings, etc.). On the other hand, much of their management (e.g., configuration, handling of faults and adaptation to context) will be done autonomously, and people will not be aware of it. An ambient ecology may involve large, even enormous, populations of entities that deploy themselves flexibly and responsibly in a working environment. An entity may be a hardware device, a software agent or an infrastructure server; for some purposes it may be a human; it may also be an aggregation of smaller entities. The advent of ambient ecologies will change the way we conduct our everyday activities by gradually introducing artifacts that are able to perform local computation, to collaborate with each other and to interact in an adaptive way with the user. A research agenda is needed that will facilitate a user-centered evolution of this new (Ambient Intelligence) environment by defining the conceptual framework and developing an integrated component platform, tools and design methods for involving people. Research is also required across all layers, from infrastructure to applications. More specifically, the following multidisciplinary efforts are required to:
● Develop an open framework for conceptualizing the ecologies of devices and services. This framework may consist of a set of concepts implemented as an ontology, a description of capabilities implemented as basic and higher-level behaviors, and a novel interaction metaphor implemented as a language.
● Research adaptation mechanisms aimed at understanding how the properties of self-configuration, self-optimization, self-maintenance and robustness arise from, or depend upon, the behaviors, goals and self-* properties of individual artifacts, the interactions between them and the context of the application.
● Conceptualize heterogeneity by developing and testing theories of ontology alignment to achieve task-based semantic integration of heterogeneous devices and services.
● Understand the structure and behavior of ambient ecologies and design adaptable and evolvable ecology architectures.
● Develop the necessary components and services, including APIs, to interface with existing hardware modules and communication protocols, ontology-based knowledge representation and decision-making mechanisms, learning mechanisms, purpose-based border negotiation and privacy enforcement, and composable interaction components.
● Develop test-bed applications to demonstrate the capabilities of ambient ecologies.
Research must address these issues from three different perspectives:
1. The theoretical perspective, which focuses upon concepts and models that capture the behavior of ambient ecologies at varying levels of abstraction.
2. The engineering perspective, which focuses upon the architectural challenges posed by the heterogeneous and dynamic nature of their synthesis.
3. The experience perspective, which focuses upon how people might share a world with artifact ecologies.
As is the case with every new technology, the major issue that research and development efforts must address is that of adoption. People are usually reluctant to give up the habits and procedures they feel comfortable with unless the reward is high. UbiComp systems have great potential, but the application that will pave the way for their adoption has not been engineered yet. Things are made worse by the fear of privacy infringement developing among people as they become aware of the ability of novel artifacts to record, process and transmit huge volumes of information, much of it beyond people's direct perception or control. Ambient ecologies are complex systems; their global behavior results from local interactions between small collections of artifacts having some kind of property- or task-based proximity. Thus, the evolution of ecology behavior, or structure, cannot be programmed. It seems that people will have to learn to co-exist with complex artifact ecologies, which they will only influence, but not be able to command. Consequently, the major goal of our research is to build systems that are at the same time pro-active and understandable, transparent and adaptable, robust and evolvable; thus, they enable people to balance on the thin line between asking and acting.
Although much work remains to be done, in this work we have attempted to define ambient ecologies, to specify design patterns and programming principles, and to develop the infrastructure that provides a paradigm of application engineering and the tools to support ambient ecology designers, developers and end-users.
Acknowledgements Part of the research described in this chapter was conducted in the EU-funded e-Gadgets (IST-25240) and ASTRA (IST-29266) projects; the authors wish to thank their fellow researchers in those consortia.
References
1. Loke, S. W., 2006, Context-aware artifacts: two development approaches, IEEE Pervasive Computing, 5(2):48–53.
2. Bluetooth, 2008, The official Bluetooth Website, http://www.bluetooth.com/, accessed February 2008.
3. IEEE 802.15.4, 2003, IEEE Standard for Wireless Medium Access Control (MAC) and Physical Layer (PHY) Specifications for Low-Rate Wireless Personal Area Networks (LR-WPANs), IEEE Computer Society.
4. IEEE 802.11, 1997, IEEE Standard for Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specification, IEEE Computer Society.
5. Norman, D., 1999, The Invisible Computer, MIT Press.
6. Bergman, E., 2000, Information Appliances and Beyond, Morgan Kaufmann Publishers.
7. Wooldridge, M., and Jennings, N. R., 1995, Intelligent agents: theory and practice, Knowledge Engineering Review, 10(2):115–152.
8. Szyperski, C., 1998, Component Software: Beyond Object-Oriented Programming, ACM Press, Addison-Wesley, NJ.
9. Kameas, A., et al., 2003, An architecture that treats everyday objects as communicating tangible components, in: Proceedings of the First IEEE International Conference on Pervasive Computing and Communications (PerCom03), IEEE CS Press, pp. 115–122.
10. Wand, Y., and Weber, R., 1990, An ontological model of an information system, IEEE Transactions on Software Engineering, 16(11):1282–1292.
11. Bunge, M., 1977, Treatise on Basic Philosophy, Volume 3, Ontology I: The Furniture of the World, Reidel, Dordrecht.
12. Bunge, M., 1979, Treatise on Basic Philosophy, Volume 4, Ontology II: A World of Systems, Reidel, Dordrecht.
13. Goumopoulos, C., Christopoulou, E., Drossos, N., and Kameas, A., 2004, The PLANTS system: enabling mixed societies of communicating plants and artefacts, in: Proceedings of the 2nd European Symposium on Ambient Intelligence (EUSAI 2004), Springer LNCS 3295, pp. 184–195.
14. Fowler, M., and Scott, K., 1999, UML Distilled, Second Edition: A Brief Guide to the Standard Object Modeling Language, Addison-Wesley.
15. Harel, D., 1987, Statecharts: a visual formalism for complex systems, Science of Computer Programming, 8(3):231–274.
16. Norman, D. A., 1988, The Psychology of Everyday Things, Basic Books, New York.
17. Kameas, A., Mavrommati, I., and Markopoulos, P., 2005, Computing in tangible: using artifacts as components of Ambient Intelligence environments, in: Ambient Intelligence, Riva, G., Vatalaro, F., Davide, F., and Alcañiz, M. (eds.), IOS Press, pp. 121–142.
18. Gamma, E., Helm, R., Johnson, R., and Vlissides, J., 1995, Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, Reading, Mass.
19. Eugster, P., Felber, P., Guerraoui, R., and Kermarrec, A., 2003, The many faces of publish/subscribe, ACM Computing Surveys, 35(2):114–131.
20. Mavrommati, I., Kameas, A., and Markopoulos, P., 2004, An editing tool that manages device associations in an in-home environment, Personal and Ubiquitous Computing, Springer-Verlag, 8(3–4):255–263.
21. Drossos, N., Goumopoulos, C., and Kameas, A., 2007, A conceptual model and the supporting middleware for composing ubiquitous computing applications, Journal of Ubiquitous Computing and Intelligence, American Scientific Publishers (ASP), 1(2):174–186.
22. Christopoulou, E., and Kameas, A., 2005, GAS Ontology: an ontology for collaboration among ubiquitous computing devices, International Journal of Human-Computer Studies, 62(5):664–685.
23. Paton, N. W., and Diaz, O., 1999, Active database systems, ACM Computing Surveys, 31(1):63–103.
24. Crossbow MICA2DOT Data Sheet, Wireless Microsensor, Document Part Number: 6020-0043-04 Rev A, http://www.xbow.com/Products/Product_pdf_files/Wireless_pdf/6020-0043-04_A_MICA2DOT.pdf, accessed February 2008.
25. Higel, S., O'Donnell, T., and Wade, V., 2003, Towards a natural interface to adaptive service composition, in: Proceedings of the 1st International Symposium on Information and Communication Technologies, ACM Series, pp. 169–174.
26. Ben Mokhtar, S., Georgantas, N., and Issarny, V., 2005, Ad hoc composition of user tasks in pervasive computing environments, in: Proceedings of the 4th Workshop on Software Composition (SC 2005), Springer LNCS 3628, pp. 31–46.
27. Charif, Y., and Sabouret, N., 2006, An overview of semantic web services composition approaches, Electronic Notes in Theoretical Computer Science, 146(1):33–41.
28. Masuoka, R., Labrou, Y., Parsia, B., and Sirin, E., 2003, Ontology-enabled pervasive computing applications, IEEE Intelligent Systems, 18(5):68–72.
29. Edwards, W. K., Newman, M. W., and Sedivy, J. Z., 2001, The Case for Recombinant Computing, Technical Report CSL-01-1, Xerox Palo Alto Research Center, Palo Alto, CA.
30. Kumar, R., Poladian, V., Greenberg, I., Messer, A., and Milojicic, D., 2003, Selecting devices for aggregation, in: Proceedings of the IEEE Workshop on Mobile Computing Services and Applications, IEEE CS Press, pp. 150–159.
31. Jahnke, J. H., d'Entremont, M., and Stier, J., 2002, Facilitating the programming of the smart home, IEEE Wireless Communications, 9(6):70–76.
32. Matsuura, K., Hara, T., Watanabe, A., and Nakajima, T., 2003, A new architecture for home computing, in: Proceedings of the IEEE Workshop on Software Technologies for Future Embedded Systems, IEEE CS Press, pp. 71–74.
33. Ramparany, F., Boissier, O., and Brouchoud, H., 2003, Cooperating autonomous smart devices, in: Proceedings of the Smart Objects Conference (sOc'2003), pp. 182–185.
34. Ingstrup, M., and Hansen, K. M., 2005, Palpable assemblies: dynamic service composition for ubiquitous computing, in: Proceedings of the Seventeenth International Conference on Software Engineering and Knowledge Engineering (SEKE'2005), edited by William C. Chu et al., pp. 632–638.
35. Holmquist, L. E., Mattern, F., Schiele, B., Alahuhta, P., Beigl, M., and Gellersen, H.-W., 2001, Smart-its friends: A technique for users to easily establish connections between smart artifacts, in: Proceedings of the 3rd International Conference on Ubiquitous Computing (UBICOMP 2001), Springer-Verlag LNCS 2201, pp. 116–122.
Part VI
System-Level Challenges
Technology Limits and Ambient Intelligence
1.1 Summary
With the possible exception of Part I, this book has up to this point concentrated upon providing an overview of, and certain insights into, specific technology domains within what might be called an Ambient Intelligence (AmI) construct. The remaining sections of the book will seek to elaborate on the issues of creating AmI systems, and in particular building collaborative smart objects, from a number of perspectives, not the least of which will be how researchers can work together to achieve systems that many users will want to employ in their everyday lives. As well as summarizing technology domains, the early chapters also sought to draw linkages between these domains to illustrate the interdependence that exists within the process of creating and realising AmI to any effective, recognised threshold of performance. It is clear that, to achieve solutions with any longevity, a whole-systems perspective is a minimum requirement; in fact, it is more likely that we will need to create and manage a series of coherent co-innovation processes that evolve in some manner across all disciplines. This is acknowledged at a grand-challenge level in the roadmapping processes that have grown around the evolution of the AmI concept itself; of course, as noted previously, AmI has grown and evolved from the concepts of Pervasive and Ubiquitous Computing, which have their own vectors of progress. In practical terms, there are very many systems-level technical challenges that could be discussed here. However, in order to maintain a focus upon the whole-systems perspective, two particular areas are highlighted in this section: the issue of energy management for the type of smart systems we are discussing, and the challenge of creating and managing reliable performance at the systems level. The first of these issues is well established as a problem statement and thus as an opportunity for research.
The second, reliability, is an area that is likely to become a much more significant issue in the future as the functional scale of the foundation platforms for AmI (particularly at the sensor interface) grows and evolves.
1.2 Relevance to Microsystems
The scope of each chapter in this section of the book is taken from a perspective that aligns strongly with Microsystems technologies. The chapter on power management, in fact, acknowledges the use of Microsystems as tools to manage the natural resource limits present in local sensing entities, such as wireless sensor network nodes. This includes maximizing what can be achieved under these constraints, as well as accessing additional sources of energy in a potentially autonomous manner. Three such sources of energy, for which practical results have been demonstrated to date, are discussed: light, vibrations and thermal gradients. The chapter on reliability regards the challenge of whole-systems reliability from the perspective of the hardware infrastructure through which the networked embedded systems (from which we will compose AmI) will be built. This issue, at least at a whole-systems level, is at an earlier stage, and thus defining the most effective approaches remains elusive. The chapter advocates a proactive whole-systems design that will impact upon all aspects of the AmI infrastructure and will most likely drive the development of new forms of microsystems as part of this co-innovation process.
1.3 Recommended References
Power management and energy harvesting is a feature of research activities in both wearable computing and wireless sensor networking. A number of the following references offer reviews of selected investigative programmes and new technologies in these topics. In the domain of reliability there is a significant body of research knowledge on standards-led development for qualifying hardware technologies in numerous application domains (including consumer electronics such as mobile phones, automotive technologies, the aerospace industry, electronics for marine and maritime applications, etc.). These are very relevant, though quite focused. For those interested in a more systems-oriented approach, the publications of Professor Michael Pecht, founder of the Centre for Advanced Life Cycle Engineering (CALCE), are of relevance.
1. J. Paradiso, T. Starner, Energy scavenging for mobile and wireless electronics, IEEE Pervasive Computing, vol. 4, no. 1, Jan.–March 2005.
2. T. Starner, J. Paradiso, Human generated power for mobile electronics, in: Low Power Electronics Design, C. Piguet, ed., CRC Press, 2004, Chapter 45.
3. S. P. Beeby, M. J. Tudor, N. M. White, "Energy harvesting vibration sources for microsystems applications", Meas. Sci. Technol., 17 (2006), R175–R195.
4. D. P. Arnold, "Review of microscale magnetic power generation", IEEE Transactions on Magnetics, vol. 43, no. 11, 3940–3951 (2007).
5. S. Roundy, P. K. Wright, J. Rabaey, "A study of low level vibrations as a power source for wireless sensor nodes", Computer Communications, 26 (2003), 1131–1144.
6. M. Pecht, A. Dasgupta, D. Barker, C. Leonard, The reliability physics approach to failure prediction modelling, Quality and Reliability Engineering International, vol. 6, pp. 267–273, Sept.–Oct. 1990.
7. M. Pecht, Product Reliability, Maintainability, and Supportability Handbook, CRC Press, ISBN 0849394570, 1995.
Chapter 10
Power Management, Energy Conversion and Energy Scavenging for Smart Systems
Terence O'Donnell, Wensi Wang
Abstract An integral part of the vision for ambient intelligence is the use of large numbers of wireless sensors in a "deploy and forget" manner. The long-term provision of energy to wireless sensor nodes poses a challenge to this vision. Today the provision of power is almost exclusively by means of batteries. The trend in battery technology has been towards increasing energy densities; indeed, small form-factor batteries are available that have provided power to low-power electronic systems (e.g. wristwatches) for years. However, the power requirements of today's wireless sensor nodes are considerably higher than those of the typical wristwatch, and the lifetime envisaged for many applications certainly exceeds the lifetime possible with today's battery technology. In order to overcome the limitations on lifetime posed by the use of batteries as the energy source, the concept of energy harvesting has gained significant attention. This concept envisages the "harvesting" of energy from sources available in the environment, either to directly power the wireless sensor node or to augment its battery. Although many different sources of energy have been discussed, the sources for which practical results have been demonstrated are light, vibrations and thermal gradients. In this chapter the state of the art in each of these areas is reviewed with respect to their potential for powering wireless sensor nodes, or motes.
Keywords Energy Harvesting, Power Consumption, Wireless Sensor Networks, Light, Vibration, Thermal Gradients
1 Introduction
Energy harvesting is not a new concept: solar-powered calculators and wristwatches have been available for decades. For example, the first generation of solar-powered calculators was produced in 1978 by NEC and Sharp. Many light-powered wristwatches were also designed and manufactured in the early 1970s. The success of a product such as the solar-powered calculator highlights the attractiveness of the energy harvesting approach and the idea of "batteryless" power. Clearly, though, the use of photovoltaic cells to harvest solar energy is limited to applications where sufficient light is available, and the concept of energy harvesting has therefore expanded to encompass other ambient sources. Wristwatches, again, have been successfully powered by harvesting energy from the motion of the human arm and from body heat [1], [2]. However, with the recent explosion in the use of portable electronic equipment by consumers and the anticipated increase in the use of wireless sensors, the need to extend energy harvesting to higher-power devices has become significant. For example, a wristwatch may require only several microwatts of power, whereas a wireless sensor node may require tens to hundreds of microwatts. This has sparked interest in new devices and techniques that could harvest larger power levels. Several recent papers have been dedicated to reviewing potential sources of energy that could be harnessed to provide power for electronic systems [3], [4]. The more practical of these include environmental energy sources such as vibration, movement and thermal gradients. At this time the development of energy harvesting from such sources is at a relatively early stage. Many energy harvesting devices have been developed and demonstrated in laboratory conditions; some have been demonstrated powering wireless sensor systems; very few have been demonstrated in long-term field trials. The potential for energy harvesting to provide a long-term alternative to battery powering for wireless sensor nodes therefore remains to be proved.
Tyndall National Institute, University College Cork, Ireland
K. Delaney, Ambient Intelligence with Microsystems, © Springer 2008
In this chapter we first review the power system requirements for a wireless sensor node and discuss how energy harvesting fits into this system. The state of the art in the various energy harvesting techniques is briefly reviewed. Here we restrict the discussion to harvesting from light, vibration and thermal gradients, which are the techniques where practical results have been demonstrated. Detailed reviews of the state of the art have been provided in recent literature; thus, only a selection of the most relevant and advanced results are reviewed here. Where possible we also distinguish between devices that have been achieved using microfabrication techniques and those that have been built using conventional fabrication techniques. The achievement of devices that can be fabricated using batch MEMS fabrication is important for eventual cost reduction and widespread adoption. The power conversion techniques associated with converting the electrical energy from the harvesting device into a usable form are also discussed. As will be shown later, each energy harvesting technique has different requirements for this conversion. In the final section, we review the few recent wireless sensor systems that have successfully demonstrated powering by energy harvesting alone, and we draw some conclusions from the achievements to date in energy harvesting, discussing the future potential for such techniques.
10 Power Management, Energy Conversion and Energy Scavenging
2 Power Requirements
A typical wireless sensor node, such as the Mica mote or the Tyndall mote, will have a power consumption profile similar to that outlined in the diagram in Fig. 10.1 (top). The sensor will make measurements and transmit these with a certain application-defined duty cycle. During the measurement, there may be a relatively high current required, depending on the sensor and the data processing. Having processed the sensor measurement, the data must then be transmitted. In general, data transmission tends to require the most power and this is highlighted by the sharp peaks in the diagram in Fig. 10.1 (bottom). Between transmissions the node should be in an ultra-low-power sleep mode. The difference between power consumption in sleep mode and in active transmission mode can be several orders of magnitude. However, any energy harvesting device must in principle only supply the average power, which is determined by the relative times spent in the active modes (measurement, processing and transmitting) and in sleep mode. For the example in the diagram, which has a transmission duty cycle of 0.1% (that is, for example, transmission for 50 ms, receiving for 50 ms and sleep for 99.9 s), the peak power is approximately 32 mW but the average is only 0.07 mW. Thus, with sufficient energy storage capability, an energy harvesting device that produces an average power in excess of this can potentially power the node. It is vitally important that the energy storage is chosen so that it has the ability to supply the peak power without significant voltage drop. In general, a system for harvesting energy in order to supply the power requirements of a sensor node must consist of several parts: the energy harvesting device itself, a power conditioning circuit and an energy storage mechanism. A block diagram of the power system in a wireless sensor node supplied by an energy harvesting device is illustrated in Fig. 10.2.
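The duty-cycle averaging described above can be sketched numerically. A minimal sketch follows; the 30 µW sleep power is an assumed illustrative figure (the text does not state it), chosen only to show how a 32 mW peak with a 0.1% duty cycle yields an average of a few tens of microwatts.

```python
def average_power_mw(p_active_mw, t_active_s, p_sleep_mw, period_s):
    """Duty-cycled average power: energy over one period divided by the period."""
    t_sleep_s = period_s - t_active_s
    return (p_active_mw * t_active_s + p_sleep_mw * t_sleep_s) / period_s

# Numbers loosely based on the text: ~32 mW active for 0.1 s every 100 s,
# with an assumed 30 uW (0.030 mW) sleep power.
avg = average_power_mw(32.0, 0.1, 0.030, 100.0)  # ~0.06 mW average
```

This is why the energy store, not the harvester, must supply the peak: the harvester only needs to exceed the small average figure.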
The energy harvesting device is a photovoltaic cell in the case of solar harvesting, an electromechanical device in the case of vibrations and a thermoelectric pile in the case of harvesting from thermal gradients. However, the energy generated by such devices is rarely in the form of a DC voltage that can be directly used by an electronic system. Therefore, a power conditioning circuit is generally required in order to convert the electrical energy from the harvesting device to a usable DC voltage level. In the simplest case, where the harvesting device generates a DC voltage, such as in the case of solar cells, this may just be a diode and a regulator. For harvesting from vibrations, the electrical energy will typically be in the form of an AC voltage and therefore a rectifier and a voltage step-up or step-down circuit is required. In the case of thermoelectric generation a voltage converter may also be required to step up the low-level DC voltage that is generated. It is obviously very important that the efficiency of this power conditioning circuitry is high in order not to dissipate the harvested energy. As shown in Fig. 10.1 above, there may be orders of magnitude of difference between the peak and average power consumption of a sensor node. Therefore, an energy storage mechanism is required as a buffer between the energy harvesting
Fig. 10.1 Typical power consumption of the Tyndall mote: (top) power consumption and generation [mA] versus time [s], showing the power consumption of the wireless sensor node, the energy generation from EH devices, the average power consumption, and the average power consumption with a larger duty cycle; (bottom) current consumption in active mode [mA] versus time [ms], showing the Rx-mode and Tx-mode peaks
Fig. 10.2 Block diagram of the power system in a wireless-sensor node supplied by an energy harvesting device: the energy harvester (light, vibrations or heat) feeds a power conversion stage (AC-DC, step-up/step-down, regulator), an energy store (capacitor or battery) and, via an optional regulator, the loads (sensors, µ-processors, transceiver). The various options for the requirements of each block are detailed. The actual requirements will depend on the harvesting device used, and the application
device and the load. This energy storage will typically be either a capacitor or a rechargeable battery. The simplest approach, which provides the longest lifetime, is the use of a capacitor. Such a system must be capable of self-starting from a zero stored energy situation, which might occur if the energy harvesting source unexpectedly disappears. The use of a battery provides a longer term (higher energy storage capacity) storage solution at the expense of perhaps more complex conversion (e.g. battery charging circuits) and reduced lifetime due to limited battery charging cycles. The loads, which will typically consist of the sensors, the microprocessor and the transceiver, are then connected either directly, or through a further regulator or dc-dc converter, to the energy storage. Some sensor loads may be sensitive to supply voltage variation and therefore may require the use of a regulator. Although it is desirable, for the sake of system simplicity and efficiency, to operate the entire system from a single voltage level, this may not always be possible; extra conversion may be required between the load and the energy storage. The actual requirements for each block in Fig. 10.2 will depend upon the energy harvesting techniques used and upon the application requirements. In the next sections the various energy harvesting techniques are reviewed in more detail and the requirements for each of the blocks in Fig. 10.2 are discussed for each approach.
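The requirement that the store supply the peak power without significant voltage drop can be turned into a rough capacitor-sizing sketch using the constant-current droop approximation C = I·t/ΔV. The peak current, active time and allowable droop below are assumed illustrative values, not figures from the text.

```python
def storage_cap_farads(i_peak_a, t_active_s, v_droop_max_v):
    """Capacitance needed so a constant peak current drawn for t_active
    droops the capacitor voltage by no more than v_droop_max (C = I*t/dV)."""
    return i_peak_a * t_active_s / v_droop_max_v

# Assumed: 40 mA peak for a 100 ms active burst, 0.3 V allowable droop.
c = storage_cap_farads(40e-3, 0.1, 0.3)  # ~13 mF
```

A supercapacitor in the tens of millifarads would comfortably meet this; a small ceramic capacitor clearly would not, which is why the storage block matters even when the average power budget is tiny.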
3 Energy from Light
The photoelectric effect was first observed in the 1830s; however, it was the 1950s before the first solar panels were developed by Bell Labs, using crystalline silicon as the material. Since then a wide variety of materials have been investigated for solar cells; despite this, crystalline silicon still remains by far the most common material, accounting for approximately 90% of all solar cell production in 2001 [5].
Table 10.1 Record efficiencies of solar cells [8]

Material / Technology               Record Efficiency   Year
Si (crystalline)                    24.7 ± 0.5 %        1999
Si (multi-crystalline)              20.3 ± 0.5 %        2004
Si (thin film)                      16.6 ± 0.4 %        2001
Si (amorphous)                      9.5 ± 0.3 %         2003
GaAs (crystalline)                  25.1 ± 0.8 %        1990
GaAs (thin film)                    24.5 ± 0.5 %        2005
CIGS                                18.4 ± 0.5 %        2001
CdTe                                16.5 ± 0.5 %        2001
Dye sensitized                      10.4 ± 0.3 %        2005
Organic polymer                     3.0 ± 0.1 %         2006
Tandem GaInP/GaAs/Ge                32.0 ± 0.5 %        2003
Tandem a-Si/µc-Si (sub-module)      11.7 ± 0.4 %        2004
The power available from solar cells varies widely depending upon the illumination level (indoors or outdoors) and upon the solar cell technology. The efficiencies of various solar cell technologies at different illumination levels have been reported in [6]. For a typical outdoor illumination level of 500 W/m2 (bright, sunny day), efficiencies vary from approximately 15% (for polycrystalline silicon and gallium indium phosphide) to 2–5% for amorphous silicon cells. Indoors, the illumination levels are significantly decreased and hence the power generation capability of the solar method is reduced. For typical indoor illumination levels of 1 W/m2, efficiencies vary from approximately 10% for crystalline silicon and gallium indium phosphide, through 5% for amorphous silicon, to approximately 2% for cadmium telluride. From the point of view of assessing the suitability of solar energy for powering wireless sensor nodes, it is of interest to consider the typical power density in µW/mm2 achievable for outdoor and indoor applications. Assuming an outdoor illumination level of 500 W/m2 and an efficiency of 15%, a typical power density of 75 µW/mm2 could be expected. For indoor applications, a typical illumination level of 10 W/m2 and an efficiency of 10% will give a power density of 1 µW/mm2. This simple calculation highlights the significant reduction in solar cell capability for indoor applications. A recent paper published by a group of researchers at the University of Texas presents an indoor solar energy harvesting system for the powering of sensor network router nodes [7]. This solar energy powered sensor network node utilized eight mono-crystalline solar panels, with the size of each individual solar panel being 9.5 cm by 6.35 cm, providing a total area of 48260 mm2. The light source was a normal office overhead 34 W fluorescent light and the distance between the solar panels and the light source was approximately 30 cm.
With the conditioning circuits, the solar panel system generated a steady 14 mA current at a 3.24 V DC potential, compared to the required 11.8 mA at 3 V DC power consumption of the wireless sensor router node. The power density of this system is approximately 1.06 µW/mm2.
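The power-density figures above translate directly into the panel area required for a given load. The sketch below uses the text's approximate densities (75 µW/mm² outdoors, 1 µW/mm² indoors) and an assumed 70 µW average load, chosen to match the scale of the duty-cycled node discussed earlier.

```python
def panel_area_mm2(p_required_uw, power_density_uw_per_mm2):
    """Solar panel area needed to supply a given average load power."""
    return p_required_uw / power_density_uw_per_mm2

indoor_mm2 = panel_area_mm2(70.0, 1.0)    # ~70 mm^2 indoors
outdoor_mm2 = panel_area_mm2(70.0, 75.0)  # under 1 mm^2 outdoors
```

The two-orders-of-magnitude gap between the indoor and outdoor areas is the quantitative form of the "significant reduction in solar cell capability for indoor applications" noted above.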
Cost reduction and increases in efficiency remain the primary focus of research efforts in solar generation. Cost reduction is obviously a requirement in order to achieve widespread use of large-scale solar electricity generation. Record efficiencies for solar cells are generally greater than the typical efficiencies quoted above and measured in [6]. Table 10.1 summarizes the record efficiencies [8] that have been achieved for the various solar technologies. It can be seen from this that the power densities quoted above could increase by a factor of two in the near future. Although solar power certainly has the potential to provide an everlasting energy source for wireless sensor nodes in certain applications, there are many applications where the use of solar power is not practical, such as in poorly lit or dark indoor areas. As a result, there has been significant interest in the extraction of energy from other sources, such as vibrations, which is discussed in the next section.
4 Energy from Vibrations
Ambient vibrations are present in many environments, including automotive applications, buildings (pumps, fans, and air conditioning units), structures (bridges, railways), industrial machines and household appliances. The energy present in vibrations can be converted to electrical energy for the powering of electronic systems using a suitable mechanical-to-electrical energy converter, or generator. These generators typically make use of electromagnetic, electrostatic or piezoelectric principles in order to achieve the conversion. A detailed review of recent work in vibration generators is given in [9]. Most devices demonstrated to date have been macro-scale devices, assembled from discrete sub-components, or hybrid devices, which have been assembled from some discrete and some micro-fabricated components. There is significant interest in the micro-fabrication of the entire device, as this leads to batch fabrication and cost reduction. However, to date there are only a few examples of vibration generators fabricated entirely using micro-fabrication techniques. Electromagnetic generators make use of Faraday’s law of electromagnetic induction. The energy in the environmental vibration is used to make a magnetic mass (a permanent magnet) move relative to a coil, thus inducing a voltage and causing a current to flow in the attached electrical load. Piezoelectric generators make use of the piezoelectric properties of some materials, which develop a voltage when stressed. The vibration is used to stress the piezoelectric element, thus developing a voltage that can be extracted as electrical energy. Electrostatic generators generally make use of the vibration energy to pull apart the plates of a charged capacitor, against the force of electrostatic attraction, thus converting this vibration energy to energy stored in the capacitor’s electric field.
The majority of the vibration generators that have been proposed consist of linear generators where the vibration is used to excite linear motion in a mass. In order to understand some of the characteristics of such linear generators it is worth briefly reviewing the simple model for such a generator and the equations obtained from this model for the power generation.
Fig. 10.3 A schematic representation of a mass-spring-damper system, which approximates a vibration generator: a mass m, suspended from the housing by a spring k, moves with displacement x(t) relative to the housing displacement y(t), opposed by the dampers De and Dp
The typical vibration-to-electrical energy generator can be treated as a single degree of freedom mass-spring-damper system, as proposed by Williams [10] and depicted in Fig. 10.3. In this case the generator can be considered to consist of a housing, which is mounted on the vibration source and which has a displacement y(t). A mass, m, is connected to the housing via a spring, k, and has a displacement x(t) relative to the housing. The motion of the mass is opposed by dampers, De and Dp, which exert a damping force on the mass. The damper De represents the energy conversion mechanism by which useful electrical energy is extracted from the system. However, there will also be unwanted or parasitic damping present, due to factors such as air damping, friction and material loss, and this is represented by Dp. The equation of motion of the mass, when the housing is excited by a sinusoidal force, F(t), can be described by:

m \ddot{x} + (D_e + D_p) \dot{x} + k x = -m a(t) = F(t) = F_0 \sin \omega t    (1)
where a(t) is the acceleration of the housing. Note that here it is assumed that the damping forces, both electrical and parasitic, are proportional to velocity, which is not always the case. For an electromagnetic conversion principle the damping force is proportional to the velocity. This may also be the case for the piezoelectric generator, but for the electrostatic principle the electrical damping force is constant. A detailed treatment of the differences in the generators is given by Mitcheson et al. in [11]. The damping forces for the piezoelectric and electrostatic cases are discussed in detail by Roundy in [12]. However, for the sake of illustrating some of
the characteristics of the vibration generators, this assumption is retained here. This equation has a solution for the displacement of the mass, which is given by:

x(t) = \frac{F(t)}{\sqrt{(k - m\omega^2)^2 + (D_e + D_p)^2 \omega^2}}    (2)
Note that such a system has a natural resonant frequency given by \omega_n = \sqrt{k/m}, and that the displacement is a maximum at this resonant frequency. Therefore, the majority of vibration-based generators attempt to match, through their design, the resonant frequency of the generator with the frequency of the source vibrations in order to achieve maximum displacement of the mass. The damping factors can also be expressed as damping ratios, \zeta = D/(2m\omega_n). Using this notation, the maximum power generated (the mechanical power associated with the moving mass), P, when the mass is vibrating at its mechanical resonant frequency, can be expressed as:

P = \frac{m \omega_n^3 Y^2}{4\zeta}    (3)
where \zeta is the total damping ratio, consisting of the useful damping due to the extraction of electrical power and the parasitic damping due to mechanical damping in the structure, and Y is the amplitude of the source vibration displacement. This equation suggests a dependence of the power generated on the cube of the frequency. However, in reality the amplitude of the source vibration is likely to be significantly smaller at higher frequency, so that this dependence on the cube of frequency is not realizable in practice. A more practical way to express the dependence of power is to make use of the fact that, for a sinusoidal source vibration, the amplitude of the acceleration, a, can be written as \omega^2 Y, and the above equation can be rewritten as:

P = \frac{m a^2}{4 \zeta \omega}    (4)
This highlights the fact that the generated power is proportional to the moving mass and to the square of the acceleration, and is inversely proportional to the frequency. This equation gives the power generated, but only the fraction of this dissipated in the electrical damping can be extracted. If the damping ratio is split into its electrical part, \zeta_e, and its parasitic part, \zeta_p, then the electrical power can be expressed as:

P_e = \frac{\zeta_e m a^2}{4 \omega (\zeta_e + \zeta_p)^2}    (5)
This equation has a maximum when the electrical damping is made equal to the parasitic damping. In general the electrical damping can be controlled by the load conditions of the generator and this in turn suggests that there are optimum load conditions for which the electrical power generation is maximized. Clearly the actual level of power that can be generated from vibrations depends upon the acceleration and frequency of the vibration source and also upon the size
of the vibration generator. The generator size determines the mass and also the maximum allowable displacement of the mass. This maximum is not accounted for in the above equation and may place a restriction on the power that can be extracted using a particular generator design. There have been many different generator devices described in the literature in recent years, each reporting power output for different vibration inputs and different generator sizes. Equation (4), however, provides a basis for comparing all generators in terms of power density normalized by the square of the input vibration acceleration. Thus, for vibration generators the figure-of-merit of most interest is the normalised power density; that is, the power density scaled by the square of the source vibration acceleration. This figure will be quoted in the sections below as a means of comparing different generators. In practice, however, the assumption that the output power depends on the square of the acceleration should be treated cautiously. One reason for this is that the parasitic damping in (4) is assumed to be constant; in practice it is likely to depend upon the magnitude of the displacement, and hence upon the acceleration. The result may be increased damping at the larger displacements resulting from higher acceleration. Moreover, at higher displacements the spring constant for the system may not be linear; a cubic term, k_3 x^3, may be introduced into the spring relationship and hence into equation (1). This can result in a decrease in the resonant frequency (spring softening) or an increase in the resonant frequency (spring hardening), depending upon whether k_3 is negative or positive, respectively. In such cases the displacement versus frequency curve for the generators can have hysteretic behaviour and can show an abrupt change in displacement [13].
In fact, this effect can be observed in some of the displacement versus frequency curves for the generators reported in the literature.
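The optimum-damping result of equation (5) can be checked numerically with a short sketch; the mass, acceleration, frequency and parasitic damping ratio below are assumed illustrative values, not parameters of any device in the text.

```python
import math

def electrical_power_w(m_kg, a_ms2, omega_rad_s, zeta_e, zeta_p):
    """Eq. (5): extractable electrical power of a resonant vibration generator."""
    return zeta_e * m_kg * a_ms2**2 / (4.0 * omega_rad_s * (zeta_e + zeta_p)**2)

# Assumed: 1 g moving mass, 1 m/s^2 source acceleration at 50 Hz,
# parasitic damping ratio of 0.01.
m, a, w, zp = 1e-3, 1.0, 2 * math.pi * 50, 0.01

# Power is maximised when the electrical damping equals the parasitic damping,
# at which point Pe = m*a^2 / (16*omega*zeta_p).
p_matched = electrical_power_w(m, a, w, zp, zp)
```

Sweeping the electrical damping either side of the matched point confirms the maximum, which is the basis of the "optimum load conditions" remark in the text.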
4.1 Electromagnetic Generators
The basic principle of the electromagnetic generator involves making use of the vibrations in order to make a permanent magnet move relative to a coil, thus creating a time-changing magnetic field with respect to the coil and inducing a voltage according to Faraday’s law of induction:

V = -N \frac{d\phi}{dt}    (6)

where V is the voltage induced in the coil, N is the number of turns in the coil, and φ is the flux linking a coil turn. Assuming that the relative motion between coil and magnet is only in one dimension (the x direction), then this can be re-written as:

V = -N \frac{\partial \phi}{\partial x} \frac{\partial x}{\partial t}    (7)
which highlights the fact that the important parameters for maximising voltage generation are the number of coil turns, the flux gradient in the direction of motion and the velocity. There have been many electromagnetic energy harvesting devices proposed and tested. These have been reviewed in a recent work by D. P. Arnold [14], who compared the history and current state of the art and discussed the challenges associated with compact magnetic power generation systems in the microwatts-to-tens of watts power range. Electromagnetic vibration generators, which have been demonstrated, range from relatively large devices of size 500 cm3 capable of generating 95 mW from 0.4g, 6 Hz vibrations [15], [16] to small, 0.025 cm3 micro-fabricated devices that have generated some 300 nW from 6.6 kHz, 3.9g vibrations [17]. The highest normalised power density for a vibration generator has been reported by Beeby et al. [18] using the device pictured in Fig. 10.4. This device consists of a wire-wound coil sandwiched between magnets, which are arranged side by side with alternating polarity as shown in Fig. 10.4. The magnets are arranged in this opposite pole configuration so as to maximise the flux gradient seen by the coil as it moves from the central position. The magnets are attached to one end of a beryllium copper beam with the other end of the beam fixed to the generator base. The dimensions of the beam are designed so as to match the resonant frequency of the beam plus magnet to the input vibration frequency, which was approximately 50 Hz in this case. This device was capable of generating 46 µW from a very low input vibration level of 0.06g at 52 Hz. This device had a normalised power density of 883 µW/mm3 at 1 m/s2. In general it can be observed from work to date that the highest power densities have been achieved by generators that have been assembled using conventional discrete parts (wire-wound coils and bulk, sintered magnets). 
There have been no reports to date of fully micro-fabricated generators where both the coil and the magnets are micro-machined. However, there have been several reports of partially micro-fabricated electromagnetic generators, generally using micro-fabricated coils
Fig. 10.4 An electromagnetic vibration generator from [18] pictured beside a one euro cent coin.
and bulk magnets [19–22]. This includes contributions by Williams et al., Mizuno et al., Kulah et al., Serre et al. and Kulkarni et al. The highest power levels achieved using this approach have been reported by Serre et al. [22], who achieved 55 µW for a 1.35 cm3 device at a frequency of 380 Hz; Serre et al. used a device that had an electroplated copper coil and bulk NdFeB magnets glued to a polyimide membrane. This corresponds to a normalized power density of approximately 1.3 µW/mm3 at 1 m/s2. In this case the voltage generated by the device was approximately 110 mV. It is also worth noting that, in general, the voltage generated by micro-fabricated devices is often less than 100 mV. There are several reasons why the performance of micro-fabricated electromagnetic generators has been poor. Micro-fabricated coils tend to be planar, which means that achieving a large number of turns in a small area necessarily means a small coil cross-section and consequently a high coil resistance. In fact, it can be shown [23] that for a micro-fabricated planar coil the resistance is proportional to the cube of the number of turns, which means that increasing the number of coil turns can result in a degradation of the power output of the device. There is a trade-off between maximising voltage, which requires a high number of turns, and maximising power output, which requires low resistance. This problem could be solved, in part, by the use of multiple layers of coils in order to achieve a large number of turns, although the use of a large number of layers would require a large number of process steps. The prospects for achieving significant power levels from fully micro-fabricated electromagnetic generators are further hindered by the limited availability of high quality, micro-fabricated (sputtered or electroplated) permanent magnets.
It is the case that the properties (coercivity, Hc, remanence, Br, and energy density, BH) achievable from micro-fabricated magnets are considerably lower than those achievable from bulk-sintered rare-earth magnets, such as samarium-cobalt or neodymium-iron-boron. For example, the highest coercivity and remanence reported for a 90 µm thick deposited magnet are 2.0 kOe and 0.5 T [24]. This compares to values of around 10 kOe and 1 T for a bulk magnet. Moreover, the properties tend to degrade with thickness, so that thick (tens of microns) micro-fabricated magnets with good properties are difficult to achieve.
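The planar-coil trade-off cited from [23] can be made concrete: if open-circuit voltage scales with N but coil resistance scales with N³, matched-load power falls as 1/N. The per-turn voltage and resistance constants below are assumed for illustration only.

```python
def matched_load_power_w(n_turns, volts_per_turn=1e-3, r_one_turn_ohm=1.0):
    """Illustrative planar-coil scaling: V_oc ~ N, R_coil ~ N^3 [23], so the
    maximum power into a matched load, V^2/(4R), falls as 1/N.
    Both scaling constants are assumed values, not measurements."""
    v_oc = volts_per_turn * n_turns
    r_coil = r_one_turn_ohm * n_turns**3
    return v_oc**2 / (4.0 * r_coil)

powers = [matched_load_power_w(n) for n in (10, 50, 100)]  # monotonically falling
```

This is the quantitative form of the voltage-versus-power trade-off described above: adding turns raises the voltage but degrades the deliverable power.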
4.2 Piezoelectric Generators
Piezoelectric materials have the ability to convert applied mechanical stress/strain into a voltage/electric field due to the direct piezoelectric effect. In 1880, Pierre and Jacques Curie first demonstrated this piezoelectric effect in certain crystals, such as Rochelle salt and quartz. Inversely, piezoelectric materials can convert an applied voltage/electric field into mechanical stress/strain, which is known as the converse (or indirect) piezoelectric effect. The piezoelectric property of a material can be defined by several constants. One of the most important is the piezoelectric strain constant, d, defined by:
d = \frac{\text{short-circuit charge density}}{\text{applied stress}} \quad [\mathrm{C/N}]    (8)
These constants are generally anisotropic and have different values for the different axes of the material. The diagram in Fig. 10.5 illustrates how this constant applies for the 3-1 direction, which is commonly used in piezoelectric energy harvesting devices. In this case the stress, or force, is applied in the 1 direction and the voltage is developed in the 3 direction, so that the constant of relevance is referred to as the d31 constant. The detailed analytical equations and definitions for the various constants are defined in an IEEE standard [25]. Two of the most common industrial piezoelectric materials are polyvinylidene fluoride (PVDF) and lead zirconate titanate (PZT), which have d31 constants of 23 and −274 pC/N, respectively. The piezoelectric effect is most commonly exploited in vibration generators by attaching a mass to one end of a beam composed wholly or partially of piezoelectric material, with the other end of the beam fixed to the generator housing. With the resonant frequency of the beam matched to the vibration source frequency, the beam will vibrate, causing stress on the piezoelectric material and hence generating a voltage. A typical example is the work described in [26], which consisted of a cantilever composed of a brass shim sandwiched between two PZT-5H beams, with a tungsten mass attached to the end (see Fig. 10.6). The total volume of this prototype was 1 cm3 and the prototype was capable of generating a maximum of 375 µW of

Fig. 10.5 Piezoelectric element in the 3-1 mode

Fig. 10.6 Piezoelectric vibration generator implemented using a PZT beam, after [26]
load power from a 2.5 m/s2 acceleration at 120 Hz frequency. This corresponds to a normalized power density figure of approximately 60 µW/mm3 at 1 m/s2. The piezoelectric generator is compatible with micro-fabrication, using thin film piezoelectric materials such as AlN, which can be deposited by sputtering, and PZT, which can be deposited by sol-gel techniques. Examples of fully micro-fabricated piezoelectric generators include the work of Jeon et al. [27] and Marzencki et al. [28]. The micro-fabricated piezoelectric generator reported in [28] was developed within the European Union Framework 6 programme VIBES (Vibrational Energy Scavenging). This device consists of a Silicon-on-Insulator (SOI) wafer, where the top silicon layer is heavily doped to form the bottom electrode of the piezoelectric material. The piezoelectric material is then sputter deposited and a top aluminium layer is also sputter deposited to form the top electrode. Deep Reactive Ion Etching (DRIE) is used to etch the top and the bottom silicon layers to form the mass, beam and housing. The device achieved an output power of 700 nW from a 1 g acceleration at approximately 1.5 kHz frequency. Although the power output of the device is low, the device dimensions were only 2 mm × 2 mm × 0.5 mm and it thus achieved a normalized power density figure of approximately 3.6 µW/mm3 at 1 m/s2.
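As a small worked example of equation (8), the short-circuit charge delivered by a stressed piezoelectric element can be estimated from its d31 constant. The stress level and electrode area below are assumed for illustration; only the |d31| value for PZT comes from the text.

```python
def short_circuit_charge_c(d31_pc_per_n, stress_pa, electrode_area_m2):
    """Eq. (8) rearranged: short-circuit charge density = d31 * applied stress,
    then multiplied by the electrode area to give total charge in coulombs."""
    charge_density_c_per_m2 = d31_pc_per_n * 1e-12 * stress_pa
    return charge_density_c_per_m2 * electrode_area_m2

# |d31| ~ 274 pC/N for PZT (text); assumed 10 MPa stress over a 1 cm^2 electrode.
q = short_circuit_charge_c(274, 10e6, 1e-4)  # ~0.27 uC per stress cycle
```

Cycling this charge at the beam's resonant frequency is, in essence, what the cantilever harvesters above do.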
4.3 Electrostatic Generators
The basic principle of electrostatic energy harvesting relies on a vibration-dependent variable capacitance. A capacitor can be formed between two separated, initially-charged parallel metal plates, as shown in Fig. 10.7. If mechanical force is applied by the external vibration to pull the plates apart, then the mechanical work is stored in the field between the plates and can be extracted at the point where the plates are a maximum distance apart. The process can be explained by the basic equations of capacitance, voltage and stored energy for a parallel plate capacitor:

C = \frac{\varepsilon_0 \varepsilon_r l w}{d}    (9)

V = \frac{Q}{C}    (10)

E = \frac{Q^2}{2C} = \frac{C V^2}{2}    (11)
where Q is the charge on the plates, l is the length of the plate, w is the width of the plate, d is the separation between the plates, ε0 is the permittivity of free space and εr is the relative permittivity of the material between the plates. It can be seen from the above equations that if the charge on the capacitor is fixed and the overlap area of the plates is reduced, or the plates are pulled apart, the capacitance decreases according to equation (9), the voltage increases (10) and hence the stored energy increases (11). Similarly, if the voltage is fixed and the
overlap area of the plates is reduced, or the plates are pulled apart, then the charge will increase and again the stored energy increases. The energy can be extracted from this variable capacitor system by using the scheme illustrated in Fig. 10.7. In this scheme, the capacitor plates can move apart under the force of an external vibration, and initially both switches S1 and S2 are open. At the position of maximum capacitance, Cmax, that is, when the plates are closest together, the capacitor is charged to Vmin from an external voltage source by closing the switch S1. When the plates move apart, S1 is opened and, since both switches are open, the charge on the plates remains constant and the energy input to move the plates apart is stored in the electric field of the capacitor. At the position of lowest capacitance, Cmin, that is, when the plates are furthest apart, S2 is closed and the energy stored in the capacitor is extracted at the higher voltage, Vmax, and stored or dissipated in the load. S2 is then opened and the plates return to the maximum capacitance position for the start of another cycle. The ideal expression for the energy harvested is given by:

E = \frac{1}{2} V_{max} V_{min} (C_{max} - C_{min})    (12)
Because micro-machined variable capacitors have been available for some time, electrostatic vibration harvesters are generally regarded as been the most compatible with micro-fabrication. A good review of some of the approaches to the microfabrication of different types of electrostatic converter was given by Roundy et al [29]. That work discussed three basic topologies for micro-machined variable capacitor electrostatic generators, the in-plane overlap, the in-plane gap-closing and the out-of-plane gap-closing. The first two make use of interdigitated capacitors formed in a single plane where the capacitance is varied either by changing the overlapping area of the fingers or by changing the distance between the fingers. The out-of-plane gap-closing more closely resembles the parallel plate capacitor, with one plate anchored by springs so that it can move out of plane. The implementation of this type of in-plane or out-of-plane capacitor in MEMS processes is relatively well understood. Despite its attraction as being one of the most MEMS compatible vibration generators there have been few reports of fully micro-fabricated devices successfully tested. It would appear that in practice micro-fabricated devices have been beset by a range of problems including large parasitic capacitances, significant leakage currents and problems of stiction. S1
Fig. 10.7 Schematic of electrostatic energy harvester (pre-charging voltage source, switches S1 and S2, moveable capacitor plates, and storage or load)
T. O’Donnell, W. Wang
Most researchers have demonstrated working generators with larger-scale, conventionally machined devices. Depresse [30] achieved 1.05 mW from an electrostatic generator of the in-plane gap-closing type, which was machined from bulk tungsten and vibrated at 50 Hz with an 8.8 m/s2 acceleration. The volume of this device was approximately 1800 mm3, and it achieved a normalized power density of approximately 0.58 µW/mm3 at 1 m/s2. One disadvantage of the electrostatic generator is that it requires a separate voltage source for the initial charging of the capacitor. An alternative is to use an electret material to supply the charge. An electret is a dielectric material that can semi-permanently retain a charge and hence maintain an external electric field. Arakawa et al. [31] have reported results for a relatively large (20 mm × 20 mm × 2 mm) micro-machined device using a fluorocarbon polymer material as the electret. Their device consisted of two chips, one with an interdigitated capacitor electrode covered by the electret and the second with the other electrode. Their device achieved 6 µW of output power for a 3.9 m/s2 input at 10 Hz, when the two chips were moved relative to one another using fixed stages.
4.4 Comparison of Vibration Harvesting Principles
It is of interest to ask whether any one of the three principles used for vibration energy harvesting has advantages over the others. From the review of work done to date, it is clear that no principle is better than the others in all situations; however, it can be deduced that some techniques are more suitable than others in specific applications. If generator volume is not a constraint and the device can be large in size - and implemented using conventional manufacturing techniques - then the electromagnetic generator is probably the best option. Electromagnetic generators have demonstrated the highest power densities at larger sizes. Piezoelectric-based devices are probably somewhat simpler to construct, but tend to have somewhat lower power densities. If the volume of the device is a constraint and micro-fabrication techniques are required to take advantage of cost reduction through batch processing, then piezoelectric and electrostatic devices are the better options. At the micro-scale, electromagnetic devices suffer from poor performance due to large coil resistances and the poor properties of micro-fabricated magnets. Although electrostatic devices have generally been proposed as being the most micro-fabrication compatible, in practice their implementation using MEMS techniques has been problematic; in fact, better results have been demonstrated for micro-fabricated piezoelectric devices. However, it should be remembered that for vibration harvesting the power generated is directly proportional to the mass, and hence the volume, so that only small amounts of power can be expected from small devices. Although many vibration energy harvesting devices have been developed and power generation has been demonstrated in the laboratory, a significant amount of work remains to be done to prove that energy harvesting devices can be widely used as a solution for powering wireless sensor nodes.
10 Power Management, Energy Conversion and Energy Scavenging
The majority of the devices demonstrated to date are resonant devices, that is, they are designed to have a resonant frequency that matches the frequency of the source vibrations exactly. If the frequency of the source vibrations varies, then the power output of such devices decreases dramatically. The development of devices with a more broadband response is therefore an area requiring further investigation. Devices that could actively track the source vibration and adjust their resonant frequencies accordingly are an interesting possibility, as it is highly likely that both the source vibration frequency and the resonant frequency of the generator will drift over time. Since energy harvesting devices, such as those discussed in this section, are being proposed as a replacement for batteries in order to achieve longer lifetimes, the long-term reliability of such devices needs to be assessed. This may prove to be a critical distinguishing feature, which will determine the feasibility of different principles or device designs. In the final analysis, the overall feasibility of vibration energy harvesting will only be proven by implementation and long-term testing in field trials.
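The sensitivity to detuning can be seen from the standard second-order model of a resonant inertial generator due to Williams and Yates [10]. The sketch below evaluates the average power for base excitation of displacement amplitude Y0 at angular frequency w; the mass, damping ratio and excitation values are illustrative assumptions, not data from any device in this chapter.

```python
import math

# Williams-Yates resonant generator model:
# P = m * zeta_t * Y0^2 * (w/wn)^3 * w^3 / [(1 - (w/wn)^2)^2 + (2*zeta_t*w/wn)^2]
# m: seismic mass (kg), zeta_t: total damping ratio, y0: base displacement (m),
# w: excitation angular frequency, wn: natural angular frequency.
def resonant_power(m, zeta_t, y0, w, wn):
    r = w / wn
    return m * zeta_t * y0**2 * r**3 * w**3 / ((1 - r**2)**2 + (2 * zeta_t * r)**2)

m, zeta_t, y0 = 1e-3, 0.05, 25e-6      # 1 g mass, 5% damping, 25 µm base motion
wn = 2 * math.pi * 100.0                # 100 Hz resonance
p_res = resonant_power(m, zeta_t, y0, wn, wn)
p_off = resonant_power(m, zeta_t, y0, 0.9 * wn, wn)  # 10% detuned
```

With these numbers a 10% frequency mismatch cuts the output power by nearly an order of magnitude, which is why broadband or frequency-tuning designs are of interest.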
5 Energy from Heat
Generation of electrical energy from thermal gradients is achieved through the exploitation of the Seebeck effect, first described by Thomas Johann Seebeck in 1821, who discovered that a compass needle would deflect when placed in close proximity to two conductors of different metals joined together at the ends, when a temperature difference existed between the ends. The deflection of the compass needle was due to the current flowing in the conductor, which was driven by the temperature-induced voltage across the conductors. Thus, for conductors with a temperature difference, ∆T, between their ends, the induced voltage across the conductors, V, can be approximated by:

V = (SB − SA) ∆T   (13)
where SA and SB are the Seebeck coefficients of the metals used and are typically of the order of tens or hundreds of microvolts per kelvin. The Seebeck effect forms the basis for the operation of thermocouples, which are widely used to measure temperature. A series connection of multiple thermocouples is known as a thermopile. Thermopiles are also widely used as the core of most thermoelectric cooling devices. The principle of these thermoelectric cooling devices derives from the Peltier effect, which was discovered by the French physicist Jean Charles Athanase Peltier in 1834. Peltier described the effect as a temperature difference at the junctions, which can be detected when electrical current flows between different materials. Since the Peltier effect and the Seebeck effect are essentially the inverse of one another and together form the basis of thermoelectrics, the thermoelectric effect is also called the Peltier-Seebeck effect.
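Equation (13) extends directly to a thermopile of N series couples: V = N (SB − SA) ∆T. A minimal sketch, using the ~0.2 mV/K bismuth telluride figure quoted later in this section; the couple count and temperature difference are illustrative assumptions:

```python
# Open-circuit voltage of a thermopile: n series couples, each producing
# (S_B - S_A) * dT. The 0.2 mV/K Seebeck coefficient is the bismuth
# telluride figure from the text; couple count and dT are assumptions.
def thermopile_voltage(n_couples, seebeck_v_per_k, delta_t_k):
    return n_couples * seebeck_v_per_k * delta_t_k

v_oc = thermopile_voltage(100, 0.2e-3, 5.0)  # 100 couples, 5 K gradient -> 0.1 V
```

The small per-couple voltage makes it clear why hundreds or thousands of series couples are needed to reach electronics-friendly voltage levels.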
Indeed, in early experiments, large thermoelectric generators were widely used as a steady voltage source before the homo-polar generator (Faraday disc) became practical in the 1880s. Fig. 10.8 shows the principle of the thermoelectric generator. When a temperature difference is applied across a thermocouple made of thermoelectric materials, as shown in Fig. 10.8 (top), holes in the p-type element and electrons in the n-type element flow to the cool side of the thermocouple, in accordance with the Seebeck-Peltier effects. These two types of carriers generate a current flow and hence a potential difference between the hot side and the cool side of the thermocouple. The thermoelectric generator (Peltier cooler) with the structure shown in Fig. 10.8 (bottom) uses this principle to generate a steady voltage difference and current flow across the series connection of multiple thermocouples in the thermopile. One of the most common materials used for thermoelectric generators is bismuth telluride, which has a Seebeck coefficient of approximately 0.2 mV/K. With such a coefficient, it is clear that many thermocouples must be connected in series in order to produce a significant voltage level. Because of the low efficiency of the thermoelectric generator, only a limited number of such generators are commercially available, for niche markets. Thermoelectric cooling devices are more common. However, most thermoelectric cooling devices can be utilized as
Fig. 10.8 The principle of thermocouple devices
thermoelectric generator devices with minor modifications. Current Peltier coolers normally use conventionally machined thermocouples; the typical density of the thermocouples is approximately 50 pairs/cm2. Fig. 10.9 shows a typical Peltier cooler, which can be used as a generator by interchanging the hot and cold sides and by using a heat sink to enhance cooling on the cold side. In the last several decades, thermoelectric generators have been used for electricity generation mainly in space and marine applications; for example, radioisotope thermoelectric generators were used in satellites and spaceships. Recently, small-scale and micro-scale thermoelectric generators have come under intense research due to their potential to power body-worn electronic devices. However, the efficiency of thermoelectric generators is quite low; it is limited in the first instance by the Carnot efficiency, then by the material properties and finally, and especially importantly for micro-fabricated devices, by the design. The use of micro-fabrication techniques can significantly increase the number of thermocouples, thus improving the performance of the thermoelectric generator. However, technical difficulties and cost-efficiency issues have become the main obstacles to the spread of such technologies. For the fabrication of devices using micro-fabrication techniques, the challenges include: the miniaturisation of the thermocouple so that thousands of thermocouples can be accommodated in a small area; the design of the device so that the thermal resistance of the thermocouple can be maximised relative to the thermal resistances of the hot and cold ends; the electrical connectivity between the miniaturised thermocouples; and the thermal connectivity between the hot/cool side and the thermocouples. At the micro-scale, the primary interest has been in the use of thermoelectric generators for the generation of power from body heat, as a means to power wearable devices.
For example, Seiko produced a wrist watch powered by body heat [32], which required 1–2 µW. Researchers at IMEC, Leuven, Belgium, presented
Fig. 10.9 Typical Peltier cooler (Left) and Peltier cooler with heat sink (Right)
a thermoelectric generator that can harvest human body heat to operate low-power wireless sensor nodes. This body-heat thermoelectric generator uses a multistage design, which contains 4 stages of thermopile, with 158 thermocouples on each stage. The size of the thermopile component is 6.7 mm × 8.4 mm × 1.8 mm × 6 thermopiles/stage × 4 stages, connected to a 38 mm × 34 mm × 5 mm heat sink. With a matched resistive load, the output in a 25°C indoor environment is 0.93 V and 250 µW. The power density of this device is 0.2 µW/mm2. With a 1350 mm2 thermopile area, the device is able to supply sufficient energy above the 50–100 µW minimum power requirements of low-power, duty-cycled sensor nodes [33]. Other reported power densities achieved from micro-fabricated devices are 0.14 µW/mm2 for a 700 mm2 device [34], 0.37 µW/mm2 for a 68 mm2 device [35] and 0.60 µW/mm2 for a 1.12 mm2 device [36]. These results relate to a temperature difference of approximately 5 K, which is typically achievable for wearable applications. Higher temperature differences may be achievable in other environments, for example, heaters in a building; in that case the power density can in principle be scaled by the square of the temperature difference. Some recent research shows interest in nano-wire and quantum-dot super-lattice technologies for manufacturing the thermoelectric material [37], [38]. These technologies can provide further improvement in the density of the thermocouples; some research claims the density could be over 100,000 thermocouples per cm2 [39].
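The quadratic scaling mentioned above (power ∝ ∆T², since both the thermopile voltage and the resulting current scale linearly with ∆T) can be sketched as follows. The 0.60 µW/mm² baseline is the figure quoted from [36]; the target gradient is an assumption:

```python
# Scale a thermoelectric power density quoted at a 5 K gradient to another
# temperature difference, assuming P proportional to dT^2 (open-circuit
# voltage and short-circuit current each scale linearly with dT).
def scale_power_density(density_at_5k, delta_t_k):
    return density_at_5k * (delta_t_k / 5.0) ** 2

d = scale_power_density(0.60, 10.0)  # 0.6 µW/mm2 at 5 K -> 2.4 µW/mm2 at 10 K
```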
6 Power Conversion for Energy Harvesting
All energy harvesting devices require some form of energy conversion circuit to step up, step down or regulate the voltage to a level which is usable by the system, generally a DC voltage somewhere between 1 V and 3 V. In the case of solar and thermoelectric generators, the voltage generated is DC. However, vibration generators produce an AC voltage, so some form of AC-DC conversion is required in order to provide a voltage at a usable DC level. An analysis of the devices fabricated to date shows that electromagnetic devices typically generate voltage levels ranging from tens of millivolts for partially micro-fabricated devices to several volts for macro-scale devices. Piezoelectric devices generate voltages from tens of millivolts for micro-fabricated devices up to 10 V for macro devices. Electrostatic devices typically generate tens to hundreds of volts. Therefore, in the case of relatively large electromagnetic or piezoelectric generators, which generate a sufficiently high voltage, a conventional rectifier followed by a regulator may be sufficient. However, for many smaller-scale and micro-fabricated devices the voltage levels generated are less than the forward voltage drop of a diode, so more complex techniques have to be considered for the rectification. A simple approach in such a case is to use a voltage multiplier circuit with some form of active switch, such as a diode-connected transistor [28] or an analogue switch [40]. A four-stage voltage
multiplier circuit implemented using discrete off-the-shelf analogue switches and comparators was shown to be capable of stepping up voltages as low as 150 mV, at power levels of approximately 10 µW, with efficiency in the range of 70–90%. To achieve better efficiency at lower power levels, a dedicated ASIC is required. For piezoelectric devices, a technique known as synchronised switch harvesting (SSH) has been developed to step up the output voltage and maximize the power generation [41]. This technique causes the piezoelectric blocking capacitance to resonate with an inductor, which is switched in at the peak of the voltage. According to [42], this technique has the potential to increase the energy harvested from a piezoelectric device by nearly one order of magnitude. Electrostatic devices present particular challenges for power conversion. The typical power output from an electrostatic generator is in the form of pulses of high voltage but low current. Parasitic capacitances and leakage currents in the power devices used are a particular problem, and can significantly reduce the power harvested. The requirements for such devices were analyzed in [43]. For all energy harvesting devices, there exists an optimum load resistance for which the power delivered to the load is maximized. In the case of the thermoelectric generator, this load resistance is equal to the internal resistance of the generator. In the case of solar cells, where typically the output current remains constant over a range of voltages, there also exists an optimum load resistance for maximum power. For vibration harvesting devices, the choice of optimum load resistance depends on the generator principle used. For electromagnetic and piezoelectric generators, the choice of load resistance affects the electrical damping and, from general principles, the optimum load resistance is the one that matches the electrical damping to the parasitic damping.
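Both the matched-load optimum for a source with internal resistance and the impedance transformation introduced by an ideal step-up converter can be checked with a short sketch; all component values below are illustrative assumptions:

```python
# Power delivered by a source (open-circuit voltage v_oc, internal
# resistance r_int) into a resistive load; maximum when r_load == r_int.
def load_power(v_oc, r_int, r_load):
    i = v_oc / (r_int + r_load)
    return i * i * r_load

# An ideal step-up converter of ratio n reflects its load back to the
# generator as r_load / n^2, so the optimum system load is n^2 larger
# than the generator's own optimum.
def reflected_load(r_load, n):
    return r_load / n**2

p_matched = load_power(0.1, 50.0, 50.0)  # 50 µW from a 0.1 V, 50 ohm source
r_seen = reflected_load(10_000.0, 4)     # 10 kohm load appears as 625 ohm
```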
In the case of an electrostatic device, the electrical damping is a constant. As pointed out in [43], the function of a power conversion circuit should therefore not only be to provide a regulated load voltage but also to ensure that an optimum damping is maintained. Topologies for different types of generators are discussed in [43]. It is also the case that when a power conversion circuit is required between the generator and the load, the optimum load value can change significantly. For example, if a step-up converter is required between the load, RL, and the generator, then the converter will perform an impedance transformation, so that for a converter which steps up by a factor n, the load impedance presented to the generator is RL/n2. The result of this is that the optimum load resistance required for the generator-converter combination is n2 times the optimum load required by the generator alone [40]. In practical systems, however, the generator is not used to supply a fixed resistive load. As indicated in Fig. 10.2, the generator is required to charge an energy storage component such as a battery or a capacitor. In this case the generator is not presented with its optimum load value and, indeed, the load value will change significantly depending on the state of charge of the energy storage component. Roundy [12] analyzed the situation for a piezoelectric generator supplying a storage capacitor through a rectifier, which is more representative of generator operation in
an electronic system. His analysis showed that the maximum power transferred to the storage capacitor is a function of the voltage on the storage capacitor. For very low voltages on the storage capacitor, very little power is transferred. Power transfer was maximized for a storage capacitor voltage that was approximately half of the generator output voltage. It was therefore suggested that the system be designed so as to ensure that the storage capacitor voltage never drops below this value during operation. In any case, the correct choice of energy storage component is critical to ensure optimal circuit operation from the point of view of energy. The options for energy storage components are a re-chargeable battery, a capacitor or some combination of both. Batteries have higher energy densities than capacitors, but have lifetimes limited by the number of charge cycles. If energy harvesting is being employed as a means to ensure “everlasting” power to the system, then the use of a capacitor as the energy storage element may be more appropriate, as the lifetime of capacitors is typically much greater in terms of charge cycles. The disadvantage of the capacitor is its limited energy storage, which means that the system must be able to tolerate downtime if the energy harvesting source disappears for an extended period of time, for example, when using solar power. Such systems should also be capable of self-starting from a zero energy stored

Table 10.2 Typical energy-harvesting-powered wireless sensor networks

[33] Sensor/function: voltage, light, temperature. Conversion efficiency: 75%. Conversion: active rectifier, step-up DC-DC. Storage: re-chargeable NiMH battery. Voltage: 0.93 V. Power: 100 µW @ 220. Size: 9000 mm3. Energy source: heat (human body).

[12] Sensor/function: vibration, light, temperature, etc. Conversion efficiency: 75–80%. Conversion: rectifier, DC-DC converter. Storage: supercapacitor, capacitor. Voltage: 5 V. Power: 1500 µW. Size: 6000 mm3. Energy source: vibration (vehicle).

[7] Sensor/function: wireless router node. Conversion efficiency: 82%. Conversion: supercapacitors charge/discharge. Storage: supercapacitors, battery back-up. Voltage: 3.24 V. Power: 45.3 mW. Size: 48260 mm2. Energy source: light (office lights).

[44] Sensor/function: temperature. Conversion efficiency: 53%. Conversion: voltage tripler. Storage: capacitor. Voltage: 2.3 V. Power: 640 µW. Size: AA battery. Energy source: vibration.

[45] Sensor/function: temperature, light, air/soil humidity, water quality monitoring, etc. Conversion efficiency: N/A. Conversion: N/A. Storage: re-chargeable NiMH battery. Voltage: 5 V. Power: N/A. Size: less than 1000 mm2. Energy source: light (outdoor).

[46] Sensor/function: monitoring strains on helicopter's main & tail rotor. Conversion efficiency: N/A. Conversion: N/A. Storage: supercapacitor. Voltage: N/A. Power: 250 µW. Size: N/A. Energy source: rotation (helicopter).
situation. This has implications for the design of power conversion circuitry, which requires active components, including switches, comparators, etc. Hence, the combination of a capacitor as the main energy storage for the energy harvesting device with a backup battery may be an option for some systems. If a capacitor is used for the energy storage, then its value should be chosen considering several factors. The energy stored should be sufficient to supply the peak power demands of the system without significant voltage drop. The storage should be small enough to be charged in a reasonable time by the energy harvesting device, particularly when starting from a very low or zero energy stored situation. The volume required by the energy storage should not exceed the volume constraints of the system. Lastly, it is worth keeping in mind that the internal resistance of the energy storage component should be small enough that significant voltage drop is not encountered during the peak current demand of the system. Table 10.2 above summarizes the characteristics of a selection of wireless sensor systems which have been reported to be powered entirely by energy harvesting techniques. The details in the table highlight the fact that different approaches to storage and power conversion are required for different applications and energy sources.
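A first-pass check of the factors above (droop under peak load, recharge time from the harvester) might look like the following sketch; every number in it is an illustrative assumption, not a recommendation:

```python
# First-pass storage capacitor checks for an energy-harvesting node:
# (i)  voltage droop while supplying a transmit burst, dV = I * t / C
# (ii) time for the harvester to fill the capacitor from empty,
#      t = (1/2) C V^2 / P_harvest (ideal, lossless charging).
def droop_v(c_f, i_peak_a, t_burst_s):
    return i_peak_a * t_burst_s / c_f

def recharge_time_s(c_f, v_target, p_harvest_w):
    return 0.5 * c_f * v_target**2 / p_harvest_w

dv = droop_v(100e-6, 20e-3, 5e-3)         # 1.0 V droop: 100 µF is marginal here
t = recharge_time_s(100e-6, 3.0, 100e-6)  # 4.5 s to reach 3 V from a 100 µW source
```

Iterating over candidate capacitor values against both constraints gives a quick feel for the trade-off between droop and start-up time before any detailed circuit design.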
7 Conclusions and Outlook
Energy harvesting techniques have the potential to provide a long-life energy supply for wireless sensor nodes. This perspective is supported by the research work done to date, which has been reviewed in this chapter and has demonstrated that, in principle, the energy available from environmental sources such as light, vibrations and heat can meet the energy requirements of typical low-power sensor nodes. Guideline figures for the power densities of each technique are summarized in Table 10.3 below. A network of wireless sensor nodes would probably have to implement energy harvesting using a variety of techniques, depending on the energy sources available. Combined with knowledge of the available volume of a device, the figures in the table can serve as a guideline for making a rough estimation of the feasibility of energy harvesting to meet the power requirements. These figures can undoubtedly be improved upon, with further developments in devices and technology.

Table 10.3 Power density of energy harvesting techniques

Technique     Power density    Conditions
Light         75 µW/mm2        Outdoors
Light         1 µW/mm2         Indoors
Vibrations    800 µW/mm3       1 m/s2 acceleration
Heat          0.6 µW/mm2       ∆T of 5 degrees
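As a usage example of the guideline figures in Table 10.3, a rough feasibility estimate simply multiplies a density by the available collector area or volume and compares the result with the node's average power budget. The collector sizes and the node budget below are assumptions for illustration:

```python
# Rough feasibility estimate from the Table 10.3 guideline densities
# (values in µW per mm^2 or mm^3, as indicated by the key names).
DENSITY_UW = {
    "light_outdoors_mm2": 75.0,
    "light_indoors_mm2": 1.0,
    "vibration_mm3": 800.0,   # at 1 m/s^2 acceleration
    "heat_mm2": 0.6,          # at a 5 K temperature difference
}

def harvestable_uw(source, size):
    return DENSITY_UW[source] * size

budget_uw = 100.0   # assumed average demand of a duty-cycled node
ok = harvestable_uw("light_indoors_mm2", 400.0) >= budget_uw  # 400 mm2 cell
```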
Further advances are also possible in reducing the power consumption of sensor node electronics. Advances in micro-processors have resulted in a range of low-power micro-processors becoming available, but the reduction of sensor power consumption is critically important. It is probably also true to say that more work is required on power conversion and energy management circuitry, particularly in the optimisation of power components to achieve high efficiency at low power levels. For wireless sensor nodes in general, the development of techniques that allow a level of energy awareness to be implemented is an interesting area of research. In a network composed of many such sensors, harvesting energy from different sources, this would allow each node to be aware of its energy state and to modify its operation accordingly.
References

1. http://www.seikowatches.com/technology/kinetic/index.html
2. Seiko Instruments Inc., http://www.sii.co.jp/info/eg/thermic_main.html
3. J. Paradiso, T. Starner, "Energy Scavenging for Mobile and Wireless Electronics", IEEE Pervasive Computing, vol. 4, no. 1, Jan.–March 2005.
4. T. Starner, J. Paradiso, "Human Generated Power for Mobile Electronics", in Low Power Electronics Design, C. Piguet, ed., CRC Press, 2004, Chapter 45.
5. Goetzberger et al., "Photovoltaic materials, history, status and outlook", Materials Science and Engineering Review, no. 40, 2003.
6. J.F. Randall, Designing Indoor Solar Products: Photovoltaic Technologies for AES, John Wiley & Sons, Chapter 5, 2005.
7. A. Hande et al., "Indoor solar energy harvesting for sensor network router nodes", Microprocessors & Microsystems, vol. 31, issue 6, pp. 420–432, Sep. 2007.
8. Green, Emery, King, Hishikawa, Warta, "Solar Cell Efficiency Tables (version 28)", Progress in Photovoltaics: Research and Applications, no. 14, 2006.
9. S.P. Beeby, M.J. Tudor, N.M. White, "Energy Harvesting Vibration Sources for Microsystems Applications", Meas. Sci. Technol., 17, R175–R195, 2006.
10. C.B. Williams, R.B. Yates, "Analysis of a micro-electric generator for microsystems", Sensors and Actuators A, vol. 52, no. 1, pp. 8–11, 1996.
11. P.D. Mitcheson, T.C. Green, E.M. Yeatman, A.S. Holmes, "Architectures for vibration-driven micropower generators", J. Microelectromech. Syst., 13, pp. 429–440, 2004.
12. S. Roundy, "Energy Scavenging for Wireless Sensor Nodes with a Focus on Vibration to Electricity Conversion", PhD Thesis, University of California, Berkeley, 2003.
13. M.V. Andres, K.W.H. Foulds, M.J. Tudor, "Nonlinear Vibrations and Hysteresis of Micromachined Silicon Resonators designed as Frequency-out Sensors", Electronics Letters, vol. 23, no. 16, pp. 953–954, 1987.
14. D.P. Arnold, "Review of Microscale Magnetic Power Generation", IEEE Trans. on Magn., vol. 43, no. 11, pp. 3940–3951, 2007.
15. J. Okazaki, Y. Osaki, H. Hosaka, K. Sasaki, H. Yamakawa, K. Itao, "Dynamic analysis and impedance control of automatic power generators using mechanical vibration", in Tech. Dig. 2002 Int. Workshop Power MEMS (Power MEMS 2002), Tsukuba, Japan, Nov. 2002, pp. 110–113.
16. K. Sasaki, Y. Osaki, J. Okazaki, H. Hosaka, K. Itao, "Vibration-based automatic power-generation system", J. Microsyst. Tech., vol. 11, no. 8–10, Aug. 2005.
17. C. Shearwood, R. Yates, "Development of an Electromagnetic Micro-generator", Electron. Lett., 33, pp. 1883–1884, 1997.
18. S.P. Beeby, R.N. Torah, M.J. Tudor, P. Glynne-Jones, T. O'Donnell, C.R. Saha, S. Roy, "A micro electromagnetic generator for vibration energy harvesting", J. Micromech. Microeng., 17, no. 7, pp. 1257–1265, July 2007.
19. C. Serre, A. Perez-Rodriquez, N. Fondevilla, J.R. Morante, J. Montserrat, J. Esteve, "Vibrational energy scavenging with Si technology electromagnetic inertial microgenerators", Microsystem Technologies, vol. 13, nos. 11–12, pp. 1655–1661, 2007.
20. M. Mizuno, D. Chetwynd, "Investigation of a resonance microgenerator", J. Micromech. Microeng., 13, pp. 209–216, 2003.
21. H. Kulah, K. Najafi, "An electromagnetic micro power generator for low-frequency environmental vibrations", 17th IEEE Conf. on Micro Electro Mechanical Systems (MEMS 2004), Maastricht, pp. 237–240, 2004.
22. C. Serre, A. Pérez-Rodríguez, N. Fondevilla, E. Martincic, S. Martínez, J. Ramon Morante, J. Montserrat, J. Esteve, "Design and implementation of mechanical resonators for optimized inertial electromagnetic microgenerators", Microsystem Technologies, vol. 14, nos. 4–5, April 2008.
23. T. O'Donnell, C. Saha, S. Beeby, J. Tudor, "Scaling issues for electromagnetic vibrational energy harvesting", Microsystem Technologies, Jan. 2007.
24. W.B. Ng, A. Takado, K. Okada, "Electrodeposited CoNiReWP thick array of high vertical magnetic anisotropy", IEEE Trans. on Magn., 41, 10, pp. 3886–3888, 2005.
25. IEEE Standard on Piezoelectricity, ANSI/IEEE Standard No. 176-1987.
26. S. Roundy, P.K. Wright, "A piezoelectric vibration based generator for wireless electronics", Smart Mater. Struct., 13, pp. 1131–1142, 2004.
27. Y.B. Jeon, R. Sood, J.-H. Jeong, S.-G. Kim, "MEMS power generator with transverse mode thin film PZT", Sensors and Actuators A, 122, pp. 16–22, 2005.
28. M. Marzencki, Y. Ammar, S. Basrour, "Design, Fabrication and Characterisation of a Piezoelectric Microgenerator including a Power Management Circuit", Proc. Symp. on Design, Test, Integration and Packaging of MEMS/MOEMS (DTIP 2007), Stresa, Italy, 25–27 April 2007.
29. S. Roundy, P.K. Wright, J. Rabaey, "A study of low level vibrations as a power source for wireless sensor nodes", Computer Communications, 26, pp. 1131–1144, 2003.
30. G. Depresse, T. Jager, J. Chaillout, J. Leger, A. Vassilev, S. Basrour, "Fabrication and Characterisation of high damping electrostatic micro devices for vibration energy scavenging", Proc. Symp. on Design, Test, Integration and Packaging of MEMS/MOEMS (DTIP 2005), Montreux, Switzerland, 1–3 June 2005, pp. 386–390.
31. Y. Arakawa, Y. Suzuki, N. Kasagi, "Micro seismic power generator using electret polymer film", Proc. Power MEMS 2004, Kyoto, Japan, 2004, pp. 18–21.
32. Seiko Instruments Inc., http://www.sii.co.jp/info/eg/thermic_main.html
33. V. Leonov, T. Torfs, P. Fiorini, C. Van Hoof, "Thermoelectric Converters of Human Warmth for Self-Powered Wireless Sensor Nodes", IEEE Sensors Journal, vol. 7, no. 5, pp. 650–657, 2007.
34. V. Leonov, P. Fiorini, S. Sedky, T. Torfs, C. Van Hoof, "Thermoelectric MEMS generators as a power supply for a body area network", Transducers'05, Seoul, Korea, June 5–9, 2005.
35. Thermo Life Energy Corp., http://www.poweredbythermolife.com/
36. H. Bottner et al., "New Thermoelectric Components Using Microsystem Technologies", J. Microelectromechanical Systems, vol. 13, no. 3, pp. 414–420, June 2004.
37. T.C. Harman et al., "Self-assembled quantum dot super-lattice thermoelectric materials and devices", United States Patent 7179986.
38. J.R. Lim, J.F. Whitacre, J.-P. Fleurial, C.-K. Huang, M.A. Ryan, N.V. Myung, "Fabrication Method of Thermoelectric Nanodevices", Advanced Materials, 17, 12, pp. 1492–1496, 2005.
39. W. Wang, F. Jia, Q. Huang, J. Zhang, "A new type of low power thermoelectric micro-generator fabricated by nanowire array thermoelectric material", Microelectron. Eng., vol. 77, no. 3/4, pp. 223–229, Apr. 2005.
40. C. Saha, T. O'Donnell, J. Godsell, L. Carlioz, N. Wang, P. McClosky, S. Beeby, J. Tudor, "Step-up Converter for Electromagnetic Vibrational Energy Scavenger", Proc. Symp. on Design, Test, Integration and Packaging of MEMS/MOEMS (DTIP 2007), Stresa, Italy, 25–27 April 2007.
41. D. Guyomar, A. Badel, E. Lefeuvre, C. Richard, "Toward energy harvesting using active materials and conversion improvement by nonlinear processing", IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 52, pp. 584–595, 2005.
42. D. Guyomar, Y. Jayet, L. Petit, E. Lefeuvre, T. Monnier, C. Richard, M. Lallart, "Synchronised switch harvesting applied to self powered smart systems: Piezoactive microgenerators for autonomous wireless transmitters", Sensors and Actuators A, 138, pp. 151–160, 2007.
43. P.D. Mitcheson, T.C. Green, E.M. Yeatman, "Power processing circuits for electromagnetic, electrostatic and piezoelectric inertial energy scavengers", Microsyst. Technol., 13, pp. 1629–1635, 2007.
44. S.C. Yuen, J.M. Lee, W.J. Li, P.H. Leong, "An AA-Sized Vibration-Based Microgenerator for Wireless Sensors", IEEE Pervasive Computing, vol. 6, no. 1, pp. 64–72, 2007.
45. Crossbow Technology, Inc., http://www.xbow.com/Eko/index.aspx
46. Microstrain, Inc., http://www.microstrain.com/wireless_strain.aspx
Chapter 11
Challenges for Hardware Reliability in Networked Embedded Systems

John Barrett
Abstract Networked embedded systems (NES) researchers tend to have a general vision of these systems as being low cost, autonomous, pervasive, invisible and reliable. A proactive, integrated approach to reliability analysis and planning, with a vision towards future applications, is therefore essential if NES are to be accepted in the marketplace. As overall NES reliability is a function distributed across all layers of the protocol stack, each layer must look not only at its individual responsibilities for reliability but also at how design decisions at any individual layer interact with other layers to affect overall reliability and, in a co-design approach, at what information must be provided to, and requested from, other layers of the stack to allow optimisation of overall reliability. This is not a trivial challenge and, yet, it is one which receives comparatively little attention in NES research programmes. In this chapter, the issue of NES reliability is examined primarily in the context of the system's hardware, principally the node, and the associated reliability research challenges are discussed. While there is much to be learnt from past research on reliability in distributed systems, mobile devices and wireless networks, NES present extra challenges which need to be considered more actively and proactively at this still relatively early stage in global NES research. Retrospective reliability improvement is always considerably more difficult, expensive and less effective than a proactive design-for-reliability approach.

Keywords Networked Embedded Systems, Reliability, Wireless Sensor Networks, Heterogeneous System-in-a-Package, Stresses, Strengths, Failure Mechanisms, Sensor Node Lifetime.
1
Introduction
Smart Systems Integration Group, Centre for Adaptive Wireless Systems, Department of Electronic Engineering, Cork Institute of Technology
K. Delaney, Ambient Intelligence with Microsystems, © Springer 2008
If, as Networked Embedded Systems (NES) researchers frequently claim, NES will become a dominant and pervasive technology in all aspects of our lives, from the mundane to life-critical applications, and one of the benefits of these networked systems will be to actually enhance the reliability of other technologies and systems, then these same researchers have a responsibility to ensure that NES will deliver an acceptable level of reliability to their users, regardless of whether these users are consciously aware of the existence and influence of NES in their lives. Yet, a 2006 meta-study [1] which analysed 114 papers on wireless sensor networks (WSN) showed that only 5% addressed the topic of reliability and none addressed the topic of fault forecasting, one of the most fundamental elements of traditional reliability analysis. Why is this? Perhaps, since NES and WSN are relatively youthful topics with few long-term field deployments, the effects of a lack of reliability have not had time to have a worry-inducing impact; perhaps, since reliability studies take a long time, there is a plethora of papers soon to be published; perhaps the multi-layered nature of NES reliability, with the complex interactions between all the layers of the stack1, leads to a lack of clarity on who exactly is responsible for reliability in NES design, with the result that nobody volunteers for this (unenviable?) job. Maybe the answer lies more in what a NES is – a distributed intelligence, like the human brain, made up of nodes that are, in themselves, of relatively low intelligence, where the pressures are to reduce individual node energy consumption, size and cost so that the overall networked system can be deployed economically. This brings about the situation highlighted by Hill et al. in their paper “The Platforms Enabling Wireless Sensor Networks” [2]: “All emphasize low-cost components operating on shoestring power budgets for years at a time in potentially hostile environments without hope of human intervention”.
These pressures leave little room for the traditional approaches to enhancing reliability, such as “bullet-proofing” of hardware, the use of top-quality “hi-rel” components, or extra hardware overhead for redundancy or monitoring of node reliability. At the individual node level, we can add the complication of the geographically distributed nature of NES, which may mean that nodes will experience widely varying ambient environments depending on their geographic location2. A further complication is the operationally distributed nature of NES, which may mean that individual nodes undergo varying wake-sleep duty cycles and RF transmission power levels depending on their functionality in the system and the level of “activity” at varying points in the NES that, in turn, depends on how it is managed at the higher levels of the stack. For geographically dispersed wireless NES where adaptive RF transmission power may be implemented to minimise overall node power consumption, e.g. [3], RF transmission power will vary depending not only on weather conditions but also on the physical geography of the NES deployment area.
1 The use of the term ‘stack’ in this chapter relates to the full structure of a heterogeneous ‘whole’ system, from sensor devices and components through hardware architectures and firmware to networking protocols, middleware and the application software.
2 “Geographic”, in this context, does not necessarily mean that nodes are dispersed over a very large geographic area, although this is within the scope of the description. It rather implies that the node dispersion and the nature of the space in which the nodes are dispersed are such that nodes, even with the same function, experience different “geography” in terms of climate and stresses. This could occur within a single room where, for instance, a node near a window experiences considerably greater solar exposure than other nodes, or a node near an air conditioner experiences greater vibration.
Random movements of objects can create new obstacles to propagation, as can seasonal variations in foliage or even something as simple as fouling of the node by bird droppings [4]. This variation in RF transmission power not only changes the principal internal thermal cycles of the nodes, and therefore one of the primary internal reliability stresses, but also the current-draw cycle of the node battery, making battery life prediction, analysis of battery reliability and planning of battery replacement considerably more difficult for the networked system as a whole. Exploring the nodes themselves, we see a “system-in-a-package” – a highly heterogeneous collection of electronic and mechanical components, typically encapsulated into a small volume, often in three dimensions, with various protrusions through the “casing” for sensors, antennas or power harvesters. Therefore, for the individual nodes, we have a highly complex, multi-component and multi-material hardware assembly subject to a difficult-to-predict external environment and a difficult-to-predict operating cycle (and this is ignoring the extra co-design layers of complexity that arise from software and middleware reliability, issues that are not directly within the scope of this chapter). Yet, somehow, we are supposed to predict and manage reliability for an entire network? Perhaps it’s not surprising that there are relatively few publications on NES hardware reliability… While the above may paint a bleak picture of the difficulties of analysing and predicting NES reliability, these systems do present some aspects that are helpful in a reliability context. Like the human brain, a large NES, with sufficient node capacity, can afford to lose nodes to hardware failure without the entire system failing, through the use of redundancy, re-routing and preventative maintenance managed at higher levels of the stack.
This is a major advantage over conventional monolithic, complex, large electronic systems, where sub-system redundancy may impose major overheads in terms of size and cost. However, the design of a NES with “sufficient node capacity” for redundancy, re-routing and preventative maintenance requires pre-deployment analyses of the probable node failure rates and the statistical distribution of these failure rates across the geography of the NES. This, in turn, requires a detailed hardware reliability analysis of that relatively unintelligent but physically complex entity, the node. This chapter, therefore, takes the node as its starting point and examines the difficulties in applying conventional reliability analysis and prediction techniques to this heterogeneous system-in-a-package (HSIP). It examines the various reliability aspects of the node, including hardware failures and power source reliability, as issues in themselves and in the context of how they impact upon the reliability of the network as a whole; it also analyses how conventional node reliability analyses must be enhanced if they are to provide reliability information that is of use to the NES management layers.
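The pre-deployment sizing argument above can be made concrete with a simple model. Under the strong and purely illustrative assumptions that node failures are independent and that each node survives the planning horizon with the same probability p, the chance that at least k of n deployed nodes remain alive is a binomial tail sum; the function names and example numbers below are the author's illustrations, not figures from any real deployment:

```python
from math import comb

def network_survival(n: int, k: int, p: float) -> float:
    """Probability that at least k of n nodes survive the planning
    horizon, assuming independent node survival probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def nodes_needed(k: int, p: float, target: float) -> int:
    """Smallest deployment size n such that the network-level
    survival probability meets the target."""
    n = k
    while network_survival(n, k, p) < target:
        n += 1
    return n

# Illustrative case: 50 nodes must remain alive for coverage, each
# node has a 90% chance of surviving a maintenance interval, and we
# want 99% confidence that the network keeps functioning.
print(nodes_needed(50, 0.90, 0.99))
```

Even this toy calculation shows why the node failure rate p must be known before deployment: it directly determines how much redundant node capacity has to be purchased and installed.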
2
Approaches to Node Hardware Reliability
Reliability is often defined as “the ability to perform a given function for a given time in a given use environment”. Even this straightforward concept is, however, difficult for the NES hardware designer who might innocently ask: “would somebody please
tell me how reliable does my node have to be?” For networked systems it is not even immediately obvious who provides the answer to this question. The NES owner will, presumably, have a target for data delivery reliability from the network, but how does this filter down to a figure-of-merit for the node hardware mean-time-before-failure? How is the answer modified by the use of redundancy or the acceptance of graceful degradation? How do we apportion shares of responsibility for reliability between network management, middleware, software and hardware? The answer would appear to lie somewhere in that very useful and elastic phrase “co-design for reliability”; concepts such as fuzzy reliability engineering may also have much to offer in the context of NES, as “fuzzy sets can capture subjective, uncertain and ambiguous information” [5]. However, exactly where and how this is to be done is not currently obvious. This is a NES topic requiring focused research, or we risk the situation highlighted by Birman in his book on Reliable Distributed Systems [6]: “Reliability is a bit like alchemy. The field swirls with competing schools of thought. Profound arguments erupt over obscure issues, and there is little consensus on how to proceed even to the extent that we know how to solve many of the hard problems. In fact, if there is agreement on anything today, it seems to be an agreement to postpone thinking seriously about reliability until tomorrow!”
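Although no agreed method yet exists for this filtering-down, a crude sketch of what one step of it might look like can be given, under the strong and purely illustrative assumption of exponentially distributed (constant-rate) node failures: if a fraction f of nodes is expected to be alive at mission time T, the required node MTTF follows directly. The function name is the author's invention:

```python
from math import log

def required_node_mttf(mission_time_h: float, surviving_fraction: float) -> float:
    """Under an exponential failure-time assumption, the expected
    fraction of nodes alive at time T is exp(-T / MTTF). Solving for
    MTTF gives the node-level target implied by a network-level goal
    of keeping at least the given fraction alive at mission end."""
    return -mission_time_h / log(surviving_fraction)

# Example: keep 80% of nodes alive over a 2-year (17520 h) mission.
print(required_node_mttf(17520.0, 0.80))
```

For this example the implied node MTTF is roughly nine years, which illustrates how a modest network-level target can translate into a demanding node-level hardware figure; real filtering-down would, of course, also have to account for redundancy, graceful degradation and non-constant failure rates.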
As this chapter’s emphasis is on hardware reliability, we will assume that, by some future method as yet undevised, the network specifications have been filtered down to produce a specific figure for node failure rates. What does the hardware reliability engineer do next? At least, at this stage, the engineer is on more familiar hardware reliability territory and can begin to apply some standard approaches to design for reliability (e.g. [7]). More often than not, these approaches tend to concentrate primarily on reliability’s alter ego, failure, with the focus on failure mechanisms and modes, times-to-failure, failure rates and related concepts. Regardless of whether the emphasis is on reliability or failure, the prediction of reliability/failure is an inherently probabilistic process, with phrases like “best estimate”, “confidence interval”, “variance”, “lifetime distribution” and similar being the stock in trade of the reliability engineer. This is because of the interaction of two factors that are fundamentally difficult both to control and to analyse: the “stresses” and the “strengths”. These are discussed in more detail in Sections 2.1 and 2.2 below. As this chapter is not intended to be a tutorial on general hardware reliability, for which many books are available, the emphasis will be on highlighting the extra challenges posed by NES hardware.
2.1
The stresses for NES nodes
The “stresses” are those factors that work to cause failure and, for nodes, can be divided into externally induced and internally induced stresses.
2.1.1
Externally induced stresses
These stresses arise from the ambient environment and the use of the devices. The ambient environment provides variations in climatic factors such as temperature,
range, rate and frequency of temperature change, humidity, and deteriorative elements such as sunshine, rain, salt and pollution. These may be natural climatic variations, if a node is outdoors, or they may be artificial; the node may be in an air-conditioned indoor environment or attached to other electronic equipment or to machinery that generates its own micro-climate. For geographically distributed NES, such micro-climates may also superimpose a regime of vibrations and mechanical shock on top of the climatic factors. Placement of nodes on roads (or similar) will superimpose a range of vibrations from passing vehicles as well as increasing the pollution content of the node’s ambient atmosphere. For a mobile node, these externally induced stresses will be “modulated” by the location of the node. The stresses will also vary between the life-phases of the node, from manufacturing to disposal; the node may spend considerable time in a storage environment before being actually used, or it may be redeployed during its service life. All of these stresses combine and interact to produce a complex, multi-parameter, time-varying external stress distribution on the node. This is, in itself, difficult enough to analyse or model, but is further complicated by unpredictable random step stresses, such as a node being accidentally dropped or undergoing other impacts. These may not be enough to cause instant failure but will weaken the node sufficiently to make it more vulnerable to its stresses, and therefore to an earlier failure than might otherwise have been the case. Electrical transients from power supply problems or electrical storms may have a similar hidden effect. For conventional hardware applications, we are mainly concerned with the temporal variations in externally induced stresses. However, for NES, we must add the complication of spatial variations in these stresses depending on the physical location of the node.
As it is unlikely, except in cases where the spatial variation in stresses is extreme, that a custom node will be designed for each location, we face the challenge of having to design a generic node that will have to endure temporally and spatially varying stresses. Therefore, even though individual nodes may be static, the approach to the reliability design of the generic node will have to borrow from approaches used for portable electronic products, such as those outlined by Perera [8].
2.1.2
Internally induced stresses
An inanimate object such as, say, a screwdriver does nothing to damage itself – all of its stresses come from the user and the use environment. However, a node, or any electronic device, generates its own internal stresses purely by functioning. Components heat up and cool down depending on their use cycles and, by doing so, expand and contract, generating thermo-mechanical stresses at thermally mismatched material interfaces and on solder joints. Power saving strategies in nodes may exacerbate this problem because there may be more frequent wake-sleep cycles than would be the case for, say, a PC which may be switched on and off no more than once per day. A node is also often woken up for its most power intensive operation of data transmission, which generates the greatest amount of heat of which the node is capable; this is particularly the case for wireless transmission.
So, even though we may not typically think of NES nodes as high powered, hot devices, they may undergo frequent thermal excursions which induce the cyclic thermo-mechanical stresses that can be an important contributor to electronic device failure. In addition to thermally induced stresses, there will also be the voltage and current stresses that are inherent to any electronic device and which contribute to dielectric breakdown, electromigration, galvanic corrosion and other electrically induced failures. Current best practice approaches to predicting internal stresses rely largely on CAD programmes which, given appropriate and correct input information, can yield very good results on temperature and stress distributions. However, NES nodes present a number of aspects that make providing appropriate inputs to these CAD programmes more challenging. This is discussed further in Section 3 below.
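The link between duty cycling and cyclic thermo-mechanical fatigue described above is often approximated with empirical damage models. As a hedged illustration only, the sketch below uses a simplified Coffin-Manson-style relation for cycles-to-failure as a function of the per-cycle temperature swing; the constants are arbitrary placeholders, not measured values for any real node or solder alloy, and in practice they must be fitted from accelerated testing:

```python
def coffin_manson_cycles(delta_t: float, c: float = 1.0e7, exponent: float = 2.0) -> float:
    """Simplified Coffin-Manson-style estimate of cycles-to-failure
    for a thermal swing delta_t (K). c and exponent are placeholder
    material constants to be fitted from accelerated test data."""
    return c * delta_t ** (-exponent)

def years_to_failure(delta_t: float, cycles_per_day: float) -> float:
    """Convert cycles-to-failure into calendar lifetime for a given
    wake-sleep (thermal) cycling frequency."""
    return coffin_manson_cycles(delta_t) / (cycles_per_day * 365.0)

# Under this model, a node waking 1000 times a day with the same 20 K
# internal swing has one tenth the thermal-fatigue life of a node
# waking 100 times a day.
print(years_to_failure(20.0, 100.0), years_to_failure(20.0, 1000.0))
```

The point of the sketch is qualitative: power-saving strategies that multiply the number of wake-sleep thermal excursions can directly shorten fatigue life, which is why the duty cycle chosen at higher layers of the stack is a hardware reliability parameter.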
2.2
The strengths of NES nodes
The strengths are the abilities of the electronic components and their interconnect, packaging and encapsulation to withstand the externally and internally induced stresses. To take one of the simplest examples, a dielectric has a given strength against voltage breakdown; we can therefore buy a capacitor rated at 25V and we can reasonably expect it not to suffer dielectric breakdown when first switched on at 3V. However, like almost everything that exists, the dielectric undergoes natural wear-out and strength deterioration from its own chemistry, from chemical interaction with surrounding materials and from chemical interaction with the ambient environment. This is compounded by thermo-mechanical stresses that work on the randomly distributed micro-defects and cracks in the dielectric and which are unavoidable due to an inherently imperfect manufacturing process using inherently imperfect materials; these may be further compounded by random high-stress events such as higher-than-usual voltage transients. All of these produce deterioration in strength with time, which, even for something as apparently simple as a dielectric, is difficult to analyse and model. This is one of the simplest examples of a failure mechanism among what may be hundreds of different failure mechanisms in the different components in the node; all of them are competing in a race to wear-out, becoming the mode that will actually manifest as device failure3. Predicting which one will be the weakest link is a task of enormous complexity for the highly physically heterogeneous entity that is a NES node. There are two fundamental approaches to analysing or predicting strength: measurement or simulation. Measurement is familiar to everybody working in hardware reliability: wire-bond or solder joint pull testing, cyclic fatigue testing, interface
3 A concise and useful guide to reliability terminology differentiating between failures, failure modes, failure mechanisms etc. and other sometimes confusing terminology can be found in [9].
adhesion testing, voltage stress testing and many more. These tests will often build in a wear-out or degradation mechanism, for example, to examine interfacial adhesion as a function of exposure to high humidity levels or vibrations. While once-off absolute tests, such as adhesion strength, yield accurate results, particularly if averaged over a sufficient number of samples, cyclic tests such as temperature cycling usually have to be performed in an accelerated fashion so that test times are feasibly short. This introduces the considerable challenge of extrapolating the results of these accelerated tests to real use conditions, which requires correlation between the test results and the actual deterioration of strength in real-life use in the field. Extensive information on deterioration in the field is, however, very difficult to obtain, particularly for a new technology such as NES. It should, in theory, be possible to predict something like dielectric breakdown or interfacial adhesion failure through material modelling, but it is very difficult to obtain sufficiently accurate materials models and models for variations in material properties. The hardware reliability engineer must still largely rely on testing to obtain measures of strengths. It is in the interaction of the stresses and strengths, however, that the real challenge arises. At a very abstracted level, it is actually quite simple: we have a distribution of stresses and a distribution of strengths, each with its mean and variation, and, as long as the strengths are always greater than the stresses, we will never have failure. The job of the reliability engineer in node design is therefore to ensure that the strengths will not deteriorate to less than the stresses over the desired lifetime of a node.
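The abstract stress-strength picture just described can be sketched numerically. Under the common textbook simplification that stress and strength are independent and normally distributed (an assumption for illustration, not a claim about any real node), the probability of failure is the probability that stress exceeds strength, which reduces to a single normal tail evaluation on the strength-minus-stress margin:

```python
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def interference_failure_prob(mu_stress: float, sd_stress: float,
                              mu_strength: float, sd_strength: float) -> float:
    """Stress-strength interference: P(stress > strength) for
    independent normal distributions. The margin M = strength - stress
    is normal with mean mu_strength - mu_stress and variance
    sd_stress**2 + sd_strength**2; failure is P(M < 0)."""
    mu_margin = mu_strength - mu_stress
    sd_margin = sqrt(sd_stress**2 + sd_strength**2)
    return normal_cdf(-mu_margin / sd_margin)

# Illustrative numbers: against a stress of mean 30 units (s.d. 5),
# failure probability rises sharply as the mean strength degrades
# with age from 60 to 40 units.
for mu_strength in (60.0, 50.0, 40.0):
    print(mu_strength, interference_failure_prob(30.0, 5.0, mu_strength, 5.0))
```

This makes concrete why knowledge of the *variation* in stresses and strengths, not just their means, dominates the prediction: widening either distribution erodes the safety margin even when the means are unchanged.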
This would be ideal except for the fact that the reliability engineer has what may only be a very imperfect knowledge of the absolute values of the stresses and the strengths, and an even more imperfect knowledge of the variations in the stresses and the strengths with time. Even if they were known, he has a limited ability to predict what will actually happen when the stresses and strengths collide. In an ideal world, we should be able to take a physics-of-failure approach [10] which, from a bottom-up materials science perspective and/or from empirically derived models, predicts failure based upon the properties of materials and interfaces combined with damage models. However, it is difficult, expensive and time-consuming to apply the physics-of-failure approach accurately to even relatively simple failure mechanisms and, such is the pace of progress in electronics, it is likely that technology will have moved on by the time a physics-of-failure model for any given structure has been formally proven. This is not to say that physics-of-failure has little to offer the reliability engineer. In combination with modern CAD tools for thermal and thermo-mechanical simulation, it is an extremely useful tool at the design stage, where it can be used to improve the reliability of a design. Properly applied, it is highly likely to yield a design that is more reliable than one where physics-of-failure was not included in the design process. While it may not necessarily lead to an absolutely accurate prediction of either the primary failure mode or the time-to-failure in a hardware design, when combined with more traditional approaches to reliability prediction it will improve the accuracy of prediction. Strictly standardised approaches to reliability prediction, such as MIL-217, Bellcore and similar, have largely fallen by the wayside in recent decades following
studies (for example [11] and [12]), which showed large discrepancies in predictions for the same type of hardware. Those traditional approaches to reliability - which embody reliability statistics, reliability distributions such as the Weibull and Lognormal, failure modes and effects analysis (FMEA), accelerated testing and physical acceleration models - still have a very strong role to play in modern design for reliability. Many books are available on these topics, for example [7], [12], [13] as well as the proceedings of the International Reliability Physics Symposium (IRPS), the Reliability and Maintainability Symposium (RAMS), the European Symposium on Reliability of Electron Devices, Failure Physics and Analysis (ESREF), the IEEE Transactions on Reliability and others.
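As a hedged illustration of the traditional toolkit just listed, the two-parameter Weibull distribution is the workhorse for modelling times-to-failure: the shape parameter β distinguishes infant mortality (β < 1), random failures (β = 1) and wear-out (β > 1), and the mean time to failure follows from the gamma function. The parameter values below are arbitrary placeholders, not fitted node data:

```python
from math import exp, gamma

def weibull_reliability(t: float, eta: float, beta: float) -> float:
    """R(t) = exp(-(t/eta)**beta): probability that a unit survives to
    time t, with characteristic life eta and shape parameter beta."""
    return exp(-((t / eta) ** beta))

def weibull_mttf(eta: float, beta: float) -> float:
    """Mean time to failure: MTTF = eta * Gamma(1 + 1/beta)."""
    return eta * gamma(1.0 + 1.0 / beta)

# Illustrative wear-out-dominated hardware (beta = 2, eta = 5 years):
# survival probability at 3 years, and the mean lifetime.
print(weibull_reliability(3.0, 5.0, 2.0))   # exp(-(3/5)**2)
print(weibull_mttf(5.0, 2.0))               # 5 * Gamma(1.5)
```

Note that at t = η the reliability is e⁻¹ ≈ 0.368 regardless of β; it is the shape parameter, estimated from (accelerated) test data, that determines whether failures cluster early or late in life.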
2.3
Hardware reliability challenges for networked embedded systems
When we talk about hardware for NES, what exactly do we mean? Primarily we mean the foot-soldiers of the NES, the nodes, as these are the components that will be exposed to the external stresses - hubs and controllers will generally be housed in more protected conditions. What exactly, therefore, do we have in mind when we think about a node? For most NES researchers, the wireless node is the one that is usually considered, and we would tend to idealise nodes as devices that will be truly embedded, that is, invisible, unobtrusive and autonomous. A comprehensive survey of wireless node platforms is contained in the chapter by Barton elsewhere in this book. However, most of these platforms are primarily intended for research and are not optimised, in either design or reliability, for long-term practical deployment in a specific application. Some nodes may have been developed as part of a roadmap towards Hill’s “low-cost components operating on shoestring power budgets for years at a time in potentially hostile environments without hope of human intervention” [2], but currently most of them are not yet even planned to be reliable enough to meet this set of challenging criteria; few, if any, platform developers have set up a reliability improvement roadmap, integrating all the different facets of NES reliability, towards attaining such a goal. Even very large NES node-focused research programmes, such as the EU-FP7 IST project eCubes [14], do not include an explicit, whole-platform, integrated activity on design-for-reliability. Few of the existing platforms are even designed for true long-term embedding and few have been demonstrated in a long-term, truly “embedded”, application. This is not to criticise these programmes or to say that they do not consider reliability to be important; however, it does emphasise again the relative immaturity of NES node technology.
This poses a problem for the reliability engineer in that, while examining reliability issues in existing node platforms may be a useful exercise, extensive reliability studies will not yield results that are useful for any nodes intended for future, specific, truly-embedded applications. If we are to consider the reliability challenges related to such nodes, we therefore need to conceptualise what such nodes may “look like”.
Let’s take Hill’s criteria again as a starting point, since most NES researchers probably consider them the definition of an ideal node:
2.3.1
“Low cost components”
While NES nodes may not suffer the same low market-acceptance cost threshold of a few cents as RFID, all commentators seem to accept that there is an application-specific cost barrier, perhaps at less than 10 euro/dollars in volume production, for individual wireless sensor nodes. The phrases “wireless sensors”, “low cost” and “cheap” are also very frequently paired in publications4. What does this imply for the node components and systems integration technology? To answer this, we must first consider what researchers believe nodes should look like, and a common theme is that they should be as small as possible. Going back to one of the seminal researchers in the field, Kristofer Pister and his “Smart Dust” research at Berkeley, Pister’s 1999 paper [16] states “Size reduction is paramount, to make the nodes as inexpensive and easy-to-deploy as possible”, and almost all research groups involved in node development have a miniaturisation roadmap, some of which have been outlined by Barton in his chapter in this book. So, we can expect the node to be “small” and its components must, inevitably, be “smaller”. What are these components?
2.3.2
“Without hope of human intervention”
If we take this criterion, we will need a very heterogeneous set of components: sensors, sensor interfaces, microcontrollers, transceivers, antennae, batteries and, for truly autonomous long-term operation, energy harvesters. Yet, node researchers often talk about “e-Grains” and “e-Sugar-cubes” and “e-Seeds” and “Smart Dust” as if the future of node technology lies on an unavoidable path to ultimate miniaturisation. However, if we want a small node, the amount of space available for the electronic components, once we include unavoidably large batteries and energy harvesters, is very limited, and hardware systems integration becomes an even greater challenge for both packaging and reliability. Another desirable aspect of nodes appears to be that they should be “plug and play”. This implies a self-contained node that can be easily dropped into a given application and which will self-configure into a network with other nodes. For hardware systems integration, this further implies a “system in a package” [17] (SIP, sometimes also “system on a package”, SOP). SIP implies the integration of all of the components of an electronic system into a single miniature package that can be used either autonomously or as a subsystem component in a larger system.
4 While not a scientific survey or a guarantee of close pairing of the phrases, a search for “wireless sensors” AND “low cost” in Google Scholar yielded 2010 “scholarly” results and “wireless sensors” AND “cheap” yielded 627 at the time of writing (April 2008).
In NES, we combine both of these concepts in that the node is an autonomous
component in itself, but it is also a sub-component of the overall NES. SIP presents many challenges in terms of co-design and reliability [18], but the NES node is a particular challenge because of the heterogeneity of its internal components (perhaps leading to the concept of the heterogeneous SIP, or HSIP), the small size of the node, the necessity for long-term operation in diverse environments and the necessity for some of the components in the node to protrude into the outside world, whether for sensing, energy harvesting or wireless transmission. This goes against the reliability engineer’s ideal of what an electronic system should look like, which is most probably an egg, in which the electronic components are completely sealed, mechanically, electrically and environmentally, from the outside world.
2.3.3
“Hostile environments”
A real node externally presents multiple paths for moisture and contaminant ingress, and protrusions that are vulnerable to impacts and act as sources of mechanical stress concentration. This is a particular problem as NES researchers seem generally to expect to be able to deploy nodes almost anywhere, even in “hostile environments” and, as discussed earlier, the hostility of these environments may vary across the geography of the NES, requiring a node to function in “multiple hostile environments”. Internally, the node is a highly heterogeneous and highly compressed mix of components, with multiple materials having different thermal coefficients of expansion, multiple material adhesion interfaces having unknown adhesion properties, many opportunities for internal air-voids and moisture entrapment, components with sharp corners that act as locations for stress concentration, and numerous other undesirable reliability scenarios. The node structure provides challenges that cannot be easily met by current design-for-reliability tools, because of the heterogeneity of the structure and because of the limited availability of information on material interfacial adhesion or on the properties of materials at small dimensions. Considerable research is therefore required both in node packaging and in tools for node package design. We can hope, however, that the extensive global research on SIP/SOP design will yield improvements in design-for-reliability tools that will support more reliable node design. In the meantime, node design for embedding in any specific application remains a “best effort”, where confidence in reliability can only be confirmed by laboratory and field reliability testing. While this will not stop important methodology development in the field of redundancy, it will make it very difficult, for the foreseeable future, to plan accurate node redundancy strategies for practical NES deployments.
2.3.4
“Operating on shoestring power budgets for years at a time without hope of human intervention”
Batteries have been around for a very long time and nobody has yet succeeded in making them follow anything near a Moore’s law-type growth in capacity to keep up with developments in electronics miniaturisation. As one of the most vital of node
components, the battery is limited by physics and chemistry to an inescapable minimum size if the node is to transmit a given amount of information over a required distance in a required time throughout its lifetime. Even on “shoestring power budgets”, particularly if we consider the case of NES nodes dispersed over geographically large areas, long-range wireless data transmission demands a large power capacity and a correspondingly large battery. How do we quantify exactly what “large” means? Battery capacity sizing and battery lifetime prediction are vital for node reliability – get them wrong and nodes could stop working unpredictably anywhere in the NES, and preventative battery replacement cannot be planned efficiently without this information – yet little research has been devoted to this topic. It would be easy if we could make a general assumption along the lines of “I have a battery with a capacity of x mAhr and an average node current consumption of y mA and therefore my battery lifetime z = x/y hours”. Batteries only work like that in the case of a constant, relatively large, discharge current over a relatively short period of time – that is, discharge the battery quickly with a constant current and this simple calculation can be accurate. Unfortunately, NES node currents do not meet these criteria – NES nodes are generally envisioned to spend as much time as possible asleep at almost negligible currents and to wake up for intermittent periods of low-current data acquisition from sensors and intermittent high-current bursts for RF data transmission. This current consumption pattern can be seen if we refer to Fig. 11.1, “Typical power consumption of the Tyndall mote”, from the chapter in this book on “Power management, energy conversion and energy scavenging for smart systems” by O’Donnell and Wang.
While estimating the current consumption of a node is relatively straightforward, this pattern of long periods of low current with intermittent bursts of high current brings into play two complex aspects of battery chemistry: self-discharge [19] and the recovery effect [20]. All batteries self-discharge on the shelf at a rate that is influenced by their chemistry and structure and by external environmental factors such as temperature and humidity. The recovery effect means that a battery can deliver more than its nominal capacity if current is drawn in intermittent bursts rather than if the same total ampere-hours are drawn continuously. The recovery effect is also influenced by chemistry, structure, temperature and humidity and, while both effects have been studied for the larger batteries and short-term discharges of laptop and mobile phone batteries (for example, [21] and [22]), their influence in much smaller batteries, and under the extreme variations in current consumption and very long lifetimes characteristic of NES nodes, is almost unknown. The author is currently carrying out research on this topic and has obtained accurate battery lifetime predictions for coin batteries in wireless sensor nodes, but this work has so far completed only preliminary examinations of the full range of parameters and has already raised new unknowns that require further research, including this one: how can node battery lifetime be validly accelerated in laboratory tests so as to verify, in a reasonable time, a capacity/lifetime sufficient for "years at a time" at any point in a geographically dispersed network? This ignores the lack of long-term models for energy harvesters (the only existing power technology with the potential for "without hope of human intervention")
J. Barrett [email protected]

Fig. 11.1 Typical power consumption of the Tyndall mote. (Top panel: power consumption and generation [mA] versus time [sec], showing the power consumption of the wireless sensor node, energy generation from EH devices, average power consumption, and average power consumption with a larger duty cycle. Bottom panel: current consumption in active mode [mA] versus time [ms], showing the Rx-mode and Tx-mode bursts.)
recharging small batteries with widely fluctuating small levels of current. Without solutions to these research problems, there is currently no easy answer to the challenge of predicting battery capacity and lifetime in a NES and, therefore, it is going to remain difficult to plan preventative battery maintenance or to plan a redundancy programme that takes battery depletion into account.

A final aspect of node reliability, which is a consequence of the HSIP concept, is that the node represents an electrical signal integrity nightmare. In a very small three-dimensional space, the node packs highly sensitive sensors together with high-switching-speed digital processors generating harmonics into the GHz region, low power supply voltages, a high-power RF transmitter and an electrical generator possibly generating random amounts of power at random voltages requiring noisy voltage conversion. And all of this is to be packed into a sugar cube, grain or seed? Signal integrity (which includes EMC issues) is becoming one of the most important "soft" failure problems for digital systems. It will be a substantial problem for a NES node and, like the "hard" failure problems in reliability, it needs to be an important part of design for reliability. However, again, effective tools for simulation and prediction need to be put in place to enable this problem to be dealt with and, as many of the problems are shared with SIP technology, it is likely that parallel solutions can be found.
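Returning briefly to the battery-lifetime problem: one first-order refinement of the naive estimate is to add a self-discharge term. The sketch below is illustrative only (the 2 %/month rate and all other figures are assumed), and it deliberately omits the recovery effect, for which, as noted above, no validated NES-scale model yet exists:

```python
# First-order refinement of the naive lifetime estimate that adds battery
# self-discharge. Recovery effects are deliberately omitted: as the text
# notes, no validated model exists for NES-scale cells. All figures assumed.

def lifetime_hours(capacity_mah, avg_load_ma, self_discharge_pct_per_month):
    """Simulate hour-by-hour depletion: the node's load current plus a
    self-discharge current proportional to the remaining capacity."""
    remaining = capacity_mah
    sd_per_hour = self_discharge_pct_per_month / 100.0 / (30 * 24)
    hours = 0
    while remaining > 0:
        remaining -= avg_load_ma              # charge drawn by the node in 1 h
        remaining -= remaining * sd_per_hour  # self-discharge in 1 h
        hours += 1
    return hours

naive = 1000.0 / 0.0766                        # z = x/y, no self-discharge
with_sd = lifetime_hours(1000.0, 0.0766, 2.0)  # assumed 2 %/month rate
print(f"naive: {naive:.0f} h, with self-discharge: {with_sd} h")
```

Even this crude refinement shortens the predicted lifetime appreciably, which is one illustration of why the naive calculation cannot be trusted for multi-year deployments.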
3 Conclusions and Outlook
NES hardware presents some unique reliability challenges, some of which arise from the highly miniaturised nature of the nodes and the extreme heterogeneity of the components within them. Others arise from the nature of NES themselves, particularly their geographic distribution, which leads to identical nodes experiencing a possibly wide range of environmental stresses. Still others arise from the way in which NES are deployed and managed, with an emphasis on low cost to win acceptance in the marketplace and very long battery lives to reduce maintenance costs and increase node dependability. None of these challenges is unique in itself and, individually or in subsets, they arise in other application domains such as distributed systems, wireless networks, ad-hoc networks, portable electronics and SIP technology. The main challenge is that NES nodes present all of these problems simultaneously, with the added difficulty that there is no rush of claimants for ownership of the responsibility for overall NES reliability – and in this, NES node hardware reliability is important but still only a single element. Hardware reliability engineers can, with some confidence, say that solutions to node hardware reliability problems will eventually, and perhaps incrementally, be found, even if many of those solutions are not yet obvious. However, solving hardware reliability issues alone will not yield reliable NES. An integrated approach involving all layers of the stack is required. Starting from the application specifications for performance, reliability and cost, some elegant co-design tool (or at the very least some reliability Esperanto that would facilitate communication
between the very often mutually incomprehensible languages of those working at non-adjacent levels of the stack) must be developed that finds the best performance-reliability-cost balance among the actions that can be taken to optimise overall NES reliability. Cost is most often cited as one of the primary market-entry barriers for NES. Reliability follows closely behind, yet it does not seem to have attracted widespread notice in the research community as a NES-wide topic of importance. This chapter has attempted to draw attention both to its importance for the future of NES markets and also to its relevance as a fascinating subject for research in itself. It is also worth remembering that a version of the "rule of 10" that is commonly used in manufacturing and quality control also applies to the issue of reliability: for each successive stage of a technology development where a reliability issue is ignored, the cost of addressing it goes up by a factor of 10.
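The escalation implied by the rule of 10 is easy to tabulate; the stage names and base cost unit below are illustrative assumptions:

```python
# The "rule of 10" as simple arithmetic: the cost of addressing a reliability
# issue multiplies by 10 at each development stage where it is ignored.
# Stage names and the base cost unit are illustrative assumptions.

stages = ["design", "prototype", "production", "deployment", "field failure"]
base_cost = 1  # cost of fixing the issue at the design stage, arbitrary unit

for i, stage in enumerate(stages):
    print(f"{stage:>14}: {base_cost * 10 ** i:>6} units")
```

A defect ignored from design through to field failure thus costs four orders of magnitude more to address than it would have at the outset.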
References

1. Marcello Cinque, Domenico Cotroneo, Gianpaolo De Caro, Massimiliano Pelella. Reliability Requirements of Wireless Sensor Networks for Dynamic Structural Monitoring. Proc. of the International Workshop on Applied Software Reliability (WASR 2006), Philadelphia, USA, pp. 8–13. 2006.
2. Jason Hill, Mike Horton, Ralph Kling, Lakshman Krishnamurthy. The Platforms Enabling Wireless Sensor Networks. Communications of the ACM, Vol. 47, No. 6, pp. 41–46. June 2004.
3. Shan Lin, Jingbin Zhang, Gang Zhou, Lin Gu, Tian He, John A. Stankovic. ATPC: Adaptive Transmission Power Control for Wireless Sensor Networks. Proceedings of the 4th International Conference on Embedded Networked Sensor Systems, Boulder, Colorado, USA, pp. 223–236. 2006.
4. Prabal Dutta, Jonathan Hui, Jaein Jeong, Sukun Kim, Cory Sharp, Jay Taneja, Gilman Tolle, Kamin Whitehouse, David Culler. Trio: Enabling Sustainable and Scalable Outdoor Wireless Sensor Network Deployments. Proceedings of the Fifth International Conference on Information Processing in Sensor Networks, Nashville, Tennessee, USA, pp. 407–415. April 2006.
5. K. Verma, A. Srividya, R. S. Prabhu Gaonkar. Fuzzy-Reliability Engineering: Concepts and Applications. Narosa. ISBN 978-81-7319-669-0. 2007.
6. Kenneth P. Birman. Reliable Distributed Systems: Technologies, Web Services, and Applications. Springer. ISBN 0387215093. 2005.
7. Dana Crowe, Alec Feinberg. Design for Reliability. CRC Press. ISBN 084931111X. 2001.
8. U. Daya Perera. Reliability Index – A Method to Predict Failure Rate and Monitor Maturity of Mobile Phones. Reliability and Maintainability Symposium, 2006 (RAMS '06), pp. 234–238. 23–26 Jan. 2006.
9. Lee, S.-B.; Katz, A.; Hillman, C. Getting the quality and reliability terminology straight. IEEE Transactions on Components, Packaging, and Manufacturing Technology, Part A, vol. 21, no. 3, pp. 521–523. Sep. 1998.
10. Pecht, Michael; Dasgupta, Abhijit; Barker, Donald; Leonard, Charles. The reliability physics approach to failure prediction modelling. Quality and Reliability Engineering International, Vol. 6, pp. 267–273. Sept.–Oct. 1990.
11. Spencer, James L. The Highs and Lows of Reliability Predictions. Proceedings of the Annual Reliability & Maintainability Symposium, Las Vegas, Nevada, USA, pp. 156–162. 1986.
12. Paul A. Tobias, David C. Trindade. Applied Reliability. CRC Press. ISBN 0442004699. 1995.
13. Michael Pecht. Product Reliability, Maintainability, and Supportability Handbook. CRC Press. ISBN 0849394570. 1995.
14. http://ecubes.epfl.ch/public/
15. Nath, B.; Reynolds, F.; Want, R. RFID Technology and Applications. IEEE Pervasive Computing, vol. 5, no. 1, pp. 22–24. January–March 2006.
16. Kahn, J. M.; Katz, R. H.; Pister, K. S. Next century challenges: mobile networking for "Smart Dust". Proceedings of the 5th Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom '99), Seattle, Washington, USA, pp. 271–278. August 15–19, 1999.
17. Tummala, R.R.; Swaminathan, M.; Tentzeris, M.M.; Laskar, J.; Gee-Kung Chang; Sitaraman, S.; Keezer, D.; Guidotti, D.; Zhaoran Huang; Kyutae Lim; Lixi Wan; Bhattacharya, S.K.; Sundaram, V.; Fuhan Liu; Raj, P.M. The SOP for miniaturized, mixed-signal computing, communication, and consumer systems of the next decade. IEEE Transactions on Advanced Packaging, vol. 27, no. 2, pp. 250–267. May 2004.
18. Madisetti, V.K. Electronic system, platform, and package codesign. IEEE Design & Test of Computers, vol. 23, no. 3, pp. 220–233. May–June 2006.
19. Thomas Roy Crompton. Battery Reference Book. Newnes. ISBN 075064625X. 2000.
20. Chiasserini, C.F.; Rao, R.R. A model for battery pulsed discharge with recovery effect. IEEE Wireless Communications and Networking Conference, 1999 (WCNC '99), pp. 636–639. 1999.
21. Daler N. Rakhmatov, Sarma B. K. Vrudhula. An Analytical High-Level Battery Model for Use in Energy Management of Portable Electronic Systems. International Conference on Computer-Aided Design (ICCAD '01), pp. 488–494. 2001.
22. Gomadam, P.M.; Weidner, J.W.; Dougal, R.A.; White, R.E. Mathematical modeling of lithium-ion and nickel battery systems. Journal of Power Sources, Volume 110, Number 2, pp. 267–284. August 2002.
Part VII
System Co-Design

Co-Design Processes for Pervasive Systems
1.1 Summary
A systems-led design and innovation process typically requires a resolution of issues that overlap two or more domains of expertise. A simple example to cite, though perhaps less simple to implement, is that between the hardware and software in embedded systems. A number of co-design processes have been, and continue to be, developed to answer this particular embedded systems challenge. In fact, co-design approaches can be applied between two associated areas of hardware (e.g. chip-package co-design) or software. As a result, most of us working in, or researching, the area of IT systems are at least notionally aware of a co-design process or programme. Co-design is best presented as a philosophy that supports genuine collaboration and, in particular, a balanced process that moves beyond the relatively static approach of partners delivering know-how to each other and towards integrated co-innovation initiatives. It is not atypical for co-design to feature as a methodology when significant whole-systems issues, such as the challenges in power management or reliability discussed previously in Part VI, are defined as performance requirements for an application. This part investigates co-design from two perspectives. Chapter 12 investigates co-design largely from the perspective of hardware systems and their use as infrastructure in building smart objects. The particular example provided by the concept of Augmented Materials, as discussed in Chapter 2, provides a framework for this discussion. Chapter 13 broadens this to an extent by analyzing the area of co-design for pervasive systems and arguing that this should focus upon context awareness and creating the ability to integrate information derived from a wide number of different sources.
1.2 Relevance to Microsystems

The application of co-design to optimise technology platforms that integrate MEMS sensing devices with their associated control and calibration circuitry represents an immediate example of the relevance of these methodologies to Microsystems.
Furthermore, it should be anticipated that the co-design ‘interfaces’ between hardware and software implementations of particular Ambient Intelligence applications may well be relatively deep, allowing for clearly defined flexibilities and constraints within the infrastructure. This may well affect the nature of the Microsystems devices themselves and will certainly influence the nature of the subsystems within which they are integrated.
1.3 Recommended References
As 'co-design' is a broadly used phrase, there are references to it within many different research domains. One area, which is well established, is that of hardware-software co-design for embedded systems and, in particular, the tools that have been developed to support this process. Two relevant references are provided that introduce this area of research in more detail. A number of other research programmes have also served to illustrate the nature of co-design approaches in the domains of Ambient Intelligence and Pervasive Computing. Of these, two that are of particular interest are the projects framed within the European Commission's Disappearing Computer initiative and the Grand Challenge that is the 'Smart Dust' concept.

1. R. Ernst. Co-design of embedded systems: status and trends. IEEE Design & Test of Computers, pp. 45–54. Apr.–Jun. 1998.
2. G. De Micheli, W. Wolf, R. Ernst. Readings in Hardware/Software Co-Design. Morgan Kaufmann Publishers. ISBN 1558607021. 2001.
3. The Disappearing Computer initiative: http://www.disappearing-computer.net/
4. B. Warneke, M. Last, B. Liebowitz, K.S.J. Pister. Smart Dust: Communicating with a Cubic-Millimeter Computer. Computer, vol. 34, pp. 44–51. 2001.
5. The Smart Dust project: http://robotics.eecs.berkeley.edu/~pister/SmartDust/
Chapter 12
Co-Design: From Electronic Substrates to Smart Objects
Kieran Delaney, Jian Liang
Abstract Everyone is different. It is a fact echoed by the great, shifting diversity of our world today. It is reflected in our use of the everyday tools around us and even in our understanding of the words we use. For Information and Communications Technology (ICT) research, the term ‘co-design’ is often employed to define the shared design process that exists (particularly in embedded systems) between hardware and software. In fact ‘co-design’ relates to any design process; in effect it is a philosophy, one in which the goal is to involve all perspectives that are relevant. The rationale is simple, even though the practice is not: the quality of a design improves if all of the stakeholders’ interests are considered. This chapter discusses co-design as a process, relating it to the concept of Augmented Materials, outlining the challenges relevant at material- and object-level in creating optimal solutions. Keywords Co-design, Embedded Systems, Augmented Materials, Co-synthesis, Integral Passive Components, Smart Objects, Chip-Package Co-Design, Disappearing Computer, Smart Dust.
1 Introduction
The focus of European and U.S. research in wireless and embedded systems is increasingly turning towards the development of Ambient Intelligence, or AmI, with programmes devoted to addressing specific enabling technologies, ranging from nano-systems research through photonics solutions for wideband transmission to augmented user interfaces. Programmes of research like this are tightly focussed and, thus, highly cross-disciplinary forms of collaboration are relatively rare, though they are becoming increasingly evident. One such collaboration is the work performed in the "Disappearing Computer" initiative [1] where, at least in part,
Centre for Adaptive Wireless Systems, Cork Institute of Technology, Rossa Avenue, Bishopstown, Cork, Ireland
K. Delaney, Ambient Intelligence with Microsystems, © Springer 2008
expertise in hardware, software (and user-design) joined proactively. Even so, highly concurrent engineering approaches were not broadly applied; instead, partners tended to deliver services to one another. In certain cases, however, a common agenda has developed, where co-design yields to a process of 'co-innovation'. One of the most relevant examples worldwide is the "Smart Dust" project [2, 3] at the University of California, Berkeley. In this project, the goal was to develop a system of wireless sensor modules where each unit is the size of a mote of dust. The result was a series of innovations in miniaturised systems, the parallel development of low-power operating systems (e.g. TinyOS) and, as the project captured the imagination of many researchers, an ongoing sequence of other initiatives as well. The interface between hardware and software typically offers a focal point for co-design activities; thus, there are many methods in use for embedded systems development. Most of those approaches focus on solving design problems at system level, on hardware and software synthesis (including co-synthesis) and on hardware/software partitioning. The objective is to ensure improvements in cost-efficiency, time-to-market, more rapid testing of software and hardware, and so forth. In the following section, selected approaches to embedded systems co-design are described to give a quick picture of how the fundamental theories and models can be used in embedded systems development.
1.1 Established methods of co-design
Traditionally, the development process for embedded systems can be divided into two separate processes, at hardware and software level, once the system functions have been partitioned. Software developers must await completion of the hardware system design, hardware fabrication, system build, integration and debug; only then may they begin their tasks based on the specified hardware environment. If there is any conflict between the hardware specification and software function, then this either compromises functionality or requires a redesign of the hardware. More recently, however, digital hardware design has become increasingly like software design. Hardware circuits can be described using modeling or programming languages. They can be validated and implemented by executing software programs conceived for that particular hardware design. The progress of silicon IC processing is also having an effect. Current ICs are evolving quickly and can now incorporate more than one processor core and memory arrays on a single substrate. This 'System-on-Chip' could contain a significant amount of embedded software, which provides flexibility in product evolution and differentiation. Thus, the design of these systems now requires practitioners to be knowledgeable in both hardware and software domains, understanding what is necessary in order to make good design tradeoffs. Numerous approaches have emerged to address this, and other types of challenge, in embedded systems. This has driven the development of a range of associated
design tools and methodologies, which are capable of completing the design of mixed hardware/software systems starting from system-level specification. These are called co-design or embedded system design tools and they have enabled significant increases in productivity [4]. Given the complexity of the process it is now typical for system architects, customers, and marketing departments to develop requirement definitions and system specifications together. The goals include defining an architecture consisting of cooperating system functions that supports concurrent hardware and software design, engaging the hardware and software developers, integrating hardware and software and then facilitating effective testing, as well as employing solutions that reuse components from previous designs in order to improve productivity and reduce design risk. A constant goal in this process is to make it increasingly integrated and coherent; in terms of the required enabling tools, this is an integrated computer-aided design space exploration and co-synthesis approach.
2 Co-Design for Augmented Materials
A central premise of augmented materials is a systemic programming technique that enables the materials to "know" themselves. The determination of what it means for a material to "know" itself is rooted in a number of parameter sets:

● Physical parameters that in some form alter the nature, or equilibrium state, of the material (temperature changes, flexure, compression, etc.)
● Reference measurements that are a factor in the behavioural description, or definition, of the "self-knowledge" of the material (motion vectors, vibration, shock, impact, etc.)
● The effect of processing, fabrication and assembly steps applied to the material to make objects and the subsequent importance of these alterations on the functionality of a constructed object.
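As a thought experiment, the three parameter sets above could be captured in a simple data structure. The sketch below is purely hypothetical: all field names are illustrative assumptions, not part of any published Augmented Materials specification.

```python
# A hypothetical sketch of how the three parameter sets that define a
# material's "self-knowledge" might be represented. All field names are
# illustrative assumptions, not part of the Augmented Materials concept.
from dataclasses import dataclass, field

@dataclass
class PhysicalState:
    """Parameters that alter the nature or equilibrium state of the material."""
    temperature_c: float = 20.0
    flexure: float = 0.0
    compression: float = 0.0

@dataclass
class ReferenceMeasurements:
    """Measurements feeding the behavioural description of 'self-knowledge'."""
    motion_vector: tuple = (0.0, 0.0, 0.0)
    vibration: float = 0.0
    shock_events: int = 0

@dataclass
class MaterialSelfKnowledge:
    physical: PhysicalState = field(default_factory=PhysicalState)
    reference: ReferenceMeasurements = field(default_factory=ReferenceMeasurements)
    # Record of processing/fabrication/assembly steps applied to the material,
    # and thus of alterations that affect the constructed object's function.
    process_history: list = field(default_factory=list)

m = MaterialSelfKnowledge()
m.process_history.append("lamination")
print(m.physical.temperature_c, m.process_history)
```

The point of the sketch is only that each of the three parameter sets maps naturally onto a distinct, queryable record within the material.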
The successful development of an augmented material will only be achieved through an effective co-design process. The focal points of this co-design process for augmented materials will function (as appropriate) at object-level, at network-level and at element-level:

Co-design at Object level: Within the materials construct, the network of individual elements presents a diverse set of possibilities. It can be assumed that each element shares a common core, but the population of sensors, actuators and other devices can vary widely. Because of this diversity the system will exhibit open-ended behaviour; this will mean that augmented materials are not amenable to direct programming solutions of the kind normally found in embedded systems. Thus, we adopt an approach based around rich, scalable, self-organising context models and inference. The goal is to integrate programming into the process of
manufacturing the augmented materials, and to capture clearly the relationship between the factors affecting the material and its behavior.

Co-design at Network level: The required nature and role of the sensor subsystems (built within an element) provide an imperative to develop an autonomic management method within the systems software at network-level. In an effective system, this method would incorporate management of critical performance issues; these could include administration of global references, such as the 3-D orientation of the entire material, determination of appropriate optimization strategies for utilizing the available energy and the creation of a framework for learning and monitoring the lifetime behaviour of elements within the network.

Co-design at Element level: The requirement for these materials to "know" themselves affects not only the nature of the network, but also the architecture of the elements themselves. The composition of the elements described will be based, at least in part, upon the functional requirements for operation and management of their component sensors. To fulfill the requirements of the physical level, measurements of the material behavior will be needed.
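In outline, the network-level autonomic management method described above might look like the following. This is a purely hypothetical sketch: the class, method names and thresholds are illustrative assumptions, not part of any published implementation.

```python
# A hypothetical interface sketch for the network-level autonomic manager
# described in the text. Class and method names are illustrative assumptions.

class AutonomicNetworkManager:
    def __init__(self, elements):
        self.elements = elements            # element id -> reported state dict
        self.lifetime_log = {eid: [] for eid in elements}

    def global_orientation(self):
        """Administer a global reference: average the 3-D orientation
        vectors reported by the individual elements."""
        vecs = [e["orientation"] for e in self.elements.values()]
        n = len(vecs)
        return tuple(sum(v[i] for v in vecs) / n for i in range(3))

    def energy_strategy(self):
        """Pick an optimisation strategy from the available energy:
        duty-cycle harder when the mean reserve is low (threshold assumed)."""
        mean_mj = sum(e["energy_mj"] for e in self.elements.values()) / len(self.elements)
        return "aggressive-sleep" if mean_mj < 100 else "normal"

    def record_lifetime(self, eid, state):
        """Framework hook for learning/monitoring element behaviour."""
        self.lifetime_log[eid].append(state)

mgr = AutonomicNetworkManager({
    1: {"orientation": (1.0, 0.0, 0.0), "energy_mj": 250.0},
    2: {"orientation": (0.0, 1.0, 0.0), "energy_mj": 50.0},
})
print(mgr.global_orientation(), mgr.energy_strategy())
```

The three methods correspond to the three management concerns named in the text: global references, energy optimisation, and lifetime monitoring.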
2.1 Co-design at Object-level
The issue of Co-design at Object-level is complex. In this section, we provide an insight into a practical process of co-design that took place within the eGadgets project [5], which was part of the Disappearing Computer Programme [1]. Co-design at object-level, with reference to context and pervasive systems, is further discussed in Chapter 13.
2.1.1 A co-design process
The European Commission runs numerous research initiatives as part of its large-scale Framework Programme. A central part of these programmatic activities is to foster research collaborations across the continent. Throughout the 1990s and the early 2000s, many research groups in Ireland sought to harness these collaborations in order to build up their funding base and sustain their activities. The National Microelectronics Research Centre, or NMRC (now known as the Tyndall National Institute), situated in Cork, a city on the south coast of the country, was particularly effective in this regard, building up a presence in numerous microelectronics-related research domains and, in the process, developing a large network of European partners. This was based upon a high level of competence in microelectronic materials and hybrid systems, including silicon circuit design, silicon micro-sensors and microelectronic packaging, amongst others. In 2000, the Centre targeted a new area of research within a specialised European Commission programme known under the title 'Future and Emerging Technologies', or FET. This programme was, and remains, a nursery for research of a radical, or
high-risk, nature that offers the promise of significant impact to European society or its industry. Its new initiative was the 'Disappearing Computer', a title that was intended in many ways to be taken literally. One of the key attractions of this programme to researchers in the microelectronics domain was its inherent requirement for highly miniaturised devices.
2.1.2 The Disappearing Computer
The goal of the Disappearing Computer initiative [1] was to actively seed the development of information technology that can be diffused into everyday objects and settings. It attempted to address the pure technology issues, but also to "actively investigate how this can lead to new ways of supporting and enhancing people's lives in ways that are beyond what is possible with the computer today". Specifically, the initiative focused on three interlinked objectives:

● Creating new software and hardware architectures that are used to build information artefacts and can be integrated into everyday objects.
● Investigating mechanisms for engineering new behaviour and new functionality using collections of collaborative artefacts.
● Investigating how to ensure that people's experience in these new environments is coherent and engaging by developing new approaches to designing collections of artefacts in everyday settings.
Rather than partitioning these significant challenges within separate strands, the initiative was structured as a group of seventeen overlapping projects, each primarily addressing an aspect of one of the individual challenges while fostering links to related activities in other projects. The goal was to encourage synergies. The NMRC (Tyndall) became involved in two of these projects: "Fibre Computing" (FiCom), where it was playing to its strengths in manipulating a novel silicon platform as a medium for sensing, and "Extrovert Gadgets" (eGadgets). In eGadgets, NMRC researchers, of whom the first author was one, were about to embark on a steep learning curve on the nature of present and future systems. The "Extrovert Gadgets" project was based upon a concept originated by Professor Achilles Kameas. Its primary goal was "to provide a technological framework that will engage and assist ordinary people in composing, (re)configuring or using systems of computationally enabled everyday objects, which are able to communicate using wireless networks". That it succeeded in proving its core concept is in no small part due to the fact that each of its three partners – Professor Kameas' Computer Technology Institute in Greece, the University of Essex in the United Kingdom (which investigated the issue of intelligence and intelligent agents in this project) and the NMRC – determined a route that maximised communications and investigated areas of clear mutual misunderstanding. Many words, including 'system' and 'architecture', were freely employed until it became clear that each partner derived an entirely different meaning from them. The new approaches, such as the use of 'plugs' and 'synapses',
created to generate functional relationships between smart everyday objects (i.e. artifacts), were debated, sometimes passionately, until a commonly understood, robust framework for the project was in operation; at its core was the development of a Gadgetware Architectural Style and the implementation of a successful demonstration [6]. The effectiveness of this innovation process depends primarily upon maintaining links between the multiple disciplines involved in realising these future systems. Recognising this, the European Commission, in its investigation of Ambient Intelligence (the European perspective on Ubiquitous Computing) proceeded to focus upon scenario development as a means of fostering coherence. This was defined at a high level through general consumer, traffic, health and business scenarios, but also at the level of projects such as eGadgets, where the limits of the demonstrators were explored in the light of future commercial applications. In this context, aspects of the future focus of a number of research domains, including microsystems, can benefit from drivers found in these scenarios through the user requirements for interacting with a ‘disappearing computer’ system.
2.2 Co-design at 'Network' level
In this context, the term network overlaps subsystem and network design issues, as the network in question is embedded in a material. Research dedicated to the study of co-design techniques for embedded and wireless sensing systems has been undertaken, as both technologies are realised as a fusion of hardware and software systems. Typically, the specification of the hardware and software designs is completed separately, providing scope for incompatibility at the hardware-software boundary and resulting in sub-optimal solutions. Specific co-design methodologies have tended to focus upon reconfigurable systems, and design tools have been implemented to address co-designing these systems, including aspect-oriented system design (AOSD) [7], generally applied to middleware. Integrated design and system-level solutions for autonomous devices have been practically applied in a number of cases, one good example being the Smart Dust project at the University of California, Berkeley [2, 3], where power limitations in particular have dictated a requirement for synergistic development of the hardware and software enabling technologies. Other research initiatives, such as the WiseNET ultra-low-power platform implementation [8], have also taken this approach. These and other examples [9, 10] provide strong evidence that a fully coherent and multidisciplinary co-design process can yield significant successes, both in foreseeable and unpredictable ways. The integrated design approach can be greatly effective in bridging semantic variances between independent disciplines, particularly in the context of the complexities innate in real-world environments.
2.3 Co-design at Element-level
Co-design for high-density electronic packaging has not been well highlighted since, in the past, it was firmly on the hardware side of the design process. Since the advent of 'System-on-Chip' and 'System-in-Package' technologies, this has tended to change, with embedded systems design methodologies being elaborated to handle the additional complexity. For augmented materials, the focus is primarily upon a 'System-in-Package' method, where the package material and the 'host' material merge. As a design challenge, this not only includes co-design issues at the hardware-software interface, it also incorporates aspects of the design process at the packaging level that are traditionally considered the domain of hardware design tools; using the material as a programming interface requires a new interpretation of what may be required in a genuine co-design process. Hardware miniaturisation applies numerous enabling technologies for high-density integration where, in fact, the packaging material is largely removed. One targeted form factor is a cube of stacked ICs; this is a 3-D Multi-chip Module (MCM), sometimes called an MCM-V (where the V is 'vertical'). A module like this will include bare die versions of commercially available IC microprocessors, wireless chipsets, and micro-sensors. The assembly processes will be based upon such methods as flip-chip bonding, wafer-level packaging, lamination, etc. The design of these 'packages' is significantly more complex than previous formats, requiring a stronger understanding of material fabrication issues. For example, in these packages, rework to correct assembly errors is no longer generally feasible, bringing penalties in production as the integrated nature of the module means fully functional, possibly costly devices are lost. It also places a greater emphasis upon "Right First Time" design.
2.3.1
“Right First Time” Design
Hardware design tools have cost and performance targets that are built around optimizing functional efficiency and minimizing iterations. This is captured as a drive to create the “Right First Time” design. In simple terms, the primary considerations are performance, cost and volume. However, it is accepted that problems in manufacturing, test and reliability can be created at the design stage, slowing the process and costing money in an industry with increasingly narrow margins. Explicit tools exist to target customer requirements in “Design for Manufacturing”, “Design for Test” and “Design for Reliability”; in fact, the term “Design for X” has become established to allow for a multitude of specialist design targets and tools. In reality, this is an example of a situation where co-design processes will take place, though perhaps the term is not explicitly used. A significant number of converging and diverging influences are at play when one considers the implementation of a “Right First Time” design. Fig. 12.1 shows a number of technology, engineering and business areas, each seeking an optimum design, though, as is the
K. Delaney, J. Liang
Fig. 12.1 A multitude of stakeholders may be part of the hardware fabrication and assembly process, requiring co-design if there is to be any hope that a ‘right first time’ design is feasible
case in most technologies, the resultant interaction of targets requires trade-offs and, thus, co-design. Sometimes this can be an explicit process, as in the case of chip-package co-design.
2.3.2
Chip-Package Co-design
Chip-package co-design [11] seeks to exploit the best features of the silicon IC and the substrate/packaging around it in order to optimise performance. It is well established that improvements in silicon fabrication technology have tended to outpace the capabilities of packaging to support them. Co-design is one approach that can be used to overcome certain emerging difficulties in this area. For example, optimum performance in clock speed is determined by signal delay and clock skew; the signal delay is a result of the combined delay from the gates and the interconnect. Gate delay is decreasing as a result of improved silicon densities; however, the on-chip interconnect delay is increasing due to the reduction in the (aluminium or copper) metal track cross-section. Below the 130 nm technology node, the on-chip interconnect delay tends to become a greater factor. Using package interconnect can provide about 1000 times less wire resistance and about 10 times less wire capacitance than the on-chip equivalent. Thus, long interconnections - like the global clock tree - can access a lower RC delay if the interconnection is performed off-chip. Of course, this requires a close interaction between the IC and packaging design, because the functionality will now be distributed between the IC level and the packaging level.
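The trade-off above can be made concrete with a first-order RC estimate. The sketch below uses assumed, order-of-magnitude per-millimetre parasitics (the resistance and capacitance values are illustrative, not taken from this chapter) and the usual 0.38·RC factor for a distributed line:

```python
# First-order illustration of the on-chip vs. off-chip interconnect trade-off.
# All numbers are assumed, order-of-magnitude values for illustration only.

def rc_delay(r_per_mm, c_per_mm, length_mm):
    """Distributed-RC delay estimate: 0.38 * R_total * C_total."""
    r = r_per_mm * length_mm
    c = c_per_mm * length_mm
    return 0.38 * r * c

# Assumed parasitics: package wiring with ~1000x less resistance and
# ~10x less capacitance per mm than a thin on-chip track.
on_chip = rc_delay(r_per_mm=100.0, c_per_mm=0.2e-12, length_mm=10)   # 100 ohm/mm, 0.2 pF/mm
package = rc_delay(r_per_mm=0.1, c_per_mm=0.02e-12, length_mm=10)    # 0.1 ohm/mm, 0.02 pF/mm

print(f"on-chip global route : {on_chip * 1e9:.2f} ns")
print(f"package-level route  : {package * 1e9:.6f} ns")
```

With these toy figures the off-chip route is four orders of magnitude faster (1000× in R and 10× in C), which is why routing the global clock tree at package level can be attractive.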
These interactions are a key feature of blurring the interface between different hardware layers. In creating a similar challenge through the Augmented Materials concept, where the design interface between materials, packaging and software deepens, the link between the limits of a material’s capability and the user’s design requirements is now more directly integrated. This really means that the entire process, from element fabrication to material integration to object assembly, will require a co-design methodology. Two examples from the area of high density integration highlight the potential issues around creating an augmented material.
2.3.3
Packaging Stresses
A silicon chip embedded in a plastic packaging material will create stresses in that packaging material. The nature of the assembly process, where the plastic is cured, typically at elevated temperatures, over or around the silicon chip, creates compression and shear stresses that can have a significant effect on the performance and reliability of the package (See Fig. 12.2). These stresses are a result of mismatches between the material properties of the silicon and those of the plastic. These problems are resolved for specific technologies and processes, using scientific, engineering and iterative approaches, only to re-emerge when changes (including design changes) are made.
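The mismatch mechanism can be approximated, to first order, by the fully-constrained bi-material relation σ ≈ E·Δα·ΔT. The sketch below uses assumed, typical-order material values, not data from this chapter; a real analysis would rely on finite element modeling of the actual package geometry:

```python
# First-order estimate of the thermo-mechanical stress arising from the
# thermal-expansion mismatch between a silicon die and its encapsulant.
# All material values below are assumed, typical-order figures.

def mismatch_stress(e_modulus_pa, cte_a, cte_b, delta_t):
    """Stress (Pa) for a fully constrained bi-material pair: E * |d_alpha| * dT."""
    return e_modulus_pa * abs(cte_a - cte_b) * delta_t

CTE_SILICON = 2.6e-6   # 1/K (assumed)
CTE_MOLDING = 15e-6    # 1/K (assumed epoxy molding compound)
E_MOLDING = 20e9       # Pa  (assumed)
DELTA_T = 150.0        # K, cooling from cure temperature to ambient (assumed)

sigma = mismatch_stress(E_MOLDING, CTE_MOLDING, CTE_SILICON, DELTA_T)
print(f"estimated mismatch stress: {sigma / 1e6:.1f} MPa")
```

Even this crude model yields tens of MPa, enough to motivate the die-corner stress concentrations shown in Fig. 12.2.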
2.3.4
Integration of Passive Components
Similar performance trade-offs are seen in another domain of microelectronics, where passive components are being integrated (or buried) into the substrate they would traditionally have been bonded onto [12]. For example, one integrated passive component technology was developed using low temperature co-fired ceramic (LTCC) substrates. A series of new fabrication methods had to be developed to achieve a reproducible buried capacitor; the process used arrays of high permittivity inserts to reach the target tolerances while maintaining the physical integrity of the substrate.
Fig. 12.2 (Left) A finite element model of the stresses on a silicon die in a plastic package and (Right) a stress concentration showing a crack in the plastic radiating from the die corner
To resolve stress effects within the ceramic circuit, new materials were developed. A mismatch in the shrinkage rates between the standard ceramic and that of the high permittivity ceramic was managed through an adaptation of the fabrication process. During the assembly sequence, an array of high permittivity inserts is placed in slots made in a layer of standard substrate (See Fig. 12.3), which is then laminated within the substrate itself and sintered at a temperature of about 900°C. Since the shrinkage during sintering of the standard ceramic is at a higher rate than that of its high permittivity counterpart, the volume of the insert is offset to match the volume of the slot after the sintering process is completed. The resultant capacitor performance was correlated with the pressures used in laminating the layers of ceramic together. Low pressures resulted in voiding between the insert and the capacitive metal plate, reducing the resultant capacitance (See Fig. 12.4). In fact, voiding was seen at some level for all pressures applied;
Fig. 12.3 An exploded diagram of a buried capacitor
Fig. 12.4 A geometrical analysis of a buried ceramic capacitor, showing (left) voiding in white, in the ‘insert’ technology seen using scanning acoustic microscopy and (right) a micro-section of an ‘insert’ showing evidence of a shrinkage effect in the sintering process
however, at the higher lamination pressures the impact was small and the capacitance targets were achieved. The final integrated component offered reasonable capacitance ranges and was more reliable than its surface-mounted equivalent.
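As a rough illustration of the two calculations implied by this process, the sketch below estimates, with invented numbers rather than the project's actual data, the pre-fire insert size needed so that insert and slot shrink to the same dimension, and the ideal parallel-plate capacitance of the resulting buried capacitor:

```python
# Sketch of the buried-capacitor design calculations. All dimensions,
# shrinkage rates and permittivities are assumed for illustration.

EPS0 = 8.854e-12  # F/m, vacuum permittivity

def prefire_insert_size(slot_size_mm, shrink_std, shrink_insert):
    """Pre-fire insert dimension so insert and slot match after sintering."""
    return slot_size_mm * (1 - shrink_std) / (1 - shrink_insert)

def plate_capacitance(eps_r, area_m2, thickness_m):
    """Ideal parallel-plate capacitance, ignoring fringing fields and voids."""
    return EPS0 * eps_r * area_m2 / thickness_m

# Assumed: standard LTCC shrinks ~15% linearly, the high-permittivity insert
# only ~12%, so the insert must start slightly smaller than the slot.
insert_mm = prefire_insert_size(slot_size_mm=2.0, shrink_std=0.15, shrink_insert=0.12)
print(f"pre-fire insert size: {insert_mm:.3f} mm for a 2.000 mm slot")

# Assumed: eps_r = 2000 insert, 1.7 mm x 1.7 mm plates, 50 um dielectric.
c = plate_capacitance(eps_r=2000, area_m2=(1.7e-3) ** 2, thickness_m=50e-6)
print(f"ideal buried capacitance: {c * 1e9:.2f} nF")
```

The second calculation also shows why voiding matters: any air gap between insert and plate effectively adds a low-permittivity capacitor in series, pulling the realised value below this ideal figure.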
2.3.5
Summary
Both examples illustrate that a material parameter mismatch can have implications for how a package subsystem is fabricated and that these issues can impact upon the design process. These types of challenges were addressed in each case through investigative research by experts in the manufacture of the respective packaging formats. In this context, modeling the performance of the packages and substrates from an electrical, thermal and thermo-mechanical perspective is a key support in determining effective design solutions. Analytical approaches are used, but finite element modeling is now a core tool for practitioners in predicting effects like those summarised above. The approaches proposed for creating Smart Objects (and, in particular, the concept of Augmented Materials itself) will need to include these types of materials trade-offs in the body of knowledge that these materials contain. “Self-knowledge” will include the ability to ascertain the limits of the changes that are possible in building, and subsequently rebuilding, with these materials. This not only requires the equivalent of an integrated computer-aided design space to be developed; the central innovation of Augmented Materials, that its fabrication (and object assembly) steps will form a programming language for the material, requires that aspects of this process be available to the material in real-time, as something analogous to a ‘material digital memory’. This is particularly noteworthy in the context of opportunities that may be provided by, for example, the rapid-prototyping initiatives that are currently growing through internet-driven collaborations [13, 14]. In these programmes, there are many users collaborating together, the manufacturer and user may be the same person, and both innovation and co-design are integral parts of the process.
The realization of a functional suite of augmented materials, with appropriate supporting design tools, could provide potentially highly productive synergies with these forms of innovation process.
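As a purely hypothetical illustration of what a ‘material digital memory’ might look like in software, the sketch below logs fabrication and assembly steps and checks proposed process steps against the material's stored limits; every name, field and limit value here is invented, and nothing in this sketch is prescribed by the chapter:

```python
from dataclasses import dataclass, field

@dataclass
class MaterialMemory:
    """Hypothetical per-material log of fabrication steps and known limits."""
    limits: dict                          # e.g. {"cure_temp_c": 180.0} (invented)
    history: list = field(default_factory=list)

    def record(self, step: str, **params):
        """Log a fabrication or assembly step and its process parameters."""
        self.history.append((step, params))

    def permits(self, parameter: str, value: float) -> bool:
        """Check a proposed process value against the material's stored limits."""
        limit = self.limits.get(parameter)
        return limit is None or value <= limit

# The material 'remembers' how it was built and what it can tolerate.
m = MaterialMemory(limits={"cure_temp_c": 180.0})
m.record("laminate", pressure_mpa=3.0)
m.record("cure", temp_c=150.0)
print(m.permits("cure_temp_c", 150.0))   # True: within the material's limits
print(m.permits("cure_temp_c", 220.0))   # False: this step would exceed them
```

The point of the sketch is only that the fabrication history and the limit checks live with the material itself, so that rebuilding steps can be validated in real-time.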
3
Conclusions
Embedded systems co-design has progressed rapidly in the recent past to provide tools that reduce costs, shorten time-to-market, and streamline testing and assembly processes. More stakeholders have become involved as the tools have become more sophisticated. This trend has expanded into areas of research, such as the Disappearing Computer and the Smart Dust initiative, yielding generally positive
and influential results. There is a consensus that this process of co-design, supported by a growing array of inter-linkable tools, should continue to broaden, particularly in driving the creation of complex systems. For Augmented Materials, there is the significant challenge of drawing together a co-design process that integrates issues at the material- and element-level (through the design of the sensor elements) with challenges at network- and object-level. The creation of an integrated computer-aided design space that incorporates the ability to manage the ‘self-knowledge’ in the material represents a key target in developing the means to achieve this.
References

1. The Disappearing Computer initiative: http://www.disappearing-computer.net/
2. J. M. Kahn, R. H. Katz and K. S. J. Pister, “Mobile Networking for Smart Dust”, Proc. ACM/IEEE Intl. Conf. on Mobile Computing and Networking (MobiCom 99), Seattle, WA, August 17–19, 1999.
3. B. Warneke, M. Last, B. Liebowitz and K. S. J. Pister, “Smart Dust: Communicating with a Cubic-Millimeter Computer”, IEEE Computer, Vol. 34, pp. 44–51, 2001.
4. R. Ernst, “Co-design of Embedded Systems: Status and Trends”, IEEE Design & Test of Computers, Apr–Jun 1998, pp. 45–54.
5. The Extrovert Gadgets Project: http://www.extrovert-gadgets.net/public/about.asp
6. A. Kameas, S. Bellis, I. Mavrommati, K. Delaney, A. Pounds-Cornish and M. Colley, “An Architecture that Treats Everyday Objects as Communicating Tangible Components”, Proc. First IEEE International Conference on Pervasive Computing and Communications (PerCom’03), Dallas-Fort Worth, Texas, USA, March 23–26, 2003, pp. 115–124.
7. G. Kiczales, J. Lamping, A. Mendhekar, C. Maeda, C. Videira Lopes, J.-M. Loingtier and J. Irwin, “Aspect-Oriented Programming”, Proc. European Conf. on Object-Oriented Programming, June 1997.
8. C. C. Enz, A. El-Hoiydi, J.-D. Decotignie and V. Peiris, “WiseNET: An Ultra-low-Power Wireless Sensor Network Solution”, IEEE Computer, Vol. 37, No. 8, Aug 2004, pp. 62–70.
9. F. Balarin, M. Chiodo, P. Giusto, H. Hsieh, A. Jurecska, L. Lavagno, C. Passerone, A. Sangiovanni-Vincentelli, E. Sentovich, K. Suzuki and B. Tabbara, Hardware-Software Co-Design of Embedded Systems: The POLIS Approach, Kluwer Academic Press, June 1997.
10. T. Zhang, K. Chakrabarty and R. B. Fair, “Design of Reconfigurable Composite Microsystems Based on Hardware/Software Co-design Principles”, IEEE Trans. on Computer-Aided Design of Integrated Circuits & Systems, Vol. 21, No. 8, pp. 987–995, Aug. 2002.
11. G. Troster, “Potential of Chip-Package Co-design for High-Speed Digital Applications”, Proc. Design, Automation and Test in Europe Conference and Exhibition, Munich, Germany, September 1999, pp. 423–424.
12. K. Delaney, J. Barrett, J. Barton and R. Doyle, “Characterisation and Performance Prediction for Integral Capacitors in Low Temperature Co-Fired Ceramic Technology”, IEEE Transactions on Advanced Packaging, Vol. 22, No. 1, February 1999, pp. 68–77.
13. The RepRap Project: http://reprap.org/bin/view/Main/WebHome
14. Fab@Home: http://fabathome.org/wiki/index.php?title=Main_Page
Chapter 13
Co-Design for Context Awareness in Pervasive Systems

Simon Dobson
Abstract Pervasive systems rely less on sensing than on the ability to integrate information derived from a wide number of different sources. We argue for a co-design approach to pervasive systems in which sensor and actuator components, adaptive services and the system environment are treated equally, allowing designers to engineer at a number of different levels to achieve system goals. This approach offers more flexibility in leveraging the capabilities of the different system aspects. Keywords Co-design, Adaptation, Behaviour, Context, Situation, Stability, Optimisation
1
Motivation
There seem to have been two strands of pervasive computing research, with never the twain meeting. From the software side, researchers have focused on programming environments, knowledge-based systems, ontologies and location-based services, typically using existing infrastructures and PDA-type devices for experimentation. From the hardware side, the focus has been on sensor development, miniaturisation and wireless communications, with little consideration of wider issues. The disjunction between these strands is unfortunate. Commodity mobile hardware provides an inferior platform upon which to conduct complex pervasive systems research. Conversely, individual sensors and processing elements can generally only provide information on a single aspect of behaviour or context, which is insufficient for any but the simplest applications. Interoperability can also be difficult, not least because the two communities have radically different vocabularies and assumptions.
Systems Research Group, School of Computer Science and Informatics, UCD Dublin, Belfield, Dublin 4, Ireland
K. Delaney, Ambient Intelligence with Microsystems, © Springer 2008
A whole-systems view can combine software and hardware elements to leverage the strengths of each. One might, for example, use bespoke hardware elements for embedded microsensing whose results are then distributed to an AI-based reasoning engine: the benefits of high-level reasoning driven from reliable, inconspicuous sensors. This is an example of co-design, addressing a systems-level problem using hardware and software elements as most appropriate. Moreover, the more detail we can provide as to the context within which interaction occurs, the more high-level information we can use to condition and inform the decisions we make based on the sensed and inferred data we receive. However, one should not underestimate the difficulties involved in designing and implementing such systems. First and foremost, it requires engineers sufficiently familiar with both hardware and software. While most computer scientists (for example) have a basic understanding of electronics, few will be able to conduct board-level (let alone chip-level) design and fabrication; conversely, electronic engineers can almost always write embedded C software but cannot necessarily work with ontologies, uncertain reasoning and advanced distributed systems concepts. Rather than expect a new breed of individuals to emerge, we might perhaps better focus on the capabilities and issues of co-design and the ways in which teams of scientists and engineers may best be composed to address them. In this chapter we will argue that pervasive systems are inherently about the co-design of three interlinked elements: the sensors used to observe the world, the environment in which these observations occur, and the software used to manipulate and react to them. It is possible to engineer any of these elements, to a greater or lesser extent, allowing a rich field within which to develop adaptive pervasive applications.
This richness does, however, introduce two new concerns into the design process: the need to understand and model complex links between elements that are often considered separate; and the possibility of engineering elements that are often considered fixed.
2
Co-Design as the Essence of Pervasive Systems
Classical systems development has typically fallen into three categories (Fig. 13.1). Enterprise systems operate at organisational or global scales, and are concerned with issues such as scalability, tolerance of partial failures, security and interoperability. Services provide well-defined functionality to individuals or groups, and stress the importance of user interface design and cohesive functional behaviour. Embedded systems appear inside physical devices, and must typically offer tightly-bounded services with low power, high reliability and limited user interaction. Pervasive systems by contrast do not fall into these neat categories. A typical pervasive system is simultaneously an enterprise system, an application and a collection of embedded components. The embedded components will include sensing and processing elements able to observe the physical world and relay these observations. An example might be a collection of embedded location sensors such as RFID tags
Information. Enterprise: global access. Service: local information, for individuals or small groups. Embedded: very local information.
Tools. Enterprise: web sites, web services and distributed objects (Java, C++, CORBA, XML, SOAP). Service: web sites, desktop applications and intranet web sites (Java, C++). Embedded: embedded sensing and processing, with serial, USB and wireless access by other systems (C, assembler).
Major goals. Enterprise: interoperability of individual systems; integrity of data. Service: user interfaces; scrutability; cohesion. Embedded: power and performance; robustness.
Extension. Enterprise: open-extensible, with new components added on an on-going basis. Service: extended only in a controlled fashion, usually en bloc. Embedded: not generally extended or upgraded.
Management. Enterprise: no centralized management. Service: a single manager, often the user themselves. Embedded: expected to self-manage.
Failure. Enterprise: partial failure; inaccessible components tolerated; failures can threaten the existence of the organisation. Service: fails completely; re-boots acceptable; failures typically have only local impact. Embedded: fails completely; failures generally unacceptable.
Fig. 13.1 The designer’s concerns change according to the scale of system
and readers, allowing objects to be tracked within a space. A collection of available services can react to these observations and provide appropriate functions, for example counting objects in and out of rooms or sounding alarms if objects are moved inappropriately. These services may however have quite complex dependencies on each other and on the business rules describing the way the space is managed, providing an enterprise (or indeed global) view of the system’s behaviour. Pervasive systems cross other boundaries too. A typical step in classical systems analysis is to understand a business process and describe it as a workflow, which may then be supported – and perhaps improved – by the provision of IT services. A pervasive system extends this concept with the ability to take account of the physical
locations and actions that individuals carry out in the workflow, and to integrate these “everyday” actions directly into the service’s behaviour. In this sense the pervasive system “embodies” the workflow to a far greater degree than does a traditional workflow management system. Re-engineering the environment may affect both the users’ physical actions and, by implication, the IT services’ responses to those actions. We therefore have three aspects to concern us: the sensing and actuation capabilities that allow the pervasive system to observe and affect the real world; the services that the pervasive system provides and adapts according to the logic the system is to provide; and the environment within which the system operates. How does this come about? The goal of pervasive computing is to provide services that “weave themselves into the fabric of everyday life until they are indistinguishable from it” [1]. We cannot effectively separate sensing and actuation from services: we must sense enough information to trigger the services we require in the way we require them, and must provide sufficient actuation to allow them to function. Equally, we cannot separate sensing and actuation from the environment in which they occur, since sensors and actuators are constrained by physics in terms of operation and cost. Furthermore, we cannot separate the environment from the services provided within it, since what is meaningful and correct will depend critically on the ways in which users can access these services. If this sounds too abstract, consider an in-car navigation system. We have sensors for location (perhaps using GPS), and a display to provide actuation of navigation. We have a navigation service that can display a route from A to B, and a driver operating the vehicle. In order to provide the navigation service, we need location information of sufficient accuracy (sensing and actuation affecting services).
The information must be given to the driver without interfering with her primary task of steering the car (actuation affecting and constrained by environment). The driver may need more time to react to instructions at night, or when driving at speed, or in bad weather (services affected by environment). If we miss any of these linkages, the system design will be flawed. If we must account for the impact of each aspect on the others, we can also change each aspect to some degree as well. Interconnection works both ways: we might, for example, add sensors to improve our ability to track objects; or we might instead change the placement, composition or colour of objects to make them more trackable with the existing sensors. This gives us extra degrees of freedom that allow us to compensate for deficiencies in one element by investing in another, or conversely to leverage the strength of one element to simplify another. This is the essence of pervasive computing, and is also the essence of co-design: treating the system as a whole that can be changed in different ways to achieve the intended goal.
3
Elements of a Pervasive System
This of course raises the question of how we should go about designing or modifying a pervasive system. The key is to understand the strengths and weaknesses of the three system aspects.
3.1
Observation, precision and accuracy
Individual sensors are not perfect information sources. A typical location sensor, such as RFID [2], ultra-wideband or Wi-Fi triangulation, will only specify location down to a particular granularity: the precision of the sensor. Finer-grained movement is effectively invisible. Furthermore, the accuracy of sensors, in the sense of how their readings differ from the “ground truth” of the real world, can also vary widely between sensor types, between instances of a particular sensor, and even over time for the same sensor (as illustrated in Fig. 13.2). However, in many cases the weaknesses of a single piece of sensor data may be compensated for using additional sensor modalities. A person who is both located in an office (obtained from a location sensor) and engaged in editing a document (from a computer activity sensor) can be supported much more effectively than a person for whom only one of these facts is available. This suggests that, in many cases at least, pervasive systems rely on having a sufficiently sensorised environment from which to derive enough cues about the world being observed.
Fig. 13.2 (a) Precision: the system reports location only within a specific area, not to a point; (b) Accuracy: the system reports position as being some distance from the “true” position
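The distinction in Fig. 13.2 can be illustrated numerically with a simulated one-dimensional location sensor, where a quantisation grid models limited precision and a systematic bias models limited accuracy (all values are invented for illustration):

```python
# Toy 1-D location sensor: precision is modelled by quantising to a grid,
# accuracy by a systematic bias. Grid size and bias are invented values.

def sense(true_position, grid=0.5, bias=0.0):
    """Report position quantised to `grid` (precision), offset by `bias` (accuracy)."""
    reading = true_position + bias
    return round(reading / grid) * grid

# Precision limit: movement finer than the grid is invisible.
print(sense(3.10))   # 3.0
print(sense(3.20))   # 3.0 -- a 0.1 m move the sensor cannot see

# Accuracy limit: a systematic bias shifts every report away from ground truth.
print(sense(3.10, bias=0.9))   # 4.0
```

Note that the two failure modes are independent: a sensor can be precise but inaccurate (consistent but offset), or accurate but imprecise (unbiased but coarse).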
We might also note from this example that the second fact, the user’s activities within a computer, was also sensed – although not using anything a hardware engineer would consider to be a sensor. Such digital or “virtual” sensors share many characteristics with their physical counterparts: they have limited precision and accuracy (although typically better than physical sensors), and provide only a narrow window onto activity. It transpires that we can actually derive more information from the virtual sensor than at first appears, since in order to edit a document a user must typically be located at the computer at which editing is taking place. We can therefore infer location as a corollary of action: if we know where the computer is, we know where the person is. However, while a user is typically at the computer when editing the document, they are not necessarily so: the user may be logged-in remotely, or may not in fact be the person they purport to be. Both physical and virtual sensors therefore share a number of attributes in common:
● They observe a single aspect of the real world and/or the activities in it
● The information they deliver is bounded by constraints on precision and accuracy, which may not be constant or easily determined
● It is sometimes possible to derive additional data by inference from data that has been directly sensed
● Such inference is also bounded by precision and accuracy
We can conclude that each individual sensor provides only low-grade data about the world, which must be regarded as inherently imprecise and untrustworthy. The most obvious approach to improving sensing is to improve sensors, developing sensors that are more precise and more accurate. In speaking of hardware sensors, we may add to these requirements a desire to be smaller and to draw less power. There have been significant improvements in sensors over the past years, leading to (amongst other developments) micro- and nano-sensors capable of being integrated into silicon wafers. (For an overview of these developments accessible to software engineers see [3].) It is almost inevitable that smaller sensors both use less power and provide less precise measurements, but they can also be extremely robust and reliable. Larger sensors can offer better precision at the cost of a more intrusive form factor and greater power consumption. A basic issue for pervasive systems design is the resulting trade-off between miniaturisation and precision. Virtual sensors have been made radically simpler by the movement towards systems integration through the World-Wide Web. Most common services are now either offered purely in web form, as web sites or (increasingly) as web services [4]. Services such as Google Calendar (http://www.google.com/calendar) offer on-line calendar storage and management, which can interoperate with desktop products such as Microsoft Exchange and Apple iCal, as well as with PDA and cellphone diaries, using a standardised exchange format [5]. This means that one may construct a “calendar sensor” that can extract appointments from a wide range of diaries, either directly or by “scraping” information off web pages. Having a reasonably-sized sensor population makes it easier to provide appropriate capabilities for a range of applications. While it might be attractive to always use the most precise and accurate sensor available, in many applications the extra precision will be unnecessary, and the costs unwarranted.
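A minimal sketch of such a "calendar sensor" might look as follows. The parser below handles only a toy subset of the iCalendar format and the sample event is invented; a real implementation would use a full iCalendar library:

```python
# Toy "virtual sensor" that extracts appointments from iCalendar-style text.
# Handles only flat KEY:VALUE lines inside VEVENT blocks, for illustration.

def calendar_sensor(ics_text):
    """Return (start, summary, location) tuples from minimal VEVENT blocks."""
    events, current = [], {}
    for line in ics_text.splitlines():
        line = line.strip()
        if line == "BEGIN:VEVENT":
            current = {}
        elif line == "END:VEVENT":
            events.append((current.get("DTSTART"),
                           current.get("SUMMARY"),
                           current.get("LOCATION")))
        elif ":" in line:
            key, value = line.split(":", 1)
            current[key] = value
    return events

SAMPLE = """BEGIN:VEVENT
DTSTART:20080512T140000
SUMMARY:Project review
LOCATION:Room 2.14
END:VEVENT"""

# The LOCATION field doubles as a coarse, low-confidence location sensor.
for start, summary, location in calendar_sensor(SAMPLE):
    print(start, summary, location)
```

Like any virtual sensor, the output is evidence rather than fact: the diary may be stale, and the user may never attend the meeting.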
3.2
Sensor fusion and uncertain reasoning
The limits of sensors mean that the model a system builds of its environment and users cannot be improved purely by improving the qualities of the individual components. No matter how effective they may be designed to be, components can fail, and can encounter pathological cases that interact badly with their design. These issues can be addressed in two ways. Firstly, the sensor population available to the pervasive system needs to be sufficiently rich to cope with the imprecision and failures of individual sensors. Referring back to Fig. 13.2, a system might ensure that two or more sensors overlap in “critical” areas of the environment so that their imprecise results can be combined to give a more precise, “consensus” estimate of the user’s position. Secondly, the sensor population can be made diverse so as not to rely on a single sensor modality. This guards against the risk that circumstances combine to defeat the single mode. An example might be to make use of RFID in order to track people entering and leaving rooms: a person without an RFID tag cannot be tracked. Adding other sensors can alleviate this problem: we might, for example, install triggers to track the opening of doors, so that if a door opens without a corresponding sighting of an RFID tag we may conclude that the door was opened by an un-tracked, un-tagged “ghost” (Fig. 13.3). The diversity of sensors can be provided using physical and virtual sensors: a calendar-based location sensor can be used to provide a basic estimate of a user’s location in the absence of more concrete observations. These techniques do, however, make severe demands on the reasoning infrastructure of the system. The basic feature is that all sensor data is uncertain and must be treated accordingly: as evidence of fact rather than as facts themselves [6, 7].
In principle one may combine the known, experimentally-obtained figures for the precision and accuracy of each particular sensor within an uncertain reasoning framework: Bayesian networks have proven popular, with Dempster-Shafer evidence theory and fuzzy logic being alternatives. (See [8], [9] and [10] respectively for accessible introductions to these approaches, with [11] for a more complete treatment of this subject.) In practice it can prove difficult or impossible to characterize certain sensors a priori, suggesting that the necessary values be learned by reference to a known “ground truth” and updated periodically to ensure correctness. This is especially true of virtual sensors, since it is hard to decide ahead of time how accurately an individual maintains their on-line diary, for example. Use of a well-founded reasoning framework, rather than a system of arbitrary weights, is more complicated but provides more consistent confidence intervals.

Fig. 13.3 Diverse sensors can provide improved tolerance of the unexpected: (a) an RFID system will detect tagged individuals without showing any signs of “ghosts”; (b) adding door triggers shows the presence of “ghosts”
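The evidence-combination step can be sketched as naive-Bayes fusion of two imperfect binary sensors for the hypothesis "the user is in the office"; the prior and the sensor reliability figures below are assumed for illustration only:

```python
# Naive-Bayes fusion of independent binary sensor observations.
# All probabilities are invented, illustrative figures.

def fuse(prior, observations):
    """observations: list of (P(reading | hypothesis true), P(reading | false))."""
    p_true, p_false = prior, 1 - prior
    for p_given_t, p_given_f in observations:
        p_true *= p_given_t
        p_false *= p_given_f
    return p_true / (p_true + p_false)   # normalised posterior

# Assumed reliabilities: an RFID badge sighting at the office door
# (90% hit rate, 5% false positives) and document-editing activity on
# the office PC (80% hit rate, 10% false positives).
posterior = fuse(prior=0.5, observations=[(0.90, 0.05), (0.80, 0.10)])
print(f"P(user in office | both sensors) = {posterior:.3f}")
```

Neither sensor alone is conclusive, but agreement between the two independent modalities pushes the posterior well above either individual reliability figure; this is the "consensus" effect described above.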
The result is a view of pervasive computing that treats all data – physically sensed, virtually sensed or inferred – as provisional with an associated probability or confidence interval. Adding new sensors adds new (imperfect) data sources, which will hopefully improve the confidences by providing more supporting evidence of the state of the world. Equally importantly, diversity of sensing need not lead to an explosion in the complexity of applications, since an application can be insulated from direct contact with the sensors. This is the approach we have taken with the Construct platform2 [12].
3.3
Engineering the environment
It is easy to forget that the environment of a pervasive system is also subject to engineering. This is perfectly understandable, since we tend to take the environment of a computer system as a given: indeed, we typically expend great effort to avoid environmental dependency! Pervasive systems have a closer relationship with their environment, however, and in some cases this relationship can be advantageously manipulated. A straightforward example comes from smart homes. Providing sensors and actuators for assisted living can yield enormous benefits for elderly or infirm residents, allowing them to be provided with sufficient support to facilitate independent living. It is widely accepted that the costs of providing a sensor and actuator infrastructure are negligible as part of a new build, but prohibitive when retrofitted into older structures. This simple engineering of the environment ahead of time facilitates a wide range of applications that would otherwise be economically infeasible. A more subtle example comes from augmented materials, where the sensor, actuator and processor elements are embedded into a physical substrate. In this case, the physical properties of the substrate – its stiffness, conductivity and so on – can be engineered to support the pervasive application being developed. A stiff substrate might be used to increase the force needed to bend a rod of material, for example, providing a more amenable environment for embedded strain gauges. The physics of radio propagation can be used to determine the proximity of two materials, tailoring the frequencies and powers used to provide appropriate detection properties. Having taken such an environmental approach, sensor readings which are physically impossible can be immediately classified as noise and discarded, whereas a more complex reasoning process might be required in a less environmentally-constrained situation.
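The idea of discarding physically-impossible readings can be sketched as a simple plausibility filter; the indoor speed limit and the readings below are invented for illustration:

```python
# Plausibility filter for a 1-D location stream: readings implying movement
# faster than the environment permits are discarded as noise before any
# heavier reasoning runs. Speed limit and readings are invented values.

MAX_SPEED_M_S = 2.0   # assumed: a person indoors cannot exceed this

def plausible(prev, curr):
    """Accept a (time, position) reading only if the implied speed is possible."""
    (t0, x0), (t1, x1) = prev, curr
    if t1 <= t0:
        return False
    return abs(x1 - x0) / (t1 - t0) <= MAX_SPEED_M_S

readings = [(0.0, 1.0), (1.0, 2.5), (2.0, 40.0), (3.0, 3.8)]
accepted = [readings[0]]
for r in readings[1:]:
    if plausible(accepted[-1], r):
        accepted.append(r)

print(accepted)   # the 40.0 m jump is discarded as physically impossible
```

Because the constraint comes from the engineered environment rather than from the sensor itself, the filter stays trivial: no probabilistic machinery is needed to reject the outlier.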
There are of course limits to the degree of environmental engineering that particular systems can expect. It might be reasonable to move location sensors around a building, but unacceptable to demolish or re-finish walls in order to change the radio propagation characteristics. A pervasive system is only one part of a typical environment, and cannot expect total control: even limited changes can count for a lot, however, and this should be borne in mind as part of the co-design process.

2 http://www.construct-infrastructure.org
S. Dobson

4

Conclusions
Pervasive computing systems combine sensing, actuation and reasoning to provide adaptive services. While the research community is split between emphasising software or hardware solutions, we have argued that a more useful distinction is to focus on pervasive systems development as a problem in co-design, in which sensor components, service descriptions and environmental, task and user models co-exist as equal partners. In pursuit of a given effect or characteristic it is often the case that several different approaches are possible, with different consequences for the overall properties of the system. Systems engineering may involve new hardware, new algorithms, or a change of environment: all of these avenues will typically be open to a designer in ways that are profoundly different to traditional systems.

This chapter has in many ways highlighted more questions than it has answered. We have only a limited understanding of several key issues. While adding new sensors can provide improved behaviour, there is as yet no principled way of deciding when an environment is “sufficiently sensorised” for a given application. Experiments reported in the research literature have often taken the reverse approach: given a particular environment and sensor population, what services are possible? This is not a valid way to design systems intended for commercial deployment. Equally importantly, the trade-offs involved in choosing between component, service or environmental changes are imperfectly understood and seldom (if ever) modelled effectively. We still do not have clear engineering or analysis methodologies for deciding whether a given adaptation is “correct” in a given set of circumstances – still less for proving that a system will exhibit that adaptation at the appropriate time. Models of whole-system adaptive behaviour are needed in order to provide the guarantees necessary to support the widespread deployment of pervasive systems.
Finally, pervasive systems programming is still in its infancy. Despite the many individual systems developed, we lack a coherent model that would allow developers to work at the level of the system as a whole, as opposed to programming the components within it. Sensor networks, to use one popular example, are typically programmed using methods that target individual devices rather than complete networks, even though the problem being addressed will typically be stated in terms of the network and independently of any specific devices or characteristics. This mismatch complicates development and encourages inefficient programming practices.
13 Co-Design for Context Awareness in Pervasive Systems

References

1. M. Weiser. The computer for the 21st century. Scientific American. September 1991.
2. J. Smith, K. Fishkin, B. Jiang, A. Mamishev, M. Philipose, A. Rea, S. Roy and K. Sundara-Rajan. RFID-based techniques for human-activity detection. Communications of the ACM 48(9), pp. 39–44. September 2005.
3. R. Frank. Understanding Smart Sensors. Artech House. 2000.
4. E. Newcomer. Understanding Web Services: XML, SOAP, WSDL and UDDI. Addison-Wesley. 2002.
5. F. Dawson and D. Stenerson. Internet calendaring and scheduling core object specification (iCalendar). RFC 2445. 1998.
6. S. Dobson and P. Nixon. Whole-system programming of adaptive ambient intelligence. Proceedings of HCI International. Beijing, CN. Lecture Notes in Computer Science. Springer-Verlag. 2007.
7. S. Dobson. Leveraging the subtleties of location. In Proceedings of Smart Objects and Ambient Intelligence, pages 175–179. G. Bailly, J. Crowley and G. Privat (ed). Grenoble, FR. 2005.
8. E. Charniak. Bayesian networks without tears. AI Magazine 12(4), pp. 50–63. 1991.
9. G. Shafer. Perspectives on the theory and practice of belief functions. International Journal of Approximate Reasoning 3, pp. 1–40. 1990.
10. L. Zadeh. Fuzzy logic. IEEE Computer 21(4), pp. 83–93. April 1988.
11. J. Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press. 2000.
12. L. Coyle, S. Neely, G. Stevenson, M. Sullivan, S. Dobson and P. Nixon. Sensor fusion-based middleware for smart homes. International Journal of Assistive Robotics and Mechatronics 8(2), pp. 53–60. 2007.
Part VIII
User-Centered Systems
From Concept to Reality in Practical Steps
1.1
Summary
The topics addressed in the book to this point are largely technology-centric. Whether the chapters are investigating the current status and progress of particular research domains, discussing whole-systems challenges, or analyzing methodologies for collaborative design and co-innovation, the focus has largely been upon technology platforms that can combine to create a systems infrastructure. However, to be realised, an Ambient Intelligence (AmI) system must make the user the priority, and any attempt to dilute this is highly unlikely to be effective. Thus, it is difficult, if not impossible, to sustain research into AmI without at some point addressing in detail how a collaborative research process can be implemented to understand users’ current and future requirements. As one might imagine, this is a complex process, and one that must acknowledge all of the many stakeholders that will be involved in creating Smart Systems for AmI, including in particular industry. This part discusses this from two perspectives. First, chapter 14 describes the area of User-centered Design, advocating a process that places the user at the heart of future Smart Systems development. Second, chapter 15 investigates the long-standing difficulties that exist in collaboration between industry and academia. The analysis, framed by the experiences of industry and academic researchers in Ireland, describes evolving enterprise-focused initiatives that are prototype-driven and designed to encourage industry-led applied research. The key elements, which should be supported by the academic partners, are a flexible approach, ongoing communication about each of the industry-academic objectives, and a planning/road-mapping process that is clear to industry partners.
1.2
Relevance to Microsystems
Microsystems are typically expensive to create, fabricate and validate, and consistently involve relatively slow and demanding development cycles. For MEMS devices where the market need is established, there can be a ready justification for the
process-driven technology innovations that may improve performance. This can and should be underpinned by analyses that link these process innovation schemes with a consistent value-statement and, by extension, a viable, continuing return on investment. Companies that employ user-centered design measurably increase the likelihood of such an outcome. More importantly, however, a process that highlights the specific problem(s) being solved for the user will also offer the opportunity to create new markets and, most likely, given the nature of AmI, new sensor devices (including MEMS) and subsystems. These opportunities are highlighted in general terms within AmI roadmaps; however, for individual companies working in the Smart Systems domain, the advantages of user-centric and prototype-led programmes in defining new opportunities are tangible.
1.3
Recommended References
There are a number of relevant publications in the growing area of User-Centered Design, a selection of which are provided below. A particularly interesting and potentially useful source of reference material, relevant to both chapters and relating to Ambient Intelligence and its composite technologies, can be found in the work of the European Commission’s Advisory Group on Information Society Technologies (ISTAG). While the challenges inherent in creating effective collaborative programmes between industry and academia are certainly international issues, the focus of Chapter 15 was within an Irish context. Thus, for those interested in further reading related to this chapter, a general reference on the management of technology transfer is provided, as well as two websites for Irish agencies responsible for investigative science in Biotechnology and ICT (Science Foundation Ireland), and for enterprise development and applied R&D in industrial technologies, ICT and in life-sciences and food (Enterprise Ireland).

1. J. Green, Democratizing the future: Towards a new era of creativity and growth, Philips Design (2007)
2. N. Makelberge, Flow, Interaction Design and Contemporary Boredom. IT University of Gothenburg, Gothenburg, Sweden. (2004)
3. B. Moggridge, Designing Interactions, MIT Press (2006)
4. IST Advisory Group: http://cordis.europa.eu/ist/istag.htm
5. J. Cunningham, B. Harney, Strategic Management of Technology Transfer: The New Challenge on Campus, Oak Tree Press, 2006
6. Science Foundation Ireland: www.sfi.ie
7. Enterprise Ireland: www.enterprise-ireland.com/
Chapter 14
User-Centred Design and Development of Future Smart Systems: Opportunities and Challenges

Justin Knecht
Abstract A user-centred approach to the design and development of future smart systems won’t guarantee their success, but if the user isn’t taken carefully into consideration during development, the probability of failure will be much higher. An ethnographic approach to design research will provide insights into what people value in order to help us create technological solutions that uniquely meet those needs. The potential exists to simplify our interactions with technology and make them more transparent. AmI allows us to create systems that facilitate actual human contact versus virtual contact; create systems that clarify instead of confuse; and take into consideration universal and sustainable design principles. However, we are confronted with the design challenge of creating new conventions of interface for users to understand and adapt to. The technology itself needs to distinguish input from noise, give feedback to a user when a screen may not be present and understand the context a person is in at any given moment. Ambient intelligence promises to make our lives better. But what does that actually mean? A state of flow is achieved when we are confronted with a task that provides enough challenge for our current skill level. Provide too much challenge for the level of skill and you experience anxiety. Provide a challenge well below the level of skill of an individual and you induce boredom. As technology is employed more and more to reduce our daily challenges, are we creating products and services that lead to a better life? Should we be purposely approaching design for self-actualisation?

Keywords User-centred design, design research, design process, innovation, flow, HCI, interaction design, ambient intelligence
1
Introduction
In the past we approached technology. Increasingly in the future, technology will be approaching us. What responsibility will we have as designers?

Centre for Design Innovation, Sligo, Ireland
K. Delaney, Ambient Intelligence with Microsystems, © Springer 2008
Consider the following scenario taken from the Wikipedia entry [16] on ambient intelligence: Ellen returns home after a long day’s work. At the front door an intelligent surveillance camera recognises her, the door alarm is switched off, and the door unlocks and opens. When she enters the hall the house map indicates that her husband Peter is at an art fair in Paris, and that her daughter Charlotte is in the children’s playroom, where she is playing with an interactive screen. The remote children surveillance service is notified that she is at home, and subsequently the on-line connection is switched off. When she enters the kitchen the family memo frame lights up to indicate that there are new messages. The shopping list that has been composed needs confirmation before it is sent to the supermarket for delivery. There is also a message notifying that the home information system has found new information on the semantic Web about economic holiday cottages with sea sight in Spain. She briefly connects to the playroom to say hello to Charlotte, and her video picture automatically appears on the flat screen that is currently used by Charlotte. Next, she connects to Peter at the art fair in Paris. He shows her through his contact lens camera some of the sculptures he intends to buy, and she confirms his choice. In the mean time she selects one of the displayed menus that indicate what can be prepared with the food that is currently available from the pantry and the refrigerator. Next, she switches to the video on demand channel to watch the latest news program. Through the follow me she switches over to the flat screen in the bedroom where she is going to have her personalized workout session. Later that evening, after Peter has returned home, they are chatting with a friend in the living room with their personalized ambient lighting switched on. 
They watch the virtual presenter that informs them about the programs and the information that have been recorded by the home storage server earlier that day.
The promise (and reality) of embedded intelligence in everyday objects provides opportunities in the form of real time, relevant and customised information at our fingertips - all the time and everywhere. Will ubiquitous computing be a utopia of seamless integration and simplicity, or a dystopia of information overload, erosion of personal privacy and truly ‘fatal’ errors? What effect will it have on our social relationships? Will we feel more fulfilled, or emptier than ever? Will this bring us closer together or drive us farther apart? A user-centred approach to the design and development of future smart systems won’t guarantee their success, but if the user isn’t taken carefully into consideration during development, the probability of failure will be much higher.
2
Why User-Centred Design?

I consider myself a futurist. I consider myself a humanist. For me, technology is an incredible enabler, but it means nothing if it doesn’t consider the human being, the human touch. – Yves Béhar, Designer [11]
The success of any new technology is rarely judged on its own innate abilities. The user determines true innovation. The ability to add processing power and intelligence to everyday objects doesn’t mean we necessarily should. If we do, people need to feature prominently in the development process. Take this typical innovation model (see Fig. 14.1). It balances the needs of business, technology and the user. Ambient intelligence is an innovative technology on
its own, but not an innovation per se until it is translated into a usable product or service that meets a consumer need. Consumer demand for usable products and services drives value to business. Good design is ultimately about creating value, and it is the consumer who decides what is of value. Focus too much on technology at the expense of the other two and you get a techno-centric invention, or a gadget that nobody needs.

Fig. 14.1 Innovation Model

In Japan, electronics maker BDL expected the Snuggly Ifbot to be a hit with the elderly in nursing homes looking for companionship and conversation. The Ifbot was equipped with the vocabulary of a five-year-old and could sing songs. The $5000 robot didn’t catch on and was perceived as over-complicated and impractical by its users. What the Japanese elderly wanted was “everyday products adapted to their needs – easy to read for those with poor eyesight, big buttons for people with trembling hands and clear audio for the hard of hearing.” [3]

Josephine Green [4] has coined the term “context economy.” “As technology merges into our walls, floors and clothes, then we no longer ‘consume’ technology, but live with it side-by-side as it supports and facilitates our daily living, an invisible helper at the ready. Through this more intimate co-existence our identity becomes less about needs (‘what do I want?’) and more about activity and experience (‘how can I best take advantage of what I want to do in the way I want to do it?’)” Context is king. “What we value, rather than what we consume, becomes the issue.”
3
How then do we Determine what is of Value to the User?

To understand invisibility the humanities and social sciences are especially valuable, because they specialize in exposing the otherwise invisible. For instance, ethnography can teach us something of the importance of the details of context and setting and cultural background. – Mark Weiser, Xerox PARC [15]
Design research is based on ethnography and allows us to study people within the context of their daily lives. From this observation we can gather clues or insights from people’s behaviour. Ambient intelligence looks to traverse different interconnected social settings (the home, workplace, school, hospital, social care facilities, cultural institutions) [8] and within each of these environments, the needs and roles of users change dramatically. Ambient technology will not only need to know what a user values at a given time, but must be able to detect the context of the user as well.

The first step in doing design research is to identify your users. This might appear straightforward at first, as you might assume this is simply the end user of your product or service. However, just like the multiple networks that exist to support smart systems, no product or service has just one user. Not every user is of equal importance, but ignoring a key group may prevent the adoption of your new product or service.

Let us look at the original scenario of Ellen returning home from work and coming across her intelligent surveillance system. Who sold her the system in the first place? Who recommended that she might look into such a system? Did she purchase the system, or did her husband, landlord or builder acquire it? Let’s say it malfunctions and she cannot get into her house: who is going to support her? The system was connected to her alarm system and door locks. What people were involved in identifying and creating compatible products and protocols? Who installed the system? Each of these people is a link in a long chain, with various needs at different stages of sales, installation, use and support, and a potential source of insight into a product.

While working at Crayola, the children’s crayon company, we frequently referred to a four-legged consumer.
Our end user (the child) rarely was the one buying the product, but had a lot of say in the decision and evaluation of the end product. The caretaker or parent was usually the purchaser, and had particular needs of her own. Ignoring either would have been a mistake. You can’t do user-centred design until you identify your key users. There are many techniques for applying design research and they can be conveniently grouped into three categories of Look, Involve and Try.
3.1
Look, Involve and Try
3.1.1
Look
‘Look’ comes the closest to traditional ethnography. You are observing people in their natural habitat. Designing a kitchen device? Better get into people’s kitchens. Working on a medical device? Better get into a hospital. Direct observation can be difficult and uncomfortable for the observer and the observed. What if you were studying subjects taking showers and baths, or using the toilet? Even in a less personal situation, you can always have people do the observation themselves and take pictures as part of a photo diary to identify issues and opportunities.

Why look? Because people will do things they will never tell you about. When the research team at Miele was observing parents of asthmatics and allergy sufferers vacuuming, they noticed that parents were spending an inordinate amount of time cleaning mattresses. This went beyond simple care for an individual to bordering on excessive cleaning. The insight was that the vacuum had no way of telling the user when the mattress was clean. Bagless vacuums show you how much dirt you have collected, which may give you some satisfaction for what you are removing, but it doesn’t tell you when to stop. The engineers built a simple technology into the head of the vacuum that measures the particulate stream and drives a simple red, amber and green display. Red denotes the surface is still dirty, amber denotes it is getting cleaner (and gives some nice feedback to the user that progress is being made) and green means the surface is clean. The vacuum lets you know when to stop, but doesn’t let you know when to start. Could smart fabrics on rugs and mattresses let you know when they need to be cleaned, or when harmful bacteria are present?
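The display logic described above amounts to thresholding the measured particulate stream into three states. The sketch below uses threshold values that are purely illustrative, not Miele’s actual figures.

```python
# Sketch of the red/amber/green mattress display: threshold the
# particulate stream into a simple status. The thresholds are
# invented for illustration.

def surface_status(particles_per_litre):
    if particles_per_litre > 500:
        return "red"    # surface still dirty: keep vacuuming
    if particles_per_litre > 50:
        return "amber"  # getting cleaner: visible progress
    return "green"      # clean: time to stop

print([surface_status(n) for n in (1200, 300, 20)])  # ['red', 'amber', 'green']
```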
3.1.2
Involve
‘Involve’ lends itself to co-design or unfinished design. Co-creation is an excellent way to integrate user needs by actually involving people in the design process. One rule of thumb when involving users is to look for “extreme users”. Go back to the original scenario with Ellen and the automated menu that suggested recipes based on the available food in the kitchen. It might seem counter-intuitive to involve people who rarely cook at home, or amateur gourmets, as your primary research users. Why not the mass in the middle, who get ordinary food on the table on a regular basis? Research has shown that more innovative insights are to be found by involving extreme users. From their stronger stance of loving or hating an activity, extreme users will provide deeper insights into key issues, needs and values more quickly.
3.1.3
Try
There are two important aspects to ‘try’. First, have you taken a walk in your users’ shoes? When Toyota was designing what would become the Lexus, a small number of designers moved to California for nine months to live the West Coast lifestyle. Living in the households of their target users, they shopped in the same stores, golfed together and drove their children to school in Jaguar, BMW and Mercedes automobiles. The insights from this first-hand experience of the upper-middle-class professional American made it into the final design of the vehicle. The Lexus went on to become the best-selling premium imported luxury car for 15 of its 17 years.

The other key activity within trying is developing prototypes. One of the fundamental benefits of a design process is making things visual. A physical prototype moves an idea from concept to reality and allows people to interact and react to an actual object. The Information Society Technologies Advisory Group (ISTAG) believes: A successful realization of AmI, therefore, is likely to require new forms of experience design and prototyping involving social, cultural and psychological research. [8]
Their report goes on to propose that Experience and Application Research Centres be created:

● Facilities are needed to support fast prototyping of novel interaction concepts and resemble natural environments of use.
● Feasibility and Usability centres would test components in real user environments on a small scale and investigate their usability.
● Validation and Demonstration centres would take promising prototypes and fully integrate them into large-scale real-life situations and validate them through extensive user tests.

Whether specific centres come into existence or not, developing prototypes throughout the development process is key to the success and acceptance of AmI.
3.2
The changing concept of the interface and interaction design
PCs have their own language and their own logic which sometimes defeats me. –Alex Murray, 74, Retired engineer [13]
Human interaction with computers and microprocessors has largely been dominated by screen-enabled interfaces. The interface has been dictated by the size of the display and a limited arrangement of input devices, like keyboards, mice and buttons. As we liberate the interface from the common formats of the screen
and ‘WIMP’ (window, icon, menu and pointer) interfaces, we have the opportunity to improve usability. We are confronted with the design challenge of creating new conventions of interface for users to understand and adapt to. The technology needs to distinguish input from noise, give feedback to a user when a screen may not be present and understand the context a person is in at any given moment.

It is pretty clear when you are trying to address your handset, your laptop or television. You press a key. You use the mouse. You pick up the remote. You read the display of your phone, observe progress bars and status screens on your computer and you watch the picture change on your television. How do you address smart wallpaper? How do you know if it is listening (and you don’t want it to)? How does it know you are addressing it and not wiping a spider from its surface or outstretching a hand to balance yourself?

Victoria Bellotti et al. [1] describe this as a shift from Norman’s [12] “interaction as execution and evaluation”, where the user constantly drives the interaction and evaluation, to “interaction as communication”, which relies more on human-to-human interaction and conversational cues. In an interesting turn of events, the system itself is becoming a user. Bellotti et al. suggest five questions that address the needs of the user and, arguably, the system itself.
3.2.1
‘Address’
What mechanism does the user employ to address the system? This is fairly straightforward if you are sitting at a computer, but not clear when a keyboard or button no longer exists. If using a voice-activated system, how do you intentionally not address the system? I wonder if sports fans that had installed a “clapper device” to operate their household lights ever experienced a strobe effect during a particularly good match on TV. This might be humorous, but it is easy to think up scenarios where accidentally addressing a system could have dangerous consequences.
3.2.2
‘Attention’
How does the system provide cues about attention? Human beings look into each other’s eyes when they are communicating and typical computer interactions provide feedback on screen. Tagged devices may trigger a command simply through proximity, but how will the user be aware that a transaction is taking place? Ambient displays in themselves present interesting opportunities by having the ability to display information in the periphery. People typically focus on the task at hand, while cues from the outside world compete for their attention. Ambient displays can provide subtle cues to users, like the Ambient Devices Orb, which uses light and colour to represent changes in the weather or the stock market. Extensive research at MIT [7] is being done on ambient environments and tangible user interfaces where the world becomes the interface. Work on the ambientROOM has
researched appropriate ambient displays and stimulus that allow a user to naturally move between foreground and background tasks.
3.2.3
‘Action’
Once you know how to address the system and when it is attending to you, how do you make it do something? How do you specify an action and its extent? One solution is to impose certain design constraints and allow only one action per object. This may be natural for objects that already have a specific use and perhaps a single sensor is added to them to augment their function. Applying the metaphor of the Miele vacuum system, perhaps a sensor on a knife informs the user that the food they are preparing contains harmful bacteria. Or, in the original scenario, Ellen has a tagged “key” with the single function of opening a keyless door. “Uni-dimensional” input is easier to design for and will reduce unwanted errors but more complex interactions between multiple objects, multiple actions and targets will need to be prototyped and designed very carefully to work with novice users.
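One way to picture the one-action-per-object constraint is as a fixed mapping from tagged objects to single actions, so that detecting the object is the entire command. The object tags and action names below are hypothetical.

```python
# Sketch of "one action per object": each tag triggers exactly one
# function, so there is no command vocabulary for the user to learn.
# Tags and actions are invented for illustration.

ACTIONS = {
    "front-door-key": "unlock_door",
    "kitchen-knife": "check_for_bacteria",
}

def on_object_detected(tag):
    # Uni-dimensional input: detection alone selects the single action;
    # anything unrecognised is treated as noise and ignored.
    return ACTIONS.get(tag, "ignore")

print(on_object_detected("front-door-key"))  # unlock_door
print(on_object_detected("passing-spider"))  # ignore
```

The design choice here is that ambiguity is resolved by the environment rather than the interface: an unknown tag simply does nothing, which is one way of reducing the unwanted errors mentioned above.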
3.2.4
‘Alignment’
How do you know the system is doing (or has done) the right thing? How do you receive timely and appropriate feedback? This feedback could easily become overwhelming to the user, with multiple objects performing multiple tasks all providing audio or visual feedback. Like uni-dimensional input, you could restrict people to the “space of action”, but that seems to go against the vision of freeing ourselves from having to be in a certain place. It will take a good deal of research to understand what the appropriate level and format of feedback is for a user on a case-by-case basis.
3.2.5
‘Accidents’
Finally, how do you avoid an accident? What happens when a mistake is made? Everyone is familiar with the Undo function on a computer, but what does Undo look like in an embedded system? What if you started something by mistake and you wanted to stop it? Should the system be programmed to compensate for human error? You could imagine a system for dispensing medication that could immediately recognise whether the correct dose was being administered. One radical suggestion is to remove the Undo button completely and “try to create a feeling of non-reversibility.” [10] If users understand that a command, once issued, cannot be reversed they should approach tasks with more consideration, safety and thought. A similar approach is the basis for “naked streets”, as pioneered by the late Hans Monderman. By removing signage from road junctions and streets, drivers
exhibit greater care and safety. They also need to negotiate right of way through eye contact. In a London pilot project on the busy Kensington High Street, pedestrian fatalities dropped by 43% over the two-year period following the removal of signage. [6] Monderman was famous for saying, “If you treat drivers like idiots, they act as idiots. Never treat anyone in the public realm as an idiot, always assume they have intelligence.” [14]

Like the user-centred process addressed earlier, this framework [1] provides the beginnings of a systematic approach to the design of interactive systems, one that avoids a number of potential pitfalls and hazards.
4

Does Transparency Equal Acceptance?

A good tool is an invisible tool. By invisible, I mean that the tool does not intrude on your consciousness; you focus on the task, not the tool. Eyeglasses are a good tool – you look at the world, not the eyeglasses. – Mark Weiser, Xerox PARC [15]
Technology is truly accepted once it becomes transparent. The ISTAG recommendations [9] and users’ privacy concerns need to be taken into consideration if we expect user acceptance. AmI allows us to create systems that facilitate actual human contact versus virtual contact; create systems that clarify instead of confuse; and take into consideration universal and sustainable design principles.

Ambient intelligence promises to make our lives better. But what does that actually mean? Makelberge argues that the ultimate goal is “computing for self-actualisation.” [10] Beyond addressing lower-order needs in Maslow’s hierarchy of needs in our design process, should we turn our attention to encompass “being needs”? Is this the ultimate design challenge?

In Csikszentmihalyi’s investigation of human happiness [2], he defined the term “flow” to explain the state in which one experiences absolute absorption in an activity. All our other goals of money, power, wealth, fame and fortune are driven by our need to be happy. A state of flow is achieved when we are confronted with a task that provides enough challenge for our current skill level. Provide too much challenge for the level of skill and you experience anxiety. Provide a challenge well below the level of skill of an individual and you induce boredom (see Fig. 14.2). As technology is employed more and more to reduce our daily challenges, are we creating products and services that lead to a better life? Should we be purposely approaching design for self-actualisation?

Take the original scenario of Ellen being presented with the options of the meals that can be prepared in her kitchen. If we were to follow this scenario through, one system might mechanically suggest all the correct ingredients and portions, and perhaps even cook the meal for you. No room for error.
On the other hand, a system designed for self-actualisation might already have some idea of the user’s cooking skill and set an appropriate challenge, leaving the design open and the user more fulfilled.
J. Knecht
Fig. 14.2 A student (1) will move out of flow and become bored, as his skill in an activity increases (2a), unless the challenge to succeed also increases. Likewise, a student (1) will move out of flow if the demand on him is too great (2b). To stay in flow, he must increase his level of skill. [5]
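The challenge-versus-skill relationship described above can be sketched as a simple classifier. This is purely illustrative; the function name, scale and threshold are assumptions for the sketch, not part of Csikszentmihalyi’s model:

```python
def flow_state(challenge: float, skill: float, tolerance: float = 0.2) -> str:
    """Classify the challenge/skill balance in the spirit of the flow model.

    Both inputs are on an arbitrary common scale; `tolerance` sets the
    width of the 'flow channel' and is an assumed value for illustration.
    """
    if challenge > skill * (1 + tolerance):
        return "anxiety"   # demand well above current skill
    if challenge < skill * (1 - tolerance):
        return "boredom"   # demand well below current skill
    return "flow"          # challenge roughly matches skill

# A student whose skill grows (0.5 -> 0.9) against a fixed challenge of 0.6
# drifts out of flow towards boredom, matching the caption of Fig. 14.2.
print(flow_state(0.6, 0.5), flow_state(0.6, 0.9))  # flow boredom
```

To stay in flow, either the challenge must rise with the skill or the system must detect the drift and adapt.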
If you were providing a drawing application for children, is the more rewarding application one that delivers a perfect finished drawing that the child merely colours in, or one that provides half the picture, engaging the child’s own creative abilities to complete it? Which is the more rewarding experience? And what are the long-term effects of each of these options? Whether we are designing to better meet functional needs, redefining the human-computer interface, or truly looking to design for better lives, none of this is possible without understanding human behaviour. By applying a user-centred design process and considering some of the questions raised by others, and by the author, we stand a better chance at success in the adoption and acceptance of future smart systems.
References
1. Bellotti, V., Back, M., Edwards, W. K., Grinter, R. E., Henderson, A., Lopes, C. (2002). Making Sense of Sensing Systems: Five Questions for Designers and Researchers. CHI 2002, Minneapolis, Minnesota, USA.
2. Csikszentmihalyi, M. (1990). Flow: The Psychology of Optimal Experience. Harper & Row, New York, NY, USA.
14 User-Centred Design and Development of Future Smart Systems
3. Foulk, E. (2007, September 20). Robots turn off senior citizens in aging Japan. Reuters. Retrieved March 17, 2008, from http://www.reuters.com/article/inDepthNews/idUST29547120070920?sp=true
4. Green, J. (2007). Democratizing the future: Towards a new era of creativity and growth. Philips Design.
5. Greenberg, A. (2004). Flow and peak performance. In B. Hoffman (Ed.), Encyclopedia of Educational Technology. Retrieved March 20, 2008, from http://coe.sdsu.edu/eet/Articles/flow/start.htm
6. Gould, M. (2006, April 12). Life on the open road. The Guardian. http://www.guardian.co.uk/society/2006/apr/12/communities.guardiansocietysupplement Accessed 31 March 2008.
7. Ishii, H., Ullmer, B. (1997). Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms. MIT Media Lab, Tangible Media Group, Cambridge, MA, USA.
8. IST Advisory Group (2003). Ambient Intelligence: from vision to reality.
9. IST Advisory Group (2001). Scenarios for Ambient Intelligence in 2010.
10. Makelberge, N. (2004). Flow, Interaction Design and Contemporary Boredom. IT University of Gothenburg, Gothenburg, Sweden.
11. Morla, J. (2008, February 21). Industrial Design for the Public Good. Design Within Reach. http://news.dwr.com/archive/9z1zufsvu64f1ov9au9vuugu22kft7mmmbiaf31v0lg Accessed 21 February 2008.
12. Norman, D. A. (1988). The design of everyday things. The MIT Press, Cambridge, MA, USA.
13. Schenker, J. L. (2000, February 28). Not Very PC. Time Europe, Vol. 155, No. 8.
14. Times Online. (2008, January 11). http://www.timesonline.co.uk/tol/comment/obituaries/article3167372.ece Accessed 31 March 2008.
15. Weiser, M. (1993, November 7). The world is not a desktop.
16. Wikipedia. Ambient intelligence. http://en.wikipedia.org/wiki/Ambient_intelligence Accessed 3 March 2008.
Chapter 15
Embedded Systems Research and Innovation Programmes for Industry
Kieran Delaney
Abstract Industry and academic researchers have traditionally struggled to develop meaningful, long-term partnerships. The growth in multidisciplinary research driven by concepts such as Ambient Intelligence (AmI) will complicate this further by significantly shifting the focus towards collaborative R&D among academics, a form of partnership that industry, and particularly SMEs, will find increasingly difficult to join. New approaches to support effective collaboration with industry would therefore be timely. This Chapter describes the development of a number of evolving industry-focused initiatives designed to encourage industry-led applied research. The key elements in these programmes are a flexible approach, ongoing communication about the industry and academic objectives, and a planning/road-mapping process that is clear to the industry partner. Keywords Embedded Computing, Small-to-Medium Enterprise (SME), Multinational, Industry Programme, Prototyping, Wireless Sensor Network, Intellectual Property, Sustainability, Scalability.
1
Introduction
Ireland has seen a significant increase in national funding for research, particularly since the year 2000. This has completely altered the landscape for research. Universities and Research Organisations have transferred their focus from research programmes in Europe (e.g. the Sixth and Seventh Frameworks [1]) to research programmes coordinated by national bodies, such as Science Foundation Ireland [2] and Enterprise Ireland [3]. Science Foundation Ireland (SFI) is charged with advancing scientific research in Ireland, specifically in the areas of information and
Centre for Adaptive Wireless Systems, Cork Institute of Technology, Rossa Avenue, Bishopstown, Cork, Ireland
K. Delaney, Ambient Intelligence with Microsystems, © Springer 2008
communications technology (ICT) and in biotechnology. Enterprise Ireland (EI) is responsible for the development and promotion of the indigenous business sector, and the agency invests in research and innovation in order to promote the competitiveness of Irish industry. In meeting their remits, both agencies have seeded the establishment of new Centres of Excellence with varying objectives and varying levels of impact. They are also investigating structures and mechanisms to sustain the more valuable Centres as high-impact activities into the future. For a number of reasons (strategic, collaborative and financial), this includes refocusing on research at a European level, an objective that is largely being driven by Enterprise Ireland. In Cork Institute of Technology, change has been equally rapid. The Centre for Adaptive Wireless Systems (CAWS) was founded within the Institute’s Department of Electronic Engineering in September 2000. Initially known as the Adaptive Wireless Systems Group with six researchers, it had grown to forty by the end of 2006, enabled by grants from a variety of, mostly national, sources; a total of ca. €5M was acquired in the period 2002–2006. A significant factor in this growth was the experience of the Principal Investigators involved (the three lead investigators had over thirty years’ combined experience in competitive research). A second, equally important factor was the deliberately multi-disciplinary nature of the Group/Centre’s mission, namely to harness a convergence of hardware, networking and software skills in the area of wireless and embedded sensor systems, often with external academic and industrial partners. The third factor is cultural; the Institute has consistently supported a culture of openness, informality and a ‘willingness to help’, in particular towards its industrial partners, whether this had a strategic impact for the Institute (and its staff members) or otherwise.
This ‘social glue’ effect is one reason why Irish Institutes like CIT are seen as natural focal points for supporting indigenous industry, a perspective that is gradually being formalized by Enterprise Ireland. In real terms, these factors succeeded in shifting the challenge of genuinely collaborative research on networkable embedded systems from the realm of the impossible (at least for a new organisation) to being merely very difficult. In fact, it was recognised in 2006 that, while specific collaborative projects were successful, there was a tendency for the hardware, software and networking disciplines to work side-by-side, or to deliver services to each other, rather than to ‘co-innovate’. Such things do not necessarily happen organically. In fact, while new vision statements, such as Ambient Intelligence (AmI), derive much of their potency from the idea of harnessing multiple disciplines, understanding how to implement even a small aspect of this vision can be an exercise in innovation in itself. In addition to this, it must be recognised that, however a multidisciplinary research programme is brought to a successful outcome, its output must by definition be exploitable. Dissemination through peer review is as important as ever, if not more so. However, for a country of the size of Ireland, the strategic value of providing indigenous industry with a means to increase its competitiveness, and to do so specifically in the context of Ireland as an emerging knowledge economy, cannot be overstated. Given the scale of some of the research challenges and the traditional discontinuities
that exist between industry and academia, one would be forgiven for concluding that combining these challenges, as one must, sends the overall mission back into the realm of the impossible. So what should a Research Group do? How should a research programme be constructed so that the Group can invest in scientific research and provide a genuine innovation process for industry? What are the key elements to ensuring this can be sustained? This Chapter will discuss how these questions are being addressed at Cork Institute of Technology, describing the development and implementation of R&D programmes in the context of their technical and innovation targets. In particular, we will seek to highlight what are perhaps the three most important features of any initiative of this kind: flexibility, communication and clarity in planning.
2
The Technologies for Embedded Computing (TEC) Centre
In January 2006, Cork Institute of Technology was awarded an Applied Research Enhancement (ARE) Programme [4] grant of €1.25M by Enterprise Ireland, one of the national funding agencies. The grant was awarded in order to establish the Technologies for Embedded Computing (TEC) Centre as a Centre of Excellence in applied research on embedded systems technologies. The objectives of the Centre are specifically (1) to enhance existing regional industry, (2) to create new start-up activity with international potential and (3) to build formal industry-led programmes to sustain the growth of the Irish embedded systems industry. These objectives can be distilled into the following mission statement: To Strengthen the Regional and National Innovation Infrastructure by becoming an effective “One-Stop-Facility” for enterprise in Applied Research in Embedded Systems.
The foundation for achieving this is based upon building up sustainable partnerships with regional Small-to-Medium Enterprises (SMEs) and with Multinational Companies (MNCs). The resources to do this research are derived from the additional staff in the new Centre, from the existing research entities in CIT, such as the Centre for Adaptive Wireless Systems, the Department of Electronic Engineering and the School of Mathematics and Computing, and from other academic partners. This is also supported on-campus through CIT’s enterprise development resources (i.e. the Genesis Enterprise Programme [5] and the CIT Innovation Centre, known as the Rubicon Centre). The Genesis Programme, an industry start-up programme, was established by CIT, UCC and EI in 1997 and has supported 95 start-ups. In 2006, CIT opened its Incubation Centre, the Rubicon, which currently hosts and supports over 23 start-ups; it is now the second most successful such Centre in the country. The key deliverable is the development of a sustainable and growing industry-led applied research programme integrated with established CIT research activities,
commencing under R&D strands (agreed with regional enterprise) and built to achieve a high level of industry-driven research. Each R&D strand must derive a technology roadmap and support funded collaborative research programmes co-written with SMEs and local industry.
2.1
The TEC Centre R&D Programme
The context for this research is defined by the insight of the researchers; the perspective of the relevant companies in particular and agencies such as Enterprise Ireland; as well as the activities of the Genesis Programme and Rubicon Centre enterprise initiatives. The methods employed to achieve success include (a) building a research repository - from concept to result - that is benchmarked against the state-of-the-art and current markets, (b) providing space, facilities and equipment to industry and research partners, (c) generating and maintaining continuous dialogue between researchers and industry that drives new innovation initiatives and (d) creating educational programmes for second-level, undergraduate and postgraduate students. To apply this effectively, the programme first set out to determine the key, relevant R&D challenges and then define appropriate concepts for technologies and products with innovation potential. This must be matched with an understanding of industry needs based upon substantial interaction. A major aspect of the programme is a methodology that ensures ongoing communications with companies. The approach includes clustering of researchers to make clear statements of potential, and clustering of industry partners to review concepts and support development of effective roadmaps. The objectives of the TEC Centre are, in particular, underpinned by the establishment of a prototyping and development facility within the Centre. This facility enables novel embedded platforms (developed internally, accessed through research by partner Groups and Institutes and sourced externally) to be directly utilized by industry partners in a manner appropriate to their own development programme.
3
Why Embedded Computing Systems?
The area of embedded computing (i.e. electronics and software in such things as consumer electronics, entertainment devices, household appliances, cars, airplanes, etc) is earmarked to lead the next wave of key technology development in the near future. Embedded computing is increasingly driven by large visions (Ubiquitous Computing, Ambient Intelligence, etc) that encompass a drive to simplify user interaction while maximizing the scope, scalability and effectiveness of the systems. This drive is effectively about unobtrusively embedding complex behavior (digitally and physically) within systems to enable intelligent operation and low cost of ownership. While some aspects of the development of these visions are very long-term, others are viable within the next three to five years.
3.1
The embedded computing revolution
Embedded systems have a wide range of applicability to many market sectors and their importance is growing continuously. The drive for innovation, combined with opportunities to use mature technology in new ways, has made such systems the focus of much activity. ARTEMIS (Advanced Research & Technology for Embedded Intelligence and Systems) [6] is an example of this; an industry-led initiative to reinforce the EU position as a worldwide leader in design, integration and supply of Embedded Systems. To demonstrate the potential of embedded systems, ARTEMIS has reported that:
● It is estimated that the worldwide average of 8 billion embedded micro components (2003) [7] will have doubled by 2010 (the equivalent of 3 embedded devices for every person on earth).
● Embedded devices markets are projected to grow by 10.3% [8] per year during the period 1999–2011.
● In the automotive industry, embedded electronics are an increasing proportion of the vehicle value, projected to increase from 22% in 1997 to 33–40% in 2010 [9].
● In the avionics sector, embedded software now accounts for a significant portion of the development costs of a plane.
● It is estimated that the growth of the digital home in the US alone will generate 200+ billion in revenue by 2010.
Such trends will continue in the future, with the scope of the development becoming broader, creating new markets as new enabling technologies emerge. The vision of Ambient Intelligence (AmI) that is driving the European research agenda has fuelled initiatives in device miniaturisation, where more and more functionality is available for less and less cost. New programmes of research, development and technology transfer on smart objects, networked systems and new digital applications will provide opportunities for future applied development by Irish researchers and industry, provided coherent strategic approaches are adopted quickly. European research funding for Embedded Systems under the previous framework (FP6) was estimated to be €70 million per year by 2005. In the current Framework (FP7), the projected Embedded Systems budget will double by 2010, with industry committed throughout this period to a direct contribution of 50% of total project costs. Thus, significant growth in innovation activity in embedded systems can be anticipated. Cork’s, and Ireland’s, role in this can be significant, provided the existing research expertise is consolidated and supported by an explicit action on applied embedded systems research.
3.2
The current international state of the art
“Embedded Systems are special-purpose computer systems, performing pre-defined tasks, completely encapsulated by the devices they control. They combine hardware and software to facilitate mass production and various applications”. [10]
The future of information technology systems will be driven by the vision of Ambient Intelligence, or AmI [11]. In this vision, Ambient Intelligence will surround us with proactive interfaces supported by massively distributed computing and networking technology platforms. The goal of AmI provides a driver for technology development that is likely to result in the vast integration of information systems into everyday objects [12, 13], such as furniture, clothes, vehicles, roads and even materials like paint, wallpaper, etc. Much of this is long-term research; enabling technologies and research topics for AmI include Smart Dust [14], Smart Matter [15], Smart Textiles [16], Context-Aware Systems [17], Collaborative Autonomous Agents [18], etc. However, there is significant scope to develop innovative products by merging mature technology platforms with concepts and services derived from the emerging principles of the AmI concept. In Europe, a focal point of this type of research has been the “Disappearing Computer” initiative [19], a programme consisting of 17 projects [20]. The goal of this programme is to explore how people’s lives can be supported and enhanced through developing new methods for embedding computation in everyday objects, through investigating new styles of functionality that can be engineered, and through studying how useful collaborative behaviour in such interacting artefacts can be developed. Such initiatives have been replicated globally, and in fact are increasingly driving the research agenda; for example, strategic calls for applied research on smart objects were launched in 2005 within the current European Programme (FP6), and the focus will be increased for the upcoming Framework 7. Another measure of the area’s significance is the development of large-scale networking initiatives, including ARTEMIS [21], Chess [22], the Embedded Linux Consortium [23], and the T-Engine Forum [24].
The T-Engine Forum is a community of interest around ubiquitous computing architectures, developed by the originator of the ITRON operating system, the most widely used embedded operating system in the world. In Ireland, aspects of this research domain have been captured through the Enterprise Ireland (EI) Informatics Research Initiative [25], which includes projects on digital media, e-business and mobile and wireless technologies. Funded by another Irish body, the Higher Education Authority (HEA), the M-Zones project [26], in which CIT is a partner, is developing distributed management tools for smart spaces and investigating the application of novel embedded systems to these environments. There is also an emerging national community of interest in this area, which has a focus in Dublin through the SFI-funded “Adaptive Interfaces Cluster” [27], and in Cork through other initiatives. Recent initiatives have also emerged in the area of marine systems, where heterogeneous embedded systems designs have been specified by Ireland’s Marine Institute in its “Smart Catchment” and “Smart Coast” concepts; the Centre for Adaptive Wireless Systems is active in this domain in collaboration with University College Cork and the Tyndall National Institute.
3.2.1
Technical limits of existing products and processes
Many of the international initiatives are long-term and require sustained research; however, it is broadly agreed that realization of a true AmI infrastructure will evolve from current technology platforms. To be successful in this regard, a number of research
innovations need to be made. First, there is no coherent, standardized embedded systems technology platform for AmI; currently the closest is the T-Engine platform. Second, designs are typically based on hardware or software only (e.g. embedded and mobile Java); no widely recognised hardware/software co-design approach exists that provides the required improvements in low power consumption, computing efficiency and system integration to enable the vision of ubiquitous computing and AmI. Third, security is often dealt with as an afterthought and a bolt-on. Fourth, no solution exists to overcome the power/battery-life problem for wireless embedded systems. Solutions need to be found to each of these challenges to fully enable future embedded systems.
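The scale of the power/battery-life problem can be seen from simple duty-cycle arithmetic. The sketch below is a generic estimate; the battery capacity, current draws and duty cycle are illustrative assumptions, not figures for any particular platform:

```python
def battery_life_days(capacity_mah: float, active_ma: float,
                      sleep_ma: float, duty_cycle: float) -> float:
    """Estimated lifetime of a duty-cycled wireless node on one battery."""
    # Average current is the duty-cycle-weighted mix of active and sleep draw.
    avg_ma = active_ma * duty_cycle + sleep_ma * (1 - duty_cycle)
    return capacity_mah / avg_ma / 24  # mAh / mA = hours; /24 gives days

# Assumed node: 2500 mAh battery, 20 mA when sensing/transmitting, 0.01 mA asleep.
print(round(battery_life_days(2500, 20.0, 0.01, 1.0), 1))   # always on: 5.2 days
print(round(battery_life_days(2500, 20.0, 0.01, 0.01)))     # 1% duty cycle: 496 days
```

Even aggressive duty cycling only stretches this hypothetical node to roughly a year and a half, which is why low-power design remains an open research challenge rather than a solved problem.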
3.2.2
Optimised Co-Design of Embedded Systems
There is emerging consensus that “Embedded Systems can no longer be designed using two separate threads of hardware and software merged at later stages” [16]. Thus, a co-design methodology must be developed, targeting the Cork region’s strengths in cross-layer embedded system and network optimisation, power management, autonomous sensing and control, and the emerging activity in long-life embedded systems. The TEC Centre seeks to address the following issues:
● How do we develop a fully effective systems approach that merges functional and non-functional requirements from the initial stages?
● What underlying tools and architectures are needed to capture the interaction of the system with its physical and network environments?
● How do we optimise the test and evaluation regime for the development and prototyping of new and potentially high-value products?
4
The Industry Programme
The objective of the TEC Centre is to become established as a key applied research resource for indigenous industry in the Cork region, and any measure of progress should be based upon how effective the Centre is in moving to achieve this. The foundation for achieving this must be based on building up sustainable partnerships with regional SME and MNC industry. To date, the Centre has been focused upon building up successful models of operation with SMEs (and MNCs) that bridge the gap between their development requirements (and timeframes) and those that would be typical of applied R&D projects. An approach that employs a multi-strand R&D Work Plan, built upon a productisation process, has been quite effective. This approach provides the framework for the Centre’s Industry Programme. An R&D Work Plan, shown in Fig. 15.1, is developed with each company in order to build a viable innovation process. This crystallizes the immediate and medium-term requirements of the company and bases the partnership upon implementing a series of practical prototypes on a phased basis.
Fig. 15.1 Outline of the TEC Centre Industry R&D Work Plan. The plan runs parallel project strands (technology feasibility, proof of concept, prototype, innovation partnership, prototype optimisation, technology development and productisation) alongside a business track that progresses from business concept through business planning, business development and business expansion to mature business. Direct-funded projects run for 3–6 months with deadline-driven R&D teams; EI-funded projects run for 12–36 months with innovation-driven R&D teams. The target is to develop R&D programmes (groups of aligned projects) with SMEs and MNCs.
Fig. 15.2 Building a project partnership with a company within the Industry Programme. The workflow runs: Project Idea (initial meeting with company); Draft Proposal (idea put into a proposal, objectives and milestones defined, Project Coordinator assigned); Project Contract (contract created and signed, project cost centre created); Project Team (Project Coordinator puts together a team); Project Start (project kick-off meeting); Project Management (ordering, progress monitoring, staff payment requests, regular team meetings); Project End (completion of milestones within the agreed timeframe); Project Review (clear ideas for next-generation products/projects).
As these prototyping projects are completed, the emphasis on innovation grows and one of the project strands will increase in duration and level of risk. Thus, it is not unusual for the Work Plan to evolve into a series of complementary direct-funded and grant-funded projects; this typically sees the assembly of a team of researchers (on a full-time or part-time basis) to develop and implement the targeted
objectives of the Work Plan. Note that this tends to be the case regardless of the overall value of the Work Plan. The communication process commences through a series of discussions which culminate in the completion of a research plan template by both the TEC Centre representative(s) and those from the company (see Fig. 15.2). This plan highlights the immediate technical issues faced by the company in specific areas, but also looks beyond them to provide a picture of the kind of related embedded systems innovation that could have the greatest possible impact upon the company’s growth and competitiveness. The plan is approved by the company as being an accurate representation of targets and timeframes. The collaboration then seeks to work towards achieving both the immediate and medium-term objectives. This Work Plan is structured into the programme of projects that will be most effective in providing results; in essence, the objectives are considered fluid until specific contracts for implementation are formalized. The formation of project teams is central to the successful completion of the projects and, in this context, prototyping projects operate to different criteria than R&D projects involving full-time postgraduates. An industry prototyping project prioritizes the delivery of an agreed deliverable on a specific date; team members are selected because the assigned task(s) are considered to be routine. Typically, the team members will be working on the project part-time; they may be postgraduates, staff members or completely external to the Institute. The project is managed by the coordinator, who oversees the scheduling, managing risks as they occur. The company is effectively part of the team, invited to interact with each of the members to best effect. It is not unusual for prototyping projects like this to run in sequences, evolving towards optimisation phases that ultimately become dedicated to productisation.
In these situations, the role of coordinator is gradually transferred to an appropriate company staff member and the role of the TEC Centre team tends to become focused on specific technical tasks that are part of a larger company programme. As the larger, dedicated R&D projects commence, an R&D Work Plan manager is usually appointed, who will liaise across the various active projects to support continuity. In most cases, the members of the R&D team and the prototyping team overlap, but there is sufficient separation to differentiate a time-sensitive prototyping process from innovation-focused research. Although this process has been in operation for a relatively short period of time, it has demonstrated a number of successes; some of these are described in the next section.
4.1
Industry Programme Case Studies
4.1.1
Wireless Sensor Systems for In-situ weighing of Beer Barrels
Draught beer loss is a long-standing problem for the licensed trade. In a busy bar, keeping track of what is poured and what is paid for is extremely difficult. Between the wastage resulting from bad stewardship and the revenue loss from dishonesty,
Fig. 15.3 The KegMonitor™, part of the wireless sensor network developed for Grafico Ltd to monitor the rate of beer-pour in pubs and hotels
losses of over 200 pints of beer per week are not unusual. Current systems for monitoring beer-pour use commercial flow-meter technology, thus requiring a physical integration with the beer dispenser. These systems are expensive, lack remote monitoring capability and often necessitate complex calibration and maintenance. A more effective method is to accurately monitor beer-pour by weighing the beer kegs. The TEC Centre, in partnership with industrial partner Grafico Ltd, developed an optimised prototype wireless sensor network designed to provide accurate, real-time data on the weight of numerous beer kegs (see Fig. 15.3). The platform is capable of communicating relevant weight data to base-stations using ZigBee wireless communications technology. The base-station operates user interfaces that collect and present the monitored data in an appropriate format. The system is scalable and allows relevant beer-pour information to be accessed and managed through any Wi-Fi enabled device, PDA or smart phone, and via the Internet. The TEC Centre team completed three projects with Grafico in this stage of the partnership: an initial proof-of-concept prototype, followed by a performance optimisation and test phase, and then a mechanical/electronic systems design for low-cost production. The TEC Centre coordinated the first two phases; the third phase saw further technical and business skill sets added to the overall team, and this was coordinated by the company’s Chief Executive, Mr John Dalton. The sequence of projects was characterised by specific deadlines linked to negotiations with the company’s stakeholders and potential clients. In this context, the TEC Centre operated according to an industry model. A proportion of the innovation process was built into specific aspects of the systems design, some of which have been patented by Grafico Ltd.
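The base-station logic for converting keg weights into beer-pour data can be sketched as follows. This is a hypothetical illustration, not the Grafico implementation; the node identifier, keg tare weight and pint mass are assumed values:

```python
PINT_MASS_KG = 0.568   # an imperial pint of beer weighs roughly 0.57 kg

class KegMonitor:
    """Tracks one keg from the periodic weight readings sent over the radio link."""

    def __init__(self, node_id: str, tare_kg: float, initial_weight_kg: float):
        self.node_id = node_id
        self.tare_kg = tare_kg          # weight of the empty keg (assumed known)
        self.last_weight = initial_weight_kg

    def on_reading(self, weight_kg: float) -> float:
        """Return pints poured since the previous reading."""
        poured_kg = max(0.0, self.last_weight - weight_kg)
        self.last_weight = weight_kg
        return poured_kg / PINT_MASS_KG

    def pints_remaining(self) -> float:
        return max(0.0, (self.last_weight - self.tare_kg) / PINT_MASS_KG)

keg = KegMonitor("keg-01", tare_kg=13.5, initial_weight_kg=62.3)
print(round(keg.on_reading(61.16), 1))   # 2.0 pints poured since last reading
```

Comparing pints poured against pints rung up at the till is then a straightforward reconciliation at the user-interface layer.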
Other possible innovation opportunities were also highlighted as the work progressed and these are currently the subject of further action.
4.1.2
Wireless Access to Medical Files using UWB Technology
A need has been identified by healthcare providers for very high speed mobile links between mobile devices and the backbone IT network in hospitals. In particular, cardiology and radiology departments are seeking mobile devices, such as tablet
Fig. 15.4 A PACS (Picture Archiving & Communication Systems) file, which is to be wirelessly transmitted to a PC tablet using UWB
PCs, in order to enable a clinician to view scans while s/he is on the move. However, some of these scans can be many gigabits in size, and larger, so that current wireless solutions, based on WLAN, are not up to the task. One of the TEC Centre’s industry partners, Vistapex Ltd, sought to resolve this by developing a very high speed wireless system based upon ultra-wideband (UWB) technology. The focus is EMR (Electronic Medical Records) and PACS (Picture Archiving & Communication Systems) and the aim is to deliver improved patient care, better workflow efficiency and cost reductions (see Fig. 15.4). The system can provide a range of wireless services to the patient and to healthcare professionals, such as: patient management and transfer; medication verification management; internet and email; television over IP; and billing. This activity was developed into a collaborative R&D programme between CIT and Vistapex, focusing upon two avenues for progress. The first involved a competitively funded EI Proof-of-Concept project, in which the substantive issues relating to using UWB in hospital environments are being investigated. The second avenue is designed to directly support the company’s agenda by providing representative prototypes and demonstrators. The team members include staff from the TEC Centre, postgraduates and staff from the Centre for Adaptive Wireless Systems, and the company’s CEO, Mr. John Hayes. Initial systems prototypes demonstrated the potential of UWB as being well suited to the healthcare environment, capable of carrying large amounts of data while utilizing very low power in a secure and non-interfering manner.
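A back-of-envelope comparison shows why WLAN struggles with these file sizes. The figures are illustrative assumptions (a 2 GB imaging study, about 25 Mbit/s effective WLAN throughput versus about 200 Mbit/s for a short-range UWB link), not measured values from the Vistapex system:

```python
def transfer_seconds(size_gb: float, throughput_mbit_s: float) -> float:
    """Time to move a file of `size_gb` gigabytes at a given effective rate."""
    size_mbit = size_gb * 8 * 1000   # GB -> Mbit, using decimal units
    return size_mbit / throughput_mbit_s

scan_gb = 2.0
print(f"WLAN: {transfer_seconds(scan_gb, 25):.0f} s")   # WLAN: 640 s
print(f"UWB:  {transfer_seconds(scan_gb, 200):.0f} s")  # UWB:  80 s
```

Waiting over ten minutes at a bedside is impractical for a clinician on the move, whereas the higher-rate short-range link brings the same transfer close to a minute.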
4.2
Analysis of the Industry Programme
In terms of practical project activities, the Industry Programme has progressed well since its inception. Twelve projects were completed in 2007 and two distinct forms of project emerged: a 2–4 month proof-of-concept project format and a 3–6 month optimization process; note that by increasing resources the optimization project can be completed more quickly (See Table 15.1).

Table 15.1 Selected results of the Industry Programme for 2007

TEC Centre Industry Programme 2007
• Number of company contacts: 28
• Company R&D programmes: 15
• Industry contracts: 12
• Prototypes developed: 6
• Prototypes optimised: 6
• Projects running: 3
• Projects completed: 10
• Part-time positions created: 18
• Full-time positions created: 5
• New student positions: 5
• Internships completed: 8

There was 100% commitment by industry partners to continue working in the programme on further, more innovative projects. The demand exists to increase activity by 250% in 2008, when 25 projects will be completed. Using part-time resources to complete prototyping projects has proved extremely effective; the process allows significant flexibility to resource multi-disciplinary projects, from the smallest to the largest, without significant risk. Company deadlines for these projects can be specific and onerous; thus, in effect, the TEC Centre is required to behave like a company itself in order to meet these needs. This has extended to the implementation of clear operational procedures for running industry projects and induction processes for new team members and project coordinators. In fact, the process of creating these procedures facilitated a streamlining of the work plan initiation process and the communications methods, which resulted in an increase in the efficiency of the interaction with the industry partners. One of the consequences of this process was a reorganization of the industry programme itself. The new structure was developed to facilitate the increase in the programme size and to more effectively partition the activities. For example, the success of the programme highlighted an opportunity to increase interaction between academics and companies through undergraduate programmes, provided the educational focus of those programmes could be maintained.
5
Evolving the Centre Programme
The strategic targets and tasks developed as part of the new structure are designed to focus effort effectively towards key achievements in knowledge transfer and to enable multiplier effects extending into research, teaching and learning and ultimately the sustainability (particularly funding) of CIT’s efforts in embedded systems. These objectives are framed in an evolution of the industry programme, now known as the POINT (Practical Optimisation and Innovation of Networkable Technologies) Programme.
5.1
The POINT Programme
In restructuring the industry programme a three-strand process has been adopted, comprising (1) the prototyping programme, (2) an innovation strand and (3) an education strand. The following is an extract from the strategic document that formally defines this new structure. The new POINT Programme consists of:
[1] The PIN-POINT (Prototype Iteration for Novelty) initiative, which involves short-term prototyping projects to demonstrate the technical feasibility of product concepts and to investigate opportunities for further innovation. This initiative has been successfully tested and requires scaling.
[2] The HI-POINT (Harnessing Innovation) initiative, which develops projects to capture intellectual property and initiates specific knowledge transfer processes for this IP with appropriate industry partners. This initiative requires a number of further specific pilot projects to provide insight into successful models.
[3] The KEE-POINT (Knowledge & Education for Employment) initiative, which enables the transfer of knowledge and training in a practical project-oriented context that is accessible to industry employees and to students at second, third and fourth level. This initiative requires specific pilot projects to provide insight into successful models; this will likely lead to a multi-strand approach within this initiative.
Each strand offers an entry point to potential industry partners, with involvement in all three constituting a full partnership with the TEC Centre. The most natural entry point for companies remains the process of engaging in short-term prototyping projects lasting 2–6 months. As with the original industry programme, the POINT Programme then offers a company the opportunity to develop its own innovation process, tailored to its own requirements and constraints; it assembles all of the research disciplines and expertise required to achieve this and provides the operational framework to best support success.
Some unique features of the POINT Programme are:
1. definition and management of short- and medium-term multidisciplinary projects for a company, removing the burden of defining the R&D skills needed and resourcing them,
2. provision of active coordination and support in creating many kinds of valuable innovation for companies, and
3. integration and synchronization of the programme's innovation processes with business planning and scheduling.
5.2
Finding a Sustainable Model
The successful implementation of a series of prototyping projects does not automatically mean that significant levels of high quality intellectual property (IP) will ultimately be created. A further reason for the development of the new POINT
Fig. 15.5 An indicative breakdown of the scale of industry interaction required to drive a number of IP initiatives and industry clusters (from the base of the pyramid to its apex: Industry Forum members, 50–60; industry projects, 25–30; IP transfer initiatives, 12–15; industry clusters, 2–3; start-ups, 2–3)
Programme structure was the necessity to highlight specific targets in this context. Many of the successful industry project collaborations do create the conditions for IP to be actively targeted. Defining specific research projects, particularly if the IP is set out within those projects at an early stage, represents the most direct approach to leveraging the second strand of the POINT Programme. In practical terms, however, it should not be expected that all effective industry collaborations will yield an identifiable form of IP. It is not unreasonable to expect to create know-how in many cases; however, the number of patent disclosures will be much more limited. A limited analysis of the current output from the TEC Centre Industry Programme indicates that a significant level of contact with indigenous industry, in particular, will be required in order to translate prototyping projects into genuine innovation programmes. Fig. 15.5 provides an outline of the numbers of companies and the related output in a pyramidal form. Note that the timeframes grow as one moves up the pyramid; while the numbers for industry forum members and industry projects are likely to operate in an annual cycle, the targets for start-ups and industry clusters are more likely to have a five-year cycle. The breakdown of numbers also assumes that the processes developed in the POINT Programme will become highly effective. This has implications for the resource and budgetary requirements in setting up and sustaining the Programme in the long term.
5.3
Creating an Innovation Process
The evolution from short-term industry prototyping to valuable innovation output is not a direct one. Prototyping provides for a mode of communication between researchers and the company, and it could be said to frame a common agenda. However, the traditional drivers of R&D, namely publications and acquiring funding to keep, or hire, staff and students, remains a dominating factor. The need for a
company to meet immediate targets and to focus on existing, and future, clients and stakeholders provides no less an impediment to success. This means that, in practical terms, the alignment of the project strands shown in Fig. 15.1 may be the exception rather than the rule. Fig. 15.6 highlights the issue in terms of the most likely outcome in the absence of specific drivers.

Fig. 15.6 Divergence in the Industry Work Plan where no common driver exists: the company strand (business concept, business planning, business development, business expansion, mature business) and the research strand (proof of concept, technology feasibility, prototype, prototype optimisation, productisation) proceed on separate tracks, one driven by deadlines via directly funded 3–6 month projects, the other driven by innovation via EI-funded 12–36 month projects

Fig. 15.7 Employing prototypes as research tools to create IP for industry partners: the prototyping and optimisation cycle is shared between the two strands, coupling the deadline-driven and innovation-driven work plans

Maximizing
the prospects for creating intellectual property will tend to depend upon harnessing one of a number of possible drivers. A primary driver is obviously a company specific innovation target, linked to an existing, or new, product or service. This may exist at the commencement of the partnership or may emerge based upon the collaborative process itself. In that case, certain supports are beneficial, including access to research funding and in particular to co-funded programmes, where the company may licence, or seek assignment of, the new IP through financial and resource contributions to the project. A further support relates to the use of the prototypes as research tools in their own right (Fig. 15.7). In this context, the function of the prototype may be extrapolated (of course, we may be talking about assembling duplicates, which will require the support of the industry partner) to perform another function. However, this new function should be of interest and relevance to the industry partner and may require certain levels of insight.
6
Conclusions
Research in Ireland has changed significantly since 2000. New programmes of scientific and applied research have commenced, and sustainability has become an increasingly relevant issue. The trend toward multi-disciplinary research offers many opportunities to academics; however, success, particularly in the areas of embedded computing and wireless sensor networks, requires a strong commitment to processes of joint innovation. Small-to-medium enterprises will tend to find it difficult to access this type of multi-disciplinary research unless programmes are tailored for them. One such programme, which has been running since the beginning of 2007, has to date been successful in delivering practical, prototype-oriented results and providing access to a wide variety of research disciplines. The key elements of such a programme are flexibility, constant communication and clarity in the process of planning and road-mapping.
References
1. EU Sixth and Seventh Frameworks: www.cordis.lu
2. Science Foundation Ireland: www.sfi.ie
3. Enterprise Ireland: www.enterprise-ireland.com/
4. Applied Research Enhancement (ARE): www.enterprise-ireland.com/ResearchInnovate/Research+Commercialisation/Applied+Research+Enhancement+(ARE)+Programme.htm
5. The Genesis Enterprise Programme: www.gep.ie
6. Artemis: Advanced Research and Development on Embedded Intelligent Systems, http://www.cordis.lu/ist/artemis/index.html
7. P. Hansen, "Electronic Stability Control Promoted in Japan", The Hansen Report on Automotive Electronics, vol. 17, no. 1, February 2004
8. Gartner 2002: Microprocessor, Microcontroller and Digital Signal Processor, The Forecast Through 2005
9. Automotive S/W and Electronics Briefing: IBM Automotive Industry Team, May 2002
10. Extract, http://en.wikipedia.org/wiki/Embedded_system
11. IST Advisory Group (ISTAG) Scenarios for Ambient Intelligence in 2010: http://www.cordis.lu/ist/istag-reports.htm
12. Delaney, K., Barton, J., Bellis, S., Majeed, B., Healy, T., O'Mathuna, C. and Crean, G., 2004, "Creating Systems for Ambient Intelligence", Silicon: Evolution and Future of a Technology, Springer-Verlag, Berlin Heidelberg; editors, Siffert and Krimmel, 489–514
13. Paradiso, J.A., Lifton, J., Broxton, M., "Sensate Media - Multimodal Electronic Skins as Dense Sensor Networks", BT Technology Journal, vol. 22, no. 4, October 2004, pp. 32–44
14. Warneke, B., Last, M., Leibowitz, B., Pister, K.S.J., "Smart Dust: Communicating with a Cubic-Millimeter Computer", IEEE Computer, vol. 34, no. 1, pp. 43–51, January 2001
15. Carlson, B., Gupta, V., Hogg, T., "Controlling Agents in Smart Matter with Global Constraints", in Freuder, E., ed., Proceedings of the AAAI-97 Workshop on Constraints and Agents, Menlo Park, CA: AAAI Press, 1997, 58–63
16. Krans, M., "Interactive Photonic Textiles", Ambience 2005, Philips Research, Netherlands, 19–20 September 2005, Tampere, Finland
17. OTM 2005 Workshop on Context-Aware Mobile Systems (CAMS'05), October 30–31, 2005, in conjunction with OnTheMove Federated Conferences (OTM'05)
18. IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT'04)
19. Streitz, N., Nixon, P., Guest Editors, "The Disappearing Computer", Special Issue, Communications of the ACM, vol. 48, issue 3, March 2005
20. The Disappearing Computer initiative: http://www.disappearing-computer.net/
21. "Artemis Strategic Research Agenda", Artemis Annual Conference, June 2005
22. Chess - Centre for Hybrid and Embedded Software Systems: http://chess.eecs.berkeley.edu/
23. Embedded Linux Consortium: http://www.embedded-linux.org/index.php3
24. The T-Engine Forum: http://www.t-engine.org/english/press.html
25. The Informatics Research Initiative: An Enterprise Ireland Initiative for the Third-level Sector: http://www.nsd.ie/htm/comm_rad/informatics/informatics2003.pdf
26. The M-Zones Project: www.m-zones.org
27. Adaptive Interfaces Cluster: http://www.adaptiveinformation.ie/home.asp
Part IX
Applied Systems Building Smart Systems in the Real World
1.1
Summary
As was highlighted in Part VIII, prototyping is an effective approach to assessing the most effective systemic implementation for individual users and for the relevant stakeholders. In fact, it is not inaccurate to state that it is essential in any collaborative process where an AmI-influenced system is being specified with any level of performance complexity. Moreover, attaining success is not merely about a laboratory prototype, however successful its functions. A key measure of impact and effectiveness is provided by means of deployment and field trialling. For example, many of the elements needed to effectively build and integrate wireless networks and Wireless Sensor Networks (WSNs) currently exist. It is broadly acknowledged at this point that the most effective approach to developing and sustaining progress in the WSN domain is to be practical: deploy scalable networks of sensors in real application environments. Research providing WSN data and know-how from field trials is currently acknowledged to have distinct value for that domain's entire research community. This is also effectively the situation with all AmI and Smart Systems research activities. This part provides three approaches to using the prototyping methodology to research the creation of Smart Objects (collaborative and otherwise) and applied Wireless Sensor Networks. The first, chapter 16, summarizes the research of MIT Media Lab's Responsive Environments Group, led by Professor Joe Paradiso, and their unique approach to 'interactive surfaces', including the realization of a series of applications around the concept of 'sensate media'. The second, in chapter 17, provides an overview of an ongoing case study on the practical problems of building networkable smart objects.
The approach, which is based upon hierarchical systems architectures, uses the construction of a smart table to investigate what is required in terms of ‘whole-smart-artifact design’; it includes investigation of the use of augmented materials and analyzes how (and when) the actual physical integration process should be implemented. The final approach, in chapter 18, provides an overview of a project, which incorporated the specification, design, prototype fabrication, deployment and short-term field trial analysis of a wireless sensor network freight container tracking system.
The project, a collaboration between researchers in Ireland from the domains of microelectronics and wireless networking, was driven by the requirements of a specific stakeholder, one of the regional Irish ports, which was seeking to resolve specific throughput issues under resource and operational constraints.
1.2
Relevance to Microsystems
Applied research and specifically prototyping and field trial programmes provide significant insights into the effectiveness, or otherwise, of microsystems devices and the hardware platforms upon which they are assembled. Much of this, particularly in the area of Wireless Sensor Networking (WSN), is supported by a toolkit methodology where functionality can be validated and routes to optimization and productization can be resolved. The toolkit method has the advantage of efficient and potentially timely development cycles, with broad scope for implementing design iterations. Managed effectively, it avoids costly hardware design and fabrication sequences until necessary and when the risk of subsequent errors has been minimized. The approaches demonstrate that, in addition to determining problem statements and investigating the appropriate systems infrastructure to overcome them, as in the case of chapter 18, there is merit in investigating what a technology is capable of by creating new applications and functionalities, as in the case of Chapter 16.
1.3
Recommended References
The work of the MIT Media Lab provides a broad range of inventive approaches to existing technologies and a number of relevant publications are available. The work of the Responsive Environments Group is of specific interest and further detail can be found at the Group's website. Further publications are also available on the implementation of Smart Objects, based, for example, upon 'Extrovert Gadgets' and incorporating Augmented Materials. The 'Smart-Its' project may also be relevant to the interested reader. The derivation of the system designed to track the location of freight containers is but one of numerous examples of possible applied wireless sensor network systems. Further insight may be gained, for example, through the work completed by international networking projects, such as the EU programme on CReating Ubiquitous Intelligent Sensing Environments, named 'CRUISE'.
1. N. Gershenfeld, When Things Start To Think. Henry Holt and Company, New York, 1999
2. H. Ishii, B. Ullmer, Tangible bits: towards seamless interfaces between people, bits and atoms, Proceedings of the SIGCHI 1997 conference on Human factors in computing systems, Atlanta, Georgia, United States, pages 234–241. ISBN: 0-89791-802-9
3. The MIT Media Lab's Responsive Environments Group: http://www.media.mit.edu/resenv/projects.html
4. A. Kameas, S. Bellis, I. Mavrommati, K. Delaney, A. Pounds-Cornish and M. Colley, "An Architecture that Treats Everyday Objects as Communicating Tangible Components", IEEE International Conference on Pervasive Computing and Communications (PerCom2003), Dallas-Fort Worth, Texas, March 2003
5. S. Dobson, K. Delaney, K. Razeeb and S. Tsvetkov, "A Co-Designed Hardware/Software Architecture for Augmented Materials", 2nd International Workshop on Mobility Aware Technologies and Applications (MATA'05), October 2005
6. The Smart-Its Project: http://www.smart-its.org/
7. European Commission Network of Excellence on 'CReating Ubiquitous Intelligent Sensing Environments' (CRUISE - IST-027738); Report on WSN applications, their requirements, application-specific WSN issues and evaluation metrics: http://mobilesummit2006.telecom.ece.ntua.gr/cruise/Public%20documents/cruise_d112-1_revision_final-v12.pdf
Chapter 16
Sensor Architectures for Interactive Environments Joseph A. Paradiso, Senior Member, IEEE
Abstract As microelectronics have escalated in capability via Moore's Law, electronic sensors have similarly advanced. Rather than dedicate a small number of sensors to hardwired designs that expressly measure parameters of interest, we can begin to envision a near future with sensors as commodity, where dense, multimodal sensing is the rule rather than the exception, and where features relevant to many applications are dynamically extracted from a rich data stream. This article surveys a series of projects at the MIT Media Lab's Responsive Environments Group that explore various embodiments of such agile sensing structures, including high-bandwidth wireless multimodal sensor clusters; massively distributed, ultra-low-power "featherweight" sensor nodes; and extremely dense sensor networks as digital "skins". This paper also touches on other examples involving gesture sensing for large interactive surfaces and interactive media, and overviews projects in parasitic power harvesting.
Keywords Sensor Networks, Energy Harvesting, Large Interactive Displays, Computer-Human Interaction, Ubiquitous Computing
1
Introduction
The digitally-augmented environments of tomorrow will exploit a diverse architecture of wired and wireless sensors through which user intent, context, and interactive gesture will be dynamically extracted. This article outlines a decade of research conducted by the author and his team at the MIT Media Lab’s Responsive Environments Group that explores such sensor infrastructures for creating new channels of interactivity and expression.
Responsive Environments Group at the MIT Media Laboratory.
K. Delaney, Ambient Intelligence with Microsystems, © Springer 2008
2
Interactive Surfaces
My earliest experiments with interactive environments evolved from heavily wired systems that I developed for interactive media installations, as shown in Fig. 16.1. Starting in 1994 with an activated chair that exploited transmit-mode electric field sensing to produce musical response to body posture and dynamics [1], I evolved a suite of interactive stations for the 1996 debut of the Brain Opera at Lincoln Center [2] that encompassed installations such as an array of over 300 networked multimodal percussion sensors (the Rhythm Tree) and a handheld baton controller that incorporated tactile, inertial, and optical tracking sensors (in many ways, a forerunner of the currently popular Nintendo Wii).
Fig. 16.1 The Sensor Chair (top left), The Gesture Wall (top right), a small segment of the Rhythm Tree (bottom left), and the Digital Baton (bottom right – a schematic diagram and in use during a performance)
One Brain Opera installation, the Gesture Wall, used an array of capacitive electrodes for sensing free-gesture atop interactive walls. This project sparked a deeper research interest into large interactive surfaces for public settings. As wall-sized displays decrease in cost, they will become more ubiquitous and eventually interactive. As opposed to the cloistered personal space provided by common video kiosks, large interactive displays naturally encourage collaborative activity. In public settings, small crowds typically congregate around such active walls, as individuals interacting with the displays effectively become performers, playing off their spontaneous audience. During the late 90’s, my Responsive Environments research group developed a pair of systems (Fig. 16.2) that retrofit large displays to track the position of bare hands [3]. The LaserWall used a low-cost scanning laser rangefinder mounted atop a corner of the display to create a sensitive plane just above the display surface.
Fig. 16.2 The LaserWall at SIGGRAPH 2000 (bottom) and the Tap Tracker Window in the Innovation Corner at Motorola’s iDEN Lab in Florida (top)
As the rangefinder’s detection was synchronously locked to the modulated laser, this system was insensitive to ambient light, and would measure the 2D position of the user’s hand out to roughly 4 meters at a 30 Hz scan rate. A subsequent system used an array of 4 contact microphones fixed to a large sheet of glass to determine the position of impacts from unstructured knocks and taps [4]. Realizable as a digital audio application without requiring special hardware, a set of simple heuristics determined the nature of the impact (e.g., hard tap, knuckle knock, or fist bash) and estimated the position of the impact from the differential time-of-arrival of the structural-acoustic wavefront at the transducer locations, countering the effects of dispersion in the glass. Producing resolutions on the order of 3 cm across active areas spanning more than 4 square meters, this system enabled users to interact with a large display via simple, light knocks. As the plate and bulk waves launched by the knock propagate within the glass, this system only requires pickups on the inside of the glass, leaving the (potentially outdoor) outer surface free of any hardware and completely available for interaction. During development for the Brain Opera, I also became interested in interactive floorspaces. In 1997, this resulted in an environment for interactive music called the Magic Carpet [2, 3] (Fig. 16.3) that measured the position and dynamic pressure of a user’s feet with a dense grid of piezoelectric cable laid underneath a 6-by-10 foot section of carpet. In order to make this environment immersive, upper body motion was measured by a pair of Doppler radars [2,5], which provided a rough estimate of the amount of motion, velocity, and mean direction of the objects within their beam. 
In contrast to conventional video approaches, although the information that the Dopplers provided was quite coarse, they were insensitive to illumination or clutter and required very little data processing to produce useful parameters.
Fig. 16.3 The Magic Carpet Installation at the Boston Museum of Science (left) and taping of the piezoelectric wire to the bottom of the carpet (right) before installation at the MIT Museum
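The knock-tracking window described above locates a tap from the differential times of arrival of the structural-acoustic wavefront at the four pickups. The sketch below illustrates the principle with a simple least-squares grid search; the sensor geometry and the effective wave speed are invented for illustration and are not figures from the system in [4]:

```python
import itertools

# Hypothetical geometry: four contact pickups at the corners of a
# 2 m x 1 m glass pane; the wave speed is an illustrative assumption.
SENSORS = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0), (2.0, 1.0)]
WAVE_SPEED = 1500.0  # m/s, assumed effective structural-wave speed

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def tdoa(p):
    """Arrival-time differences at each pickup relative to the first one."""
    t = [dist(p, s) / WAVE_SPEED for s in SENSORS]
    return [ti - t[0] for ti in t]

def locate(measured, step=0.01):
    """Grid-search the pane for the point whose predicted time differences
    best match the measured ones (least squares)."""
    best, best_err = None, float("inf")
    xs = [i * step for i in range(int(2.0 / step) + 1)]
    ys = [j * step for j in range(int(1.0 / step) + 1)]
    for x, y in itertools.product(xs, ys):
        err = sum((m - p) ** 2 for m, p in zip(measured, tdoa((x, y))))
        if err < best_err:
            best, best_err = (x, y), err
    return best

tap = (1.37, 0.42)            # simulated knock position
estimate = locate(tdoa(tap))  # recover it from noiseless time differences
```

A real implementation must additionally compensate for dispersion in the glass, as noted above; with noiseless measurements the grid search recovers the tap position to within the grid spacing.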
3
Wireless Sensor Clusters
Starting in the late 90’s, my research interests have increasingly encompassed wireless systems and sensor networks. Wireless sensors are foot soldiers at the front lines of ubiquitous computing. Within this rubric, however, there is still a wide hierarchy of platforms suited to different applications and demarked by their physical footprint and energy requirements, from complex, multimodal sensor clusters sporting a high bandwidth radio down to simple sensors built into a passive RF tag. The MIT Media Lab’s Responsive Environments Group has produced a wide range of such sensor systems that enable embedded computing to diffuse into various kinds of smart environments. Sensors have followed a corollary of Moore’s Law as they have dramatically decreased in size and cost across recent decades. Rather than dedicate a small number of sensors to hardwired designs that expressly measure parameters of interest, we can begin to envision a near future with sensors as commodity - where dense, multimodal sensing is the rule rather than the exception, and where features relevant to many applications are dynamically extracted from a rich data stream. Designers can now begin to embed a rich sensor package, of diversity previously seen in heavy platforms like robots or satellites, into the form factor of a wristwatch. My first exploration of this principle was a shoe (Fig. 16.4, top) for interactive dance [6]. As previous electronic footwear tended to concentrate on only one type of sensor (e.g., pressure sensors for tap dancing or inertial sensors for pedometry), my design was an expression of integration and diversity, in that I wanted to see how many different kinds of sensors I could practically embed into the constrained environment of a dancer’s footwear with a real-time wireless data transfer coming directly from the shoe. The first working design, produced in 1997, was an early example of a multimodal, compact wireless sensor node of the sort now common in sensor networks. 
As this device incorporated a suite of 16 sensors that measured various inertial, rotational, positional, and tactile degrees of freedom, it was able to respond to essentially any kind of motion that the dancer would make. The sensor diversity proved to be extremely worthwhile when devising software behaviors that responded to the dancer's motion via music – we were able to fairly easily map any kind of podiatric motion the dancer made into a causal audio response with a straightforward rulebase. To further explore applications of such dense wireless sensing, my group evolved an adaptable stacking architecture [7] a few years ago, and collaborated with the NMRC (the National Microelectronics Research Centre, now called the Tyndall National Institute) in Cork, Ireland in developing a roadmap to shrink the electronics into a sub-cm volume [8]. Each layer of our Sensor Stack is dedicated to a particular flavor of sensing. For example, the inertial layer features a full 6-axis inertial measurement unit (IMU) on a planar circuit card and includes passive tilt switches for efficient wakeup, the tactile board supports a host of piezoelectric and piezoresistive pressure and bend sensors, and the environmental board features a
Fig. 16.4 Wireless wearable sensor nodes. The 1998 version of the Expressive Footwear wireless sensor shoe for interactive dance (top), the 2004 GaitShoe (middle) for wearable bio-motion analysis with the Sensor Stack mounted at the heels, and the recent compact wireless Sensemble IMU(bottom) for interactive dance ensemble performance and sports monitoring
variety of photoelectric and pyroelectric sensors, a compact microphone, and a small cell phone camera. Although our Stack has enabled many different sensing projects (including a collaboration with the Massachusetts General Hospital to build a gait analysis laboratory into a compact shoe-mounted retrofit [9], shown in Fig. 16.4 middle), our current research with the Stack centers on sensor-driven power management. While such multisensor platforms provide a rich description of phenomena via several different flavors of measurement, extending battery life mandates that the
sensors can’t be continually powered, but must rather spend most of their time sleeping or turned off. Accordingly, we have developed an automated framework that we term “groggy wakeup” [10] where, by exposing an analysis to labeled data from particular phenomena to be detected and general background, we evolve a power-efficient sequence of hierarchical states, each of which requires a minimal set of activated sensors and calculated features, that ease the system into full wakeup. Accordingly, the sensor system comes fully on only when an appropriate stimulus is encountered, and resources are appropriately conserved – sensor diversity is leveraged to detect target states with minimal power consumption. We have recently deployed another wearable sensor node in versions tailored to interactive dance ensembles and high-speed motion capture for sports medicine [11]. Able to accommodate up to 25 nodes that update full state to a remote base station at 100 Hz, this system is built around compact nodes (the size of a large wristwatch – Fig. 16.4 bottom) featuring a full 6 axes of inertial sensing. The dance version also provides a capacitive sensor that can determine the range between pairs of nodes (out to a half-meter or so). The more recent sports version also features both high- and low-G accelerometers and high-rate gyros along with a tilt-compensated compass for directly determining multipoint joint angles, and in-processor flash memory that enables synchronized onboard recording of all sensor readings at 1 kHz for 12 seconds (sufficient to monitor a basic athletic motion – e.g., pitch, swing, or jump) with subsequent wireless offload of data from all nodes. We have also recently made a simpler version of this system for interactive exercise, featuring a set of wireless ZigBee accelerometers worn at the limbs that communicate with a mobile phone running a compiled interactive music environment [12].
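The hierarchical-state idea behind "groggy wakeup" can be caricatured in a few lines. The tier names, sensor sets and thresholds below are invented for illustration; the actual framework of [10] derives its states and features automatically from labeled data:

```python
# Illustrative "groggy wakeup" tier ladder: each tier powers only the
# sensors it needs, and only the final tier is fully awake.
TIERS = [
    {"name": "sleep",  "sensors": ["tilt_switch"],          "threshold": 1.0},
    {"name": "groggy", "sensors": ["low_g_accel"],          "threshold": 0.5},
    {"name": "alert",  "sensors": ["low_g_accel", "gyro"],  "threshold": 0.3},
    {"name": "awake",  "sensors": ["imu", "mic", "camera"], "threshold": 0.2},
]

def step(tier_index, feature_value):
    """Escalate one tier when the cheap feature computed by the currently
    active sensors crosses that tier's threshold; otherwise drop back to
    sleep, conserving power."""
    if feature_value >= TIERS[tier_index]["threshold"]:
        return min(tier_index + 1, len(TIERS) - 1)
    return 0

# A strong stimulus followed by sustained activity walks up to "awake";
# a quiet sample at any tier drops straight back to "sleep".
state = 0
for v in [2.0, 0.8, 0.6]:
    state = step(state, v)
```

The power saving comes from the early tiers: most of the time only a passive tilt switch or a single low-power accelerometer is active, and the expensive sensors are energized only once the cheap evidence justifies it.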
Although sensors indeed grow progressively smaller and cheaper, a platform as diverse as a fully outfitted Stack is still somewhat expensive, potentially running into hundreds of dollars. Another avenue through which sensors diffuse into the world is via an orthogonal axis: ultra low-cost wireless sensors that measure very few parameters, but are so cheap that they can be very widely deployed. One such “featherweight” sensor system that we have developed, shown in Fig. 16.5 (left), is a compact acceleration detector that sends a narrow RF pulse when it is jerked [13]. Although there are many applications for such a device (e.g., activity detection in smart homes [14]), we have used it to explore interactive entertainment in very large groups, where these cheap sensors can be given out with tickets, and real-time statistics run on incoming data can discern ensemble trends that facilitate crowd interaction. As the electronics are directly woken up by the sensor signal, the batteries in these devices last close to their shelf life. By exploiting a passive filter conditioned by a nanopower comparator, we have developed more generalized systems that are directly activated by low-level sensor signals in particular spectral bands. Termed “quasi-passive wakeup”, this initiative has developed a micropower, optically-interrogated ID tag (Fig. 16.5, right) for applications where standard RFID performs poorly (e.g., in the presence of metal or with very limited surface area) [15, 16]. Our “CargoNet” device (Fig. 16.5, bottom) [17] is a recent implementation of this principle. Designed for low-cost, long-duration monitoring of goods transiting through supply chains, this node
J.A. Paradiso
Fig. 16.5 An ultra low-cost wireless motion sensor for crowd interaction (top left), quasi-passive optical wakeup tag (top right), and a CargoNet Active RFID Sensor Tag (bottom)
monitors temperature and humidity once per minute, continually integrates low-level vibrations, and wakes up asynchronously on shock, light level, sound, tilt, or RF interrogation above a dynamically adaptable threshold. Accordingly, the tag stays in a very low-power sleep unless it wakes up to do periodic monitoring or encounters significant phenomena (e.g., a drop or hit, something breaking, container breach, or an RF interrogation request). The CargoNet can automatically “numb” its sensitivity to prevent redundant wakeup in environments with significant steady-state background (e.g., continual vibration, light, or noise). Tests of this platform in various shipping conveyances have exhibited average power requirements of under 25 µW, suggesting a roughly 5-year lifespan from a standard lithium coin cell battery.
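The “numbing” behaviour described above amounts to an adaptive wakeup threshold: a channel that keeps firing in a noisy environment has its threshold raised, and sensitivity relaxes back toward a floor during quiet periods. A minimal sketch of one such policy, with gain and decay constants invented for illustration (the actual CargoNet adaptation logic in [17] may differ):

```python
def adapt_threshold(threshold, fired, floor=1.0, gain=1.5, decay=0.9):
    """One adaptation step for a single sensor channel's wakeup threshold."""
    if fired:
        return threshold * gain           # numb: demand a bigger stimulus next time
    return max(floor, threshold * decay)  # recover sensitivity when quiet

t = 1.0
for _ in range(5):        # continual background vibration keeps tripping the comparator
    t = adapt_threshold(t, fired=True)
# t has grown by gain**5, so routine background no longer wakes the node
for _ in range(50):       # a long quiet stretch restores full sensitivity
    t = adapt_threshold(t, fired=False)
```

The multiplicative raise and geometric decay mean the node adapts quickly to a noisy conveyance but always drifts back to its most sensitive setting when conditions calm down.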
4 Energy Harvesting
Other sensors dispense with the battery entirely, and are powered through inductive, electrostatic, or radio interrogation like RFID tags. We have explored a variety of small, chipless sensor tags that map their response onto their resonance frequency for applications in human-computer interfaces [18], an example of which is shown at left in Fig. 16.6. A recent project, currently under development, seeks to create a very low-cost, passive RFID tag based on Surface-Acoustic-Wave (SAW) devices for precise (e.g., tens of centimeters) radio localization of objects in buildings and rooms. These tags are addressed by a series of base stations that emit a coded sequence of RF pulses that correlate with programmable reflectors fabricated onto the SAW waveguide. A correlation between the transmit sequence and the tag’s response at the base stations determines range, and multiple base stations triangulate to determine tag position. Initial fabrication of these “mTags” has been performed [19], and they are now undergoing characterization and test (Fig. 16.6, right). Going further, systems that are able to scavenge energy from their environment hold the promise of perpetual operation, with their longevity limited by component lifetimes rather than the capacity of an onboard energy store. Our forays into power scavenging (Fig. 16.7) began in 1998 with piezoelectric insoles that produce power as the wearer walks (top), followed a couple of years later by a radio powered by a button push for batteryless remote controls (bottom) [20].
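The correlation-based ranging behind the mTags can be illustrated in miniature: the base station correlates its transmitted code against the tag's delayed reflection, takes the lag of the correlation peak as the two-way time of flight, and converts that to range (with several base stations, three such ranges would then be triangulated into a position). The numbers below (code, sample rate, echo delay) are invented for the sketch; real SAW delays are set by the reflector geometry, not by the toy sample counts used here.

```python
def correlate_delay(code, rx):
    """Return the lag at which the known code best matches the received signal."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(rx) - len(code) + 1):
        score = sum(c * rx[lag + i] for i, c in enumerate(code))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

C = 3.0e8                   # propagation speed, m/s
FS = 1.0e9                  # assumed sample rate, Hz (illustrative)

code = [1, -1, 1, 1, -1]    # programmable reflector sequence (invented)
rx = [0.0] * 40             # simulated received signal
for i, c in enumerate(code):
    rx[12 + i] += c         # echo arrives 12 samples after transmission

lag = correlate_delay(code, rx)
rng = lag * C / (2 * FS)    # two-way flight: divide by 2 to get one-way range
```

Bipolar codes with low autocorrelation sidelobes (Barker-like sequences) keep the peak unambiguous even with several tags replying, which is the point of making the reflector patterns programmable.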
Our recent research in this area has established a new field called parasitic mobility [21], which interprets energy harvesting for mobile sensor networks as an adaptation of “phoresis” in nature, where nodes can actively attach to a proximate moving host (like a tick), passively adhere to a host that comes into contact (like a bur), or provide a symbiotic attraction to a passing host that makes them want to carry the sensor package (e.g., by attaching it to something useful like a pen). Although parasitic nodes can be very lightweight, since the nodes only need sufficient
Fig. 16.6 A passive LC tag mounted on a ring for finger tracking and HCI applications (left) and a prototype passive localization mTag mounted on an evaluation board (right)
354
J.A. Paradiso
Fig. 16.7 Power generating shoes with piezoelectric insoles from 1998 (top) and a self-powered dual RF push button for a wireless car window controller (bottom)
energy and agility to attach to a nearby host and determine where it is bringing them, our existing active prototypes (sized on the order of a 3 cm cube) are of a scale more appropriate for vehicles than for animate carriers, a situation that will change as the nodes grow smaller.
5 The Plug
Another way to power a sensor network in home, workplace, or factory environments is to tap into the existing power grid. As the cost of sensors decreases, it may not be unusual to see them incorporated into devices that are mainly intended for other purposes in order to widen their domain of application. Accordingly, we have
recently embedded a multimodal sensor network node into a common power strip (Fig. 16.8, top) [22].
Fig. 16.8 The prototype PLUG (top) – piggybacking a multimodal sensor network node onto a power strip and (bottom) multimodal data from 9 PLUG nodes stationed at demos during an 8-hour public event
This device has access to power (and potentially networking) through its line cord, can control and measure the detailed current profile consumed by devices plugged into its outlets, supports an ensemble of sensors (microphone, light, temperature, and vibration sensors are intrinsic, and other sensors such as thermal motion detectors and cameras can be added easily), and hosts an RF network that can connect to other PLUG sensors and other nearby wireless sensors (accordingly acting as a sensor network base station). Fig. 16.8 (bottom) shows PLUG data plotted from 8 AM to 4 PM (spanning the duration of a 200-person public event held in our auditorium). Data from nine PLUGs are shown, each of which was installed at a demo station in the atrium outside of the theater where the talks were held. The structure of the event can be read directly from the data: sound amplitude and motion increase markedly when the talks aren’t in session and the audience is milling about in the atrium. PLUGs located near the windows exhibited a clear common daylight curve, while those located under artificial lighting exhibited more constant illumination, barring any modulation or deactivation of the light source. The electric current profiles are highly varied, showing clear differences between devices that pull constant current, devices that are switched on and off, and devices (like computers, monitors, or projectors) that exhibit dynamic current draw. We have leveraged the PLUG platform to explore a variety of ubiquitous computing applications, such as a distributed conversation masking system [23] and new approaches to browsing sensor network data by tying it metaphorically to events in virtual worlds (an aspect of what we term “Dual Reality” [24]).
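The load distinctions noted above (constant draw, on/off switching, continuously dynamic draw) can already be separated with very simple statistics over a current trace. The heuristic below is a hypothetical sketch with invented thresholds, not anything taken from the PLUG firmware [22]:

```python
from statistics import pstdev, mean

def classify_load(samples):
    """Crudely label a current trace (amps) by its pattern of variation."""
    m, s = mean(samples), pstdev(samples)
    if m == 0 or s / m < 0.05:
        return "constant"                # steady appliance: negligible variation
    lo, hi = min(samples), max(samples)
    band = 0.1 * (hi - lo)
    near_rails = sum(1 for x in samples
                     if abs(x - lo) < band or abs(x - hi) < band)
    if near_rails / len(samples) > 0.9:
        return "switched"                # spends nearly all its time fully on or off
    return "dynamic"                     # computer-like load: continuous fluctuation

classify_load([1.0] * 20)                          # constant current
classify_load([0.0] * 10 + [2.0] * 10)             # device turned on and off
classify_load([0.5, 1.4, 0.9, 1.8, 0.2, 1.1] * 4)  # dynamic current draw
```

A real classifier would also look at timescales and harmonic content of the AC waveform, but even this crude bimodality test separates the three regimes in the PLUG plots.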
6 Sensate Media
In addition to shrinking the sensor node size and power requirements, another axis of diminishing scale can be the distance between nodes on a sensor network. Rather than building sensor nets with nodes many meters apart (a standard deployment for sensor networks), we are also exploring an interpretation of sensor nets as electronic skins, where the nodes are cm or mm apart. Taking inspiration from biological skin, the copious data generated from a field of multimodal receptors in such sensate media [25] is reduced locally in the network across the physical footprint of the stimulus, and then routed out to computational elements that can take higher-level action. Promising revolutionary applications in areas like prosthetics, robotics, and telepresence, this extreme vision of scalable pervasive computation embedded onto surfaces encourages dramatic advances in microfabrication, embedded computing, and low power electronics. We have fielded several platforms to explore this concept (Fig. 16.9), including a dense planar array of configurable “pushpin” computers that we have used to study localization from commonly-detected background phenomena [26], a sphere tiled by a multimodal sensor/actuator network used to study co-located distributed sensing and output [27], a sheet of interconnected small, flat multimodal sensor nodes fabricated on flex substrate [28], and a floor tiled with pressure-measuring sensor network nodes [29] that detect and characterize
Fig. 16.9 Several Dense Sensor Networks - the PushPin Computer (top left), the Tribble (top right), a sensor network “skin” with elements fabricated on flex substrate (bottom left), and a few tiles of the Z-Tiles interactive floor (bottom right), pursued in collaboration with the University of Limerick
footsteps, then route high-level parameterizations off the floor tile-to-tile, avoiding complex cabling and multiplexing schemes. In 2000, we also explored building a wireless sensor network ‘skin’ into a sensate roadbed that’s able to infer dynamic road conditions and the statistics of passing traffic [30, 31]. Sporting a permalloy magnetic sensor that can detect the disturbance in the Earth’s magnetic field caused by the passing of an automobile’s ferrous chassis and engine block overhead, this device is able to count cars and estimate rough speeds (assuming an average vehicle size). The addition of temperature and capacitive dielectric sensors can also hint at the presence of ice on the roadbed. As the measurements don’t need to be updated instantaneously, a carrier-sense multiple-access (CSMA) network can be used that allows the nodes to dump their accumulated data at different intervals, eliminating the need for network synchronization and a receiver on the nodes. Our prototype tests (Fig. 16.10) have indicated that, with proper duty-cycling of the magnetic field sensor and circa 15-minute data uploads to a nearby base station (assumed to be located at the roadside within 500 meters or so), the average node’s current draw will be on the order of 15 µA, enabling them to last up to a decade with an embedded hockey-puck-sized lithium battery, a lifespan well-suited to the
Fig. 16.10 The Sensate Roadbed prototype sensor node (top) encased in a Delrin enclosure for tests in a pothole on Vassar St. (center), and the passing car count from this node during the morning rush hour (bottom), showing development of a traffic jam at 8 am
periodic need for road resurfacing. As the node cost will be on the order of tens of US dollars in large quantities, it becomes feasible to instrument a city center with these devices for a few million dollars, a modest cost in comparison with the expense of the physical road itself.
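The decade lifetime is easy to sanity-check with a back-of-the-envelope battery budget. The figures below are assumptions for the sketch: a roughly 1.5 Ah lithium cell and an average duty-cycled draw in the tens of microamps; in practice, self-discharge and transmit-current peaks would eat into the margin.

```python
def lifetime_years(capacity_ah, avg_current_a):
    """Battery lifetime from capacity and average draw, ignoring self-discharge."""
    hours = capacity_ah / avg_current_a
    return hours / (24 * 365)

# Assumed cell capacity (1.5 Ah) and average node draw (15 microamps):
years = lifetime_years(1.5, 15e-6)   # a little over a decade of electrical budget
```

The same arithmetic shows why duty-cycling the magnetic sensor matters: at a continuous milliamp-class draw, the identical cell would last only a couple of months.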
7 Badge Platforms
A recent wearable device that we developed, called the UbER-Badge, was designed as a flexible platform that can be used to facilitate interaction at large social events as well as a tool to analyze human dynamics [32]. Sporting a multitude of features, the badge includes a large, highly-visible LED display for scrolling text and showing simple animations, a line-of-sight IR port for communicating with nearby badges or active IR tags, and an onboard radio for wireless networking. These badges have been used by over 100 simultaneous attendees at several large Media Lab events. Beyond facilitating applications such as wireless messaging, voting, and bookmarking of other badges or tagged demos during our open house, the badges were extremely effective at timekeeping during tightly-scheduled presentations, where all badges in the audience flashed bright time cues to the speaker, becoming increasingly insistent as talks ran over. The badges also continuously logged accelerometer and audio spectral data (see Fig. 16.11). An analysis of our data [32] has indicated that the badges’ measurements of body motion and voice characteristics, together with the IR person-person data, predict relevant aspects of user behavior (such as interest level) and can determine social context (such as affiliation with other users). We are now finalizing a new system called “Spinner” [33] with hardware components that include a small badge (hosting IR, compass, microphone, and accelerometers), a wrist sensor (hosting accelerometer, compass, and Galvanic Skin Response [GSR] monitor) and a stationary network of multimodal sensors (including video and audio). Spinner seeks to enable automatic assembly of captured video that best fits a story-board “query” describing participants’ activities projected onto an abstracted narrative “plot,” with the affective/social context derived from the wearable sensors acting as primary keys to this query.
In this fashion, a pervasive sensor network is used to derive a projection of the participants’ daily life that best fits a story – indeed, through such emerging frameworks, we all become actors.
8 Conclusion
This article has presented several projects from the Media Lab’s Responsive Environments Group that illustrate a range of approaches to sensor architectures for pervasive computing. The article has adopted the style of a high-level survey, omitting detail in favor of a broad presentation. Readers are encouraged to peruse the cited
Fig. 16.11 An UbER-Badge (top) and accelerometer data logged from all badges worn at a recent Media Lab function - the structure of the event (talk sessions, breaks, open house) is clearly evident
references for more information, including extensive overviews of related and prior work for each of the projects presented here. More details and video clips showing several of these systems in action can be downloaded from: http://www.media.mit.edu/resenv/projects.html

Acknowledgment The author acknowledges the hard work of several generations of his students in the Responsive Environments Group, upon which much of this work is based. Most of these projects were supported by the Things That Think Consortium and the Media Laboratory’s industrial partners.
References

1. Paradiso, J.A., Gershenfeld, N., “Musical Applications of Electric Field Sensing,” Computer Music Journal, Vol. 21, No. 3, Summer 1997, pp. 69–89.
2. Paradiso, J.A., “The Brain Opera Technology: New Instruments and Gestural Sensors for Musical Interaction and Performance,” Journal of New Music Research, 28(2), 1999, pp. 130–149.
3. Paradiso, J.A., Hsiao, K., Strickon, J., Lifton, J. and Adler, A., “Sensor Systems for Interactive Surfaces,” IBM Systems Journal, Vol. 39, No. 3&4, October 2000, pp. 892–914.
4. Paradiso, J.A. and Leo, C-K., “Tracking and Characterizing Knocks Atop Large Interactive Displays,” Sensor Review, 25:2, 2005, pp. 134–143.
5. Paradiso, J.A., “Several Sensor Approaches that Retrofit Large Surfaces for Interactivity,” paper presented at the UbiComp 2002 Workshop on Collaboration with Interactive Walls and Tables, Gothenburg, Sweden, September 29, 2002. See: http://ipsi.fhg.de/ambiente/collabtablewallws/
6. Paradiso, J., et al., “Design and Implementation of Expressive Footwear,” IBM Systems Journal, 39(3&4), October 2000, pp. 511–529.
7. Benbasat, A.Y. and Paradiso, J.A., “A Compact Modular Wireless Sensor Platform,” in Proceedings of the 2005 Symposium on Information Processing in Sensor Networks (IPSN), Los Angeles, CA, April 25–27, 2005, pp. 410–415.
8. Barton, J., Delaney, K., Bellis, S., O’Mathuna, C., Paradiso, J.A., Benbasat, A., “Development of Distributed Sensing Systems of Autonomous Micro-Modules,” in Proc. of the IEEE Electronic Components and Technology Conf., May 27–30, 2003, pp. 1112–1118.
9. Bamberg, S.J.M., Benbasat, A.Y., Scarborough, D.M., Krebs, D.E., Paradiso, J.A., “Gait analysis using a shoe-integrated wireless sensor system,” to appear in the IEEE Transactions on Information Technology in Biomedicine, 2008.
10. Benbasat, A.Y. and Paradiso, J.A., “A Framework for the Automated Generation of Power-Efficient Classifiers for Embedded Sensor Nodes,” in Proceedings of the 5th ACM Conference on Embedded Networked Sensor Systems (SenSys’07), November 6–9, 2007, Sydney, Australia, pp. 219–232.
11. Aylward, R. and Paradiso, J.A., “A Compact, High-Speed, Wearable Sensor Network for Biomotion Capture and Interactive Media,” in Proc. of the Sixth International IEEE/ACM Conference on Information Processing in Sensor Networks (IPSN 07), Cambridge, MA, April 25–27, 2007, pp. 380–389.
12. Jacobs, R., Feldmeier, M. and Paradiso, J.A., “A Mobile Music Environment Using a PD Compiler and Wireless Sensors,” to appear in Proc. of the 8th International Conf. on New Interfaces for Musical Expression (NIME 2008), June 5–7, 2008, Genova, Italy.
13. Feldmeier, M. and Paradiso, J.A., “An Interactive Music Environment for Large Groups with Giveaway Wireless Motion Sensors,” Computer Music Journal, Vol. 31, No. 1, Spring 2007, pp. 50–67.
14. Tapia, E.M. and Intille, S., “Activity Recognition in the Home Using Simple and Ubiquitous Sensors,” in Proc. of the 2004 Pervasive Computing Conference, Vienna, Austria, April 2004, pp. 158–175.
15. Ma, H. and Paradiso, J.A., “The FindIT Flashlight: Responsive Tagging Based on Optically Triggered Microprocessor Wakeup,” in G. Borriello and L.E. Holmquist (Eds.): UbiComp 2002, LNCS 2498, Springer-Verlag, Berlin Heidelberg, 2002, pp. 160–167.
16. Barroeta Perez, G., Malinowski, M., and Paradiso, J.A., “An Ultra-Low Power, Optically-Interrogated Smart Tagging and Identification System,” in the Fourth IEEE Workshop on Automatic Identification Advanced Technology, Buffalo, New York, 17–18 October, 2005, pp. 187–192.
17. Malinowski, M., Moskwa, M., Feldmeier, M., Laibowitz, M., Paradiso, J.A., “CargoNet: A Low-Cost MicroPower Sensor Node Exploiting Quasi-Passive Wakeup for Adaptive Asynchronous Monitoring of Exceptional Events,” in Proceedings of the 5th ACM Conference on Embedded Networked Sensor Systems (SenSys’07), November 6–9, 2007, Sydney, Australia, pp. 145–159.
18. Paradiso, J.A., et al., “Electromagnetic Tagging for Electronic Music Interfaces,” Journal of New Music Research, 32(4), Dec. 2003, pp. 395–409.
19. LaPenta, J., Real-time 3-D Localization using Radar and Passive Surface Acoustic Wave Transponders, MS Thesis, MIT Media Laboratory, August 2007.
20. Paradiso, J.A. and Starner, T., “Energy Scavenging for Mobile and Wireless Electronics,” IEEE Pervasive Computing, Vol. 4, No. 1, February 2005, pp. 18–27.
21. Laibowitz, M. and Paradiso, J.A., “Parasitic Mobility for Pervasive Sensor Networks,” in H.W. Gellersen, R. Want and A. Schmidt (eds.): Pervasive Computing, Third International Conference, PERVASIVE 2005, Munich, Germany, May 2005, Springer-Verlag, Berlin, pp. 255–278.
22. Lifton, J., Feldmeier, M., Ono, Y., and Paradiso, J.A., “A Platform for Ubiquitous Sensor Deployment in Occupational and Domestic Environments,” in Proc. of the Sixth International IEEE/ACM Conference on Information Processing in Sensor Networks (IPSN 07), Cambridge, MA, April 25–27, 2007, pp. 119–127.
23. Ono, Y., Lifton, J., Feldmeier, M., Paradiso, J.A., “Distributed Acoustic Conversation Shielding: An Application of a Smart Transducer Network,” in Proceedings of the First ACM Workshop on Sensor/Actuator Networks (SANET 07), Montreal, Canada, September 10, 2007, pp. 27–34.
24. Lifton, J., Dual Reality: An Emerging Medium, PhD Thesis, MIT Media Laboratory, September 2007.
25. Paradiso, J.A., Lifton, J., and Broxton, M., “Sensate Media – Multimodal Electronic Skins as Dense Sensor Networks,” BT Technology Journal, Vol. 22, No. 4, October 2004, pp. 32–44.
26. Broxton, M., Lifton, J., and Paradiso, J.A., “Wireless Sensor Node Localization Using Spectral Graph Drawing and Mesh Relaxation,” ACM Mobile Computing and Communications Review, Vol. 10, No. 1, January 2006, pp. 1–12.
27. Lifton, J., Broxton, M., and Paradiso, J.A., “Distributed Sensor Networks as Sensate Skin,” in Proceedings of the 2003 IEEE International Conference on Sensors, October 21–24, Toronto, Ontario, pp. 743–747.
28. Barroeta Pérez, G., S.N.A.K.E.: A Dynamically Reconfigurable Artificial Sensate Skin, MS Thesis, MIT Media Laboratory, August 2006.
29. Richardson, B., Leydon, K., Fernstrom, M., and Paradiso, J.A., “Z-Tiles: Building Blocks for Modular, Pressure-Sensing Floorspaces,” in Proc. of the ACM Conference on Human Factors in Computing Systems (CHI 2004), Extended Abstracts, Vienna, Austria, April 27–29, 2004, pp. 1529–1532.
30. Knaian, A.N., “A Wireless Sensor Network for Smart Roadbeds and Intelligent Transportation Systems,” MS Thesis, MIT Department of EECS and MIT Media Lab, June 2000.
31. Knaian, A., Paradiso, J.A., “Wireless Roadway Monitoring System,” U.S. Patent 6,662,099, December 9, 2003.
32. Laibowitz, M., Gips, J., Aylward, R., Pentland, A., and Paradiso, J.A., “A Sensor Network for Social Dynamics,” in Proc. of the Fifth International IEEE/ACM Conference on Information Processing in Sensor Networks (IPSN 06), Nashville, TN, April 19–21, 2006, pp. 483–491.
33. Laibowitz, M., “Distributed Narrative Extraction Using Sensor Networks,” PhD Thesis Proposal, MIT Media Lab, April 12, 2007.
Chapter 17
Building Networkable Smart and Cooperating Objects Kieran Delaney, Ken Murray, and Jian Liang
Abstract Wireless networks and sensor technology have converged, and are being broadly applied in numerous areas. Amongst other things, pervasive computing highlights a specific aspect of this convergence, namely the integration of distributed sensing and wireless communications into everyday objects. The rationale that future computing systems should be unobtrusive and form a seamless part of our environment not only underpins this objective, it demands that the integration process be effective. It is challenging to design and build Smart Objects. It is even more challenging to seek to transform our everyday environments, and the objects in them, on a massive scale. This chapter addresses the practical problem of building a networkable Smart Object that is expected to perform largely the same physical functions as its current ‘dumb’ equivalent, while ‘infusing’ its use-space with an intelligence that supports intuitive interaction and creativity and provides access to significant computing resources on demand. An approach based upon hierarchical systems architectures uses the construction of a smart table to investigate what is required in terms of ‘whole-smart-artifact design’ and (current and future) materials, and also outlines issues relating to how (and when) the actual physical integration process should be implemented. Keywords Embedded Computing, Wireless Sensor Networks, IEEE 802.15.4, Augmented Materials, Smart Objects, Design, Hardware Toolkits
1
Introduction
Building a Smart Object (or a cooperating object, intelligent artifact, etc.), particularly where there is a requirement for local ‘intelligence’ and embedded sensing, is an involved process. Many forms of ‘Gadget’ are currently on the market or being
Centre for Adaptive Wireless Systems, Cork Institute of Technology, Rossa Avenue, Bishopstown, Cork, Ireland
K. Delaney, Ambient Intelligence with Microsystems, © Springer 2008
K. Delaney et al.
developed through research (the emphasis here being on issues like collaboration and ‘intelligence’). In certain cases, an effective combination of design and function has been successful in capturing global attention; the Nintendo Wii™ is perhaps one of the more obvious recent examples of this. The system’s rapid market growth is indicative of an appetite for ‘intelligent’ behaviour and ‘smart objects’, as long as the added value is clear, affordable and dependable. Research to address the challenges of creating intelligent environments is becoming highly user-focused; in essence, the issues of added value, dependability, etc., as highlighted in the example of the Wii™, are central to achieving success. Without a genuine understanding of what conveys ‘meaning’ to a user experience, the prospect of a successful uptake is very low. The application of any selected combination of the assortment of new technologies must be framed by this kind of user-driven insight. However, it should also be understood that currently available technologies may not represent the full suite of what is needed in order to deliver the requisite solutions. Thus, user-driven research must evolve hand-in-hand with technological innovation. An example of one programme that sought to combine these perspectives through research, though perhaps with the emphasis on technology, was the Disappearing Computer Programme. The goal of the Disappearing Computer initiative [1] was to actively seed the development of information technology that can be diffused into everyday objects and settings. One of a number of projects that actively sought to develop prototype Smart Objects was the Extrovert Gadgets (or eGadgets) project.
1.1 The Extrovert Gadgets Project
Extrovert-Gadgets [2] was a three-year R&D project with three academic partners: the Computer Technology Institute (CTI) in Greece, who were the project coordinators; the National Microelectronics Research Centre (NMRC) in Ireland, now known as the Tyndall National Institute; and the University of Essex in the United Kingdom. The goal of the project was: To provide a technological framework that will engage and assist ordinary people in composing, (re)configuring or using systems of computationally enabled everyday objects, which are able to communicate using wireless networks.
The framework achieves this by adopting a dynamic “plug-and-play” approach with tangible, transducer-enhanced objects, in a manner that has much in common with the way system builders compose software systems out of components. The eGadgets project developed a platform that is designed to enhance everyday objects, such as kitchen appliances, entertainment units, heating, lighting and security systems, and even clothing and furniture, with intelligent and communicative abilities (see Fig. 17.1). One of the key outputs from the project was a series of practical prototypes that informed the overall process of creating Smart Objects and in particular highlighted
17 Building Networkable Smart and Cooperating Objects
Fig. 17.1 A functioning Extrovert Gadgets “User Study Environment” Demonstrator on display at the Ubiquitous Computing Conference 2002
the challenges to be faced in integrating distributed sensors and computational processors into everyday objects. The main demonstrator for the eGadgets project was the subject of intense discussion within the consortium. The idea was best proved by demonstrating that a large number of artifacts could coordinate effectively, and dynamically, according to changing user requirements; doing so faced significant practical barriers. Among the most pressing factors were budget concerns (i.e. the amount of hardware that could be purchased through the materials and equipment budgets was relatively low, and certainly too low for any reasonable level of silicon device or subsystem development); there was also what may be described as a deficit of hardware systems development experience within the consortium. Prior to the eGadgets project, when the word ‘system’ was used amongst NMRC researchers (who were the ‘hardware’ partners), it was most likely with reference to a silicon platform containing a single sensor and the conditioning circuitry required to digitize its output. In eGadgets, the hardware system was a full implementation of the gadgets themselves, including wireless communications, physical integration of the sensor arrays (required to build a picture of context) and the computational power to manage the data such that an effective, intelligent decision-making behaviour emerged. This stretched the existing skill sets within the partnership, but this in itself was not a core problem; the programme that drove the Disappearing Computer initiative designed a risk-reward balance that created significant challenges for most of the projects’ partners. The more difficult problem would be in building a system that achieved a sufficient level of complexity to demonstrate the basic principle of the project. The chosen demonstrator was a ‘User Study Environment’ (see Fig. 17.2) where augmented artifacts in the form of tables, chairs, lamps and books could
Fig. 17.2 The construction of a Study Gadget-world in which sensor systems are embedded in objects; objects are digitally “plugged” together to perform user defined actions (typically the user employs programming-by-example after the objects are ‘plugged’ together)
coordinate based upon user requirements, as defined within an editor artifact. The demonstrator construct was called a gadget world. The only effective route, that of using commercial off-the-shelf components, became a design task for nodal subsystems built around PDA-driven artifacts. The ability to integrate sensors into the artifacts was limited by the design of the sensors, by the nature of the artifacts and by project schedules. It became clear that while miniaturised microelectronic systems offered a platform with the potential to deliver sensor interfaces appropriate to the Disappearing Computer’s goal of unobtrusiveness, the issues of micro-sensor packaging and physical integration were likely to be significant. It was also clear that the nature of the full systems architecture would be a major influence on the local subsystems description. When the final eGadgets demonstrator was successfully completed, the active version of the hardware platform was approximately 100 × 100 mm (see Fig. 17.3); it supported over twenty sensor devices per board and contained a microcontroller and an FPGA in order to meet the platform’s requirement for scalable sensor management and investigation of local intelligence issues (i.e. the use of agents for artifact-level decision-making). Many of the sensors were simple physical sensor devices, such as light-dependent resistors (LDRs) and tilt switches. In practical terms the demonstrator worked effectively, but purely as a proof-of-concept, with the hardware implementation providing insight into the issues of sensor and systems integration for the chosen artifacts, rather than active solutions. It became clear that creating appropriate sensor interfaces required the development of an investigative toolkit; this was a conclusion reached by a significant proportion of the R&D community, in this and in related disciplines. A further
Fig. 17.3 The eGadgets Sensor Interface board, which permitted over 20 sensors to be physically attached to an object and linked to a Gateway (for the project demonstrator this was a PDA). The board incorporated an FPGA platform to support more complex algorithms.
Fig. 17.4 The Tyndall wireless sensor mote
issue that drove the toolkit approach was a requirement to effectively translate the use of off-the-shelf sensors, processors and transceivers into research tools that impacted upon the research traditional to the NMRC’s microelectronics mission. In other words, it could help determine how research programmes (within a Centre
K. Delaney et al.
that investigated miniaturisation, microsystems development and hybrid systems packaging) could benefit from a broader perspective on systems. The specific answer lay in the emerging area of Wireless Sensor Networks (WSN), through which NMRC (now Tyndall) focused the development of a 25 mm Tyndall mote (see Fig. 17.4), creating a versatile sensor interface capable of supporting complex sensors, such as a 6-degree-of-freedom inertial measurement unit.
1.2 A problem statement
At the core of this toolkit are simple processors, low power transceivers and micro-sensors; a key requirement for the toolkit is its use in investigating effective packaging techniques for sensor/actuator subsystems. For this, the focus (in Tyndall/NMRC and in other institutes) turned to implementing 10 mm, 5 mm and ultimately 1 mm nodes, where the real potential of micro/nano sensors could be framed for investigation through appropriate applications. In many respects, this operates on a separate, slower timeframe than the implementation of gadget worlds and ambient systems. Practically, the full AmI system cannot be broadly realised without innovation at the microsystems level. What should be noted is that the most likely systems solutions, those based upon heterogeneous approaches, can enable the phased evolution of a practical Ambient Intelligence. Of course, this assumes that certain issues regarding the hardware implementation are eventually resolved. This brings us back to one of the most consistent hardware systems challenges faced by the eGadgets project: how should distributed sensors be integrated into everyday objects?
Despite the prototyping nature of the eGadgets project it was clear that this was a non-trivial issue, and that, while toolkits supported an investigative process, it could not be assumed that the correct integration methodologies would emerge naturally as sensor nodes functioned more effectively and became smaller. An interesting indicator here is how the general shape of sensor nodes built by toolkit methods has tended to be consistently cubic, influenced by the geometric shapes favored in silicon and printed circuit board fabrication.
1.3 A smart object that uses augmented materials?
Advances in materials technology, sensor and processor miniaturisation, power harvesting and context-aware software have opened up the possibility of constructing "augmented" materials: matter that includes computational intelligence and communications and that is used directly to build intelligent artefacts. The feasibility of integrating computational intelligence within a material fabric has
been presented in [3] and outlined in Chapter 2 of this book. There is a significant amount of research to be done in order to fully achieve an augmented material; however, it is clear that the optimum design of such a material will develop from its inclusion as part of a larger, heterogeneous system. The material will provide a significant underlying sensory and computational infrastructure, but certain functions (and, thus, hardware subsystems) will not be part of that infrastructure. Selected features, such as precision inertial measurement, will form part of the larger heterogeneous system, since a single accurate inertial measurement unit (IMU) provides the more effective solution from both a cost and a performance perspective. Thus, one can envisage a point in the object assembly process where such subsystems will be installed, and this can be considered part of the physical assembly of the object itself. In an augmentation process, this step would be meaningful as 'programming' of the object and its underlying augmented material(s). To effect this requires a 'whole-smart-artifact design' process. In the next section, this is explored through prototyping a specific artifact, a Smart Table.
2 Building a Smart Object
The focus of this particular case study is a table. It was determined that the design of the table should permit its 'programmed' assembly primarily from Augmented Materials, once such materials are realised. Thus, the entire table was fabricated in a manner that would allow it to be assembled from a single flat panel. The current version, shown in Fig. 17.5, is made from wood and can be fully disassembled to reform the panel layout shown in Fig. 17.6. A summary of the assembly steps is provided in Table 17.1.
Fig. 17.5 A ‘Smart’ Table Design constructed from wood
Table 17.1 Assembly sequence for a Smart Table
1. Using pieces (10–25), construct four hollow legs. Each leg should consist of four pieces of wood, joined together using screws, plus a leg-top piece that ensures the four pilot holes sit at the same end of the leg.
2. Connect all four table legs together using the timber supports (4, 5). The legs should be connected to the supports using fixtures and screws, with each fixture taking six screws (use the pilot holes on the leg as a guide). You should now have a rectangular frame in front of you.
3. The next two steps give the structure more stability. Using fixtures and screws, connect each leg to a triangular corner support.
4. Place timber stages at the top corners of the frame and connect them using screws.
Connect a load cell to each corner piece using bolts, ensuring that the countersunk head is fitted into the countersunk hole to give a flush surface. The table top can now be placed onto the frame.
The test table is constructed from pieces cut from a single augmented material, so that the original network is disassembled and then reassembled into a different structure. Each leg is built from four leg pieces and one leg top. Two long support beams, two short support beams and four corner support beams form a frame that provides the table's stability, and the table surface then sits on top of the four legs and the frame. From the perspective of a component, its local wireless sensor module communicates with the other modules nearby; when new components enter communication range, the local module talks to its new neighbours and updates the network routing accordingly.
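The neighbour-discovery behaviour described above can be sketched as follows. This is an illustrative abstraction, not the chapter's implementation: the class name, the position-based range test and the fixed communication range are all assumptions.

```python
import math

class SensorModule:
    """Illustrative sketch of a per-component wireless module that
    rebuilds its neighbour table as pieces are (re)assembled."""

    def __init__(self, node_id, position, comm_range=0.5):
        self.node_id = node_id
        self.position = position      # (x, y) in metres
        self.comm_range = comm_range  # assumed radio range
        self.neighbours = set()

    def in_range(self, other):
        dx = self.position[0] - other.position[0]
        dy = self.position[1] - other.position[1]
        return math.hypot(dx, dy) <= self.comm_range

    def update_neighbours(self, modules):
        """Re-scan reachable modules and rebuild the neighbour table,
        as would happen when a new component enters range."""
        self.neighbours = {m.node_id for m in modules
                           if m.node_id != self.node_id and self.in_range(m)}
        return self.neighbours
```

In a real module the scan would be driven by radio beacons rather than known positions; the point is simply that each module's view of the network is local and is refreshed whenever the physical assembly changes.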
Fig. 17.6 The ‘Smart’ Table may be completely disassembled to form a flat panel (top) and then re-assembled. A number of supports (bottom) are used to ensure its stability
The table itself provides a focal point for study on forms of augmentation at both a macro- and a micro-level. At the macro-level, a selection of sensory features has been added to it, beginning with the ability to determine the weight and location (i.e. centre-of-gravity) of an object resting upon the surface of the table. Four load cells were employed to achieve this, in a manner similar to that reported by Schmidt et al. in their paper "Context Acquisition Based on Load Sensing" [4]. Further detail is provided in Section 3. At the micro-level, an evaluation of networking methodologies has commenced with the purpose of determining how network deployment, and the subsequent behaviour, can be merged with the type of table developed in the case study. A preliminary analysis, performed through simulation, is presented in Section 4.
3 Systems Design for a Smart Table
3.1 Scalable Distributed Load Sensing
A scalable approach to building a distributed load sensing system was implemented, in which a Wireless Sensor Network (WSN) plays a critical role. In order to manage high volumes of sensor data without overwhelming the traditional processing infrastructure, sensor networks and embedded sensing technologies are combined to build an autonomous system that works close to the phenomena being sensed and permits control decisions to be made locally. This experiment focuses on the combination of a wireless sensor network with load sensing subsystems (see Fig. 17.7) and the application of this system to the 'Smart Table'. This work relates to a paper presented by Schmidt et al. on "Context Acquisition Based on Load Sensing" [4]. In this paper, the researchers were able to identify the weight and positions of loads on top of a test table by using an embedded sensing
Fig. 17.7 A load cell on the Smart Table
system of load cells. Our experiment replicated aspects of their work, but utilized a sensor network instead of an integrated processing platform to provide for scalability in the system.
3.2 Load Sensing Experiments
The purpose of this set of experiments is to determine the sensitivity of the distributed load-sensing system set up on the Smart Table, in comparison with experiments performed at Lancaster University using a similar system. The variation in system construction related to the use (in the Smart Table) of a distributed wireless sensor network (WSN) rather than a central system. The purpose of the analysis was to validate that, since the load sensing devices were the same in both experiments, the results should be broadly similar, meaning no negative performance impact due to the WSN. This proved to be the case. The following outlines a series of tests that were made to investigate network performance for the 'Smart Table'.
State 1 – No load present: Four sensor nodes communicate with the sink node separately, indicating that no load is placed anywhere on the table surface (see Fig. 17.8).
State 2 – Single load at various locations on the table: Loads of 1.9 kg were placed on top of the table at three different positions. The first position is the centre of the table surface, with coordinates (70 cm, 40 cm). The second position is the centre of the bottom-left region of the table surface, with coordinates (35 cm, 20 cm). The third position is the centre of the bottom-right region of the table surface, with coordinates (35 cm, 60 cm).
State 3 – Differing weights: Different weights, in increments from 1 kg to 9 kg, are placed at a variety of positions, such as (0 cm, 0 cm), (17.5 cm, 10 cm), (35 cm, 20 cm), (52.5 cm, 30 cm) and (70 cm, 40 cm). The experimental results highlighted that heavier weights on the Smart Table provided more accurate results regarding the location of a weight's centre of gravity.
Fig. 17.8 A Load sensing measurement is taken where the only load is the weight of the table-top itself; the table-top sits on the four load cells
Fig. 17.9 Both weight and the centre-of-gravity are measured
The heaviest weight in use was 9 kg and, in this case, the accuracy of the results reached an average error of less than 1 cm along both the X-axis and the Y-axis. In fact, this result appears to be better than the original experiment performed at Lancaster University. Because of the flexibility of the table surface (a deliberate feature), results for lighter loads are not as good as expected. Better experimental results could be obtained if the table surface material were changed to become more rigid. In terms of the system, the wireless sensor nodes proved to be effective. The behaviour of the material and the shape of the surface were recorded clearly and accurately through a load-bearing measurement and a centre-of-gravity calculation. In further experiments, sensors will be added to acquire information about the shape of the objects resting on the table, to sense self-motion and the proximity of other objects, and to add other features.
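The weight and centre-of-gravity calculation from four corner load cells can be sketched as a weighted average of the corner positions. This is a minimal sketch, not the project's code: the table dimensions (140 cm × 80 cm) are an assumption chosen to match the (70 cm, 40 cm) table-centre coordinate quoted above, and readings are idealised forces with the table-top's own weight already tared out.

```python
def weight_and_cog(readings):
    """Estimate total load and centre of gravity from corner load cells.

    `readings` maps each load cell's (x, y) position in cm to the
    (tared) force it registers, in kg. The centre of gravity is the
    force-weighted average of the cell positions.
    """
    total = sum(readings.values())
    if total == 0:
        return 0.0, None  # nothing on the table
    x = sum(w * pos[0] for pos, w in readings.items()) / total
    y = sum(w * pos[1] for pos, w in readings.items()) / total
    return total, (x, y)

# Assumed corner positions for a 140 cm x 80 cm table-top.
CORNERS = [(0.0, 0.0), (140.0, 0.0), (0.0, 80.0), (140.0, 80.0)]
```

For example, a 9 kg load at the table centre loads each corner cell equally (2.25 kg), and the computed centre of gravity is (70, 40); in the distributed version each WSN node contributes one term of the sums.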
4 Material Integrated Networks
The top-down approach to determining the full heterogeneous infrastructure for augmenting a 'Smart Table' is framed around wireless sensor networking and commercially available sensors. At a prototyping level, this will tend to be effective, though it is likely that many of the sensors will not ultimately prove suitable for use in any subsequent commercial version of such an object. A case in point is the load cell sensor employed in the above experiment, which, as it is typically intended for industrial tasks, is too large to be considered except in a prototype. In considering the integration of an augmented material into this form of 'Smart Table', we take what might be considered a bottom-up approach. One of the first issues to be investigated is the formation of network-level activity between the sensor elements that will compose the augmented materials infrastructure. This is important in determining how the material will ultimately be programmed (i.e. network changes when, for example, a material is cut will need to translate to 'self-knowledge' in that material and inform a 'natural' programming process). As it will impact upon
computation, memory, power and possibly even the sensor device selection, it will also affect the design of the individual sensor elements themselves. One of the approaches being employed to investigate a network-level infrastructure for an augmented material is simulation. A range of simulators, originally developed for wireless sensor networking, can be employed for this purpose. The following section describes some of these simulations, in this case using an established WSN protocol to investigate network-level performance requirements for this new infrastructure.
4.1 Networking Challenges in Augmented Materials
This work focuses on the hardware considerations, in terms of location, communication, sensing and power, involved in building a co-operative network of sensor elements. It also addresses the appropriate programming model for applications on the resource-constrained platforms embedded within augmented materials, and it extends the discussion to include data networking and the challenges faced by dynamically reconfiguring sections of a smart material (whilst preserving the data dissemination capability throughout the complete material structure). Many routing algorithms [5–8] are not designed to function efficiently in environments in which clusters of nodes can move at the same time. The most common routing algorithm implementations tend to be too complex and produce heavy network congestion during route discovery and maintenance. This section will evaluate, via simulation, the well-known AODV reactive routing algorithm [9] within a typical augmented material structure. The limitations of using classical routing algorithms in augmented materials will be highlighted and recommendations made to reduce network overhead and, ultimately, to develop more energy efficient routing strategies in augmented materials.
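The route-discovery overhead referred to above can be illustrated with a toy model of AODV-style flooding, in which every node rebroadcasts the first copy of a route request (RREQ) it hears. This is a deliberate simplification for counting purposes only (no RREP, sequence numbers or timers), not an implementation of AODV itself.

```python
from collections import deque

def rreq_flood_cost(adjacency, source):
    """Count RREQ broadcasts in a flooding route discovery: each
    reachable node rebroadcasts the request exactly once."""
    seen = {source}
    queue = deque([source])
    broadcasts = 0
    while queue:
        node = queue.popleft()
        broadcasts += 1  # this node (re)broadcasts the RREQ
        for nbr in adjacency[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return broadcasts

def grid_adjacency(n):
    """4-connected n x n grid of nodes, index = row * n + col."""
    adj = {}
    for r in range(n):
        for c in range(n):
            adj[r * n + c] = [(r + dr) * n + (c + dc)
                              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                              if 0 <= r + dr < n and 0 <= c + dc < n]
    return adj
```

On a connected network of n nodes, one discovery costs n broadcasts regardless of where the destination sits, so every route break in a dense augmented material triggers a material-wide burst of traffic; this is the congestion mechanism the simulations below expose.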
4.1.1 The Simulation Environment
A broad variety of simulation tools can be used to simulate the key characteristics of wireless sensor networks. They range from emulator-based tools, such as Avrora [10] and TOSSIM [11], to wireless and mobile communication simulation environments, such as OMNeT++ [12], OPNET [13] and NS-2 [14]. Each of these classes and tools has its specific advantages and disadvantages, and often the selection of a tool is based upon the experience of the researcher rather than on a more rational argument. An overview of different tools and simulation environments, with their particular pros and cons, has been established in the European Network of Excellence on Sensors and Ubiquitous Systems, CRUISE, and is given in [15]. NS-2 is used in this work to simulate the wireless sensor nodes embedded within a typical augmented material. NS-2 is a discrete-event simulator written in C++ with a TCL front-end, intended for networking research. It is free for educational,
non-commercial and internal commercial use, but it is not supported commercially. As with all discrete-event simulators, precise timing simulation is not possible, although a timing model can be added into the simulation. NS-2 uses TCL for scenario generation – this allows complex scenarios to be generated automatically by scripts inside the simulator. The simulator is controlled by TCL commands; however, the internal simulation runs in C++. The propagation models supported are for free-space, two-ray ground reflection and shadowing. Simple energy modeling is supported – this tracks the energy used for each packet transmitted and received. NS-2 contains a complete implementation of the IEEE 802.15.4 wireless sensor layer 1 and 2 protocols and a range of commonly adopted network routing protocols, including AODV.
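The per-packet energy accounting mentioned above can be mimicked with a few lines of bookkeeping. This is a simplified sketch in the spirit of NS-2's simple energy model, not its actual implementation: NS-2 charges energy from transmit/receive power and packet airtime, whereas the per-byte costs and initial budget below are purely illustrative assumptions.

```python
class EnergyModel:
    """Simplified per-packet energy accounting for one sensor node."""

    def __init__(self, initial_j=1.0, tx_j_per_byte=2e-6, rx_j_per_byte=1e-6):
        self.energy = initial_j        # remaining budget in joules (assumed)
        self.tx_cost = tx_j_per_byte   # illustrative cost figures
        self.rx_cost = rx_j_per_byte

    def transmit(self, nbytes):
        self.energy -= self.tx_cost * nbytes

    def receive(self, nbytes):
        self.energy -= self.rx_cost * nbytes

    def depleted(self):
        return self.energy <= 0.0
```

Charging every forwarded 70-byte CBR packet against such a budget is what makes the route-discovery floods discussed later so costly: each flood debits every node in the material once.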
4.1.2 Network Simulation
The initial deployment, shown in Fig. 17.10, consists of 225 simulated sensor nodes in a 15×15 grid spread uniformly on a flat material. The centre node (112) is a data sink collecting data from intermediate cluster head nodes (in these simulations that
Fig. 17.10 Initial sensor node deployment on the material substrate
Fig. 17.11 T = 500 sec, the lower 15×5 nodes move to top of grid
Fig. 17.12 T = 700 sec, the base of an L-Shape formed
Fig. 17.13 T = 900 sec, lower block of 15×5 nodes move to form upright side of L
nodes chosen were: 32, 37, 43, 108, 97, 116, 182, 173 and 208). The cluster heads are chosen to be approximate centre points of the material segments, each covered by a 5 × 5 node deployment within the material. Each cluster head transmits a data packet toward the data sink (for simulation purposes, one every 2 seconds). The traffic profile is a constant bit rate (CBR) service with a packet size of 70 bytes. The results presented herein focus on inter-cluster routing from the 5 × 5 node clusters toward the data sink as the simulated material is reconfigured. These results therefore do not address intra-cluster routing and data delivery to the distributed cluster heads. The focus of this experiment is to investigate the networking impact of dynamically reconfiguring a portion of the set of networked devices by introducing mobility to selected node groupings. This is in contrast to the more general approach of data routing, in which a number of non-neighbouring mobile nodes traverse a networked environment. As expected, the results highlight questions regarding the effectiveness of classical routing approaches in reconfiguring node groupings, such as those found in reconfigurable augmented materials. However, the experiment also provides insight into how this might be addressed through a new routing approach and associated systems infrastructure.
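The deployment geometry above can be reproduced with a short helper. The grid spacing is an assumed unit; note that the chapter's cluster heads are approximate segment centres, so the published node IDs differ slightly from the exact centres computed here (the sink, node 112, is excluded in the simulation since it cannot be its own cluster head).

```python
def grid_positions(n=15, spacing=1.0):
    """Node index -> (x, y) for an n x n uniform grid (index = row * n + col)."""
    return {r * n + c: (c * spacing, r * spacing)
            for r in range(n) for c in range(n)}

def segment_centres(n=15, seg=5):
    """Indices of the exact centre nodes of each seg x seg segment."""
    half = seg // 2
    return [r * n + c
            for r in range(half, n, seg)
            for c in range(half, n, seg)]
```

For the 15 × 15 grid this yields nine segment centres, with node 112 at the geometric centre of the material, matching its role as the data sink.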
4.1.3 Cluster Movement Pattern
On a physical level, an augmented material consists of a substrate material and a collection of embedded processor elements with local sensing and communication capability. The choice of substrate governs the gross physical properties of
the augmented material, and so is conditioned by its application. As discussed in [3], at the physical level a material exhibits certain structural properties, such as stiffness, ductility and conductivity. To undertake a simulation-based analysis of the networking challenges faced within an augmented material, a suitable material reconfiguration pattern must be chosen in which the material is altered at a physical level. The reconfiguration process will put particular strains on network functionality and forms the focus of the analysis in these sections. The simulations are run for a complete duration of 1000 seconds, in which time the material undergoes three movement patterns. As depicted in Figs. 17.11 to 17.13, the simulated material reconfigures into an 'L'-shaped structure.
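The block moves can be expressed as a simple translation schedule applied to groups of node positions. The chapter gives only the movement times (T = 500, 700 and 900 s) and the resulting 'L' shape, so the row selections and offsets in the schedule below are illustrative assumptions.

```python
def reconfigure(positions, node_ids, offset):
    """Translate the listed nodes by `offset` (dx, dy): one step of
    the simulated material reconfiguration."""
    dx, dy = offset
    for i in node_ids:
        x, y = positions[i]
        positions[i] = (x + dx, y + dy)
    return positions

def block_rows(rows, n=15):
    """Node indices for full grid rows (index = row * n + col)."""
    return [r * n + c for r in rows for c in range(n)]

# Illustrative schedule for a 15 x 15 unit-spaced grid: at each time,
# a 15 x 5 block is moved (offsets are assumptions, not chapter data).
SCHEDULE = {
    500: (block_rows(range(0, 5)), (0.0, 15.0)),   # lower block to top of grid
    700: (block_rows(range(5, 10)), (0.0, 10.0)),  # base of the 'L' formed
    900: (block_rows(range(10, 15)), (0.0, 5.0)),  # upright side of the 'L'
}
```

In an NS-2 scenario the same schedule would be emitted as timed `setdest` commands; modelling it separately makes it easy to check which established routes each move will sever.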
4.1.4 Evaluation of Results
The following provides an evaluation of the networking functionality within the simulated augmented material. The evaluation focuses on data transmission throughput, delay and packet dropping, in the first instance with a fixed grid with no cluster mobility, and then using the cluster mobility pattern outlined in Section 4.1.3. At the start of the simulation, each node is initiated in sequence, taking approximately 280 seconds to complete. A further margin of 20 seconds is provided before data transmission begins. At T = 300 seconds the data traffic profiles, described in Section 4.1.2, are activated and data paths are established using the AODV algorithm between the nine cluster head nodes and the data sink.
4.1.5 Simulation with no mobility in the network
The inherent packet overhead of received data at the sink node due to the route discovery process can be clearly seen in Fig. 17.14. In the absence of material movement, the packet delay incurred in reaching the data sink can be considered negligible and, as shown in Fig. 17.15, operates within predictable bounds subsequent to route establishment. This represents one of the main characteristics of commonly used reactive network routing protocols: the initial route establishment can be an expensive process; however, post route establishment, typically only a minority of nodes are mobile and thus the packet overhead and network congestion within the network can be maintained within acceptable boundaries. It is also observed, in the absence of node movement, that once routes are established, data can follow these established data flows without experiencing the further delays and packet overheads associated with data path rediscovery. This is shown in Fig. 17.16 for the uniform data transmission profile from node 208 post route discovery. Finally, the packet dropping over the complete network throughout the simulation is shown in Fig. 17.17. It clearly indicates the extent of network congestion arising from the route discovery phase, in which route-request/route-reply messages are distributed throughout the network. It can be observed following successful route establishment
Fig. 17.14 Received data throughput at data sink node
Fig. 17.15 Packet delay between cluster head 97 and data sink
Fig. 17.16 Packet delay between cluster head 208 and data sink
Fig. 17.17 Packet drop rate over the entire sensor network
and in the absence of network mobility the packet drop rate falls off significantly to the extent that it can be considered negligible.
4.1.6 Simulation with mobility in the network
The reconfiguration of the simulated material structure is achieved by the movement of sensor node clusters to form the desired material shape, in this case a basic 'L' structure. From the sensor routing protocol perspective this movement can be considered random, as the algorithm cannot anticipate the instant at which the established data paths will be broken or the new relative location of the data sink. Such cluster-based movement can have detrimental effects on the routing strategy, particularly during the period in which the cluster movement is occurring. The nature of reactive protocols is to attempt to re-establish data paths on known path failures. As previously discussed, reactive protocols are known to be energy efficient in applications that involve minimal node mobility. In contrast, proactive routing strategies (unlike reactive, on-demand protocols such as DSR [16]) can overcome node mobility by periodically monitoring data path stability and adjusting data routes prior to substantial network reconfigurations. However, proactive protocols are expensive in terms of energy consumption, due to inherently high data transmission, and are rarely adopted in energy constrained devices. The occurrence of material reconfiguration can be clearly identified from the received data throughput at the data sink, see Fig. 17.18. The additional packet overhead at the material reconfiguration periods is due to the reception of AODV route request packets along possible data paths from each cluster head node within the mobile material sections. Fig. 17.18 is the first indication of overhead data transmission within the network. A sudden increase in the data transmission rate across large network segments can result in network congestion propagating across the entire network and can introduce excessive packet delays between corresponding end nodes. Typical packet delay profiles from cluster nodes 32 and 208 are shown in Fig. 17.19 and Fig. 17.20, respectively. In comparison to Fig.
17.15, the packet delay appears to be far less bounded. The packet delay profile of cluster node 208 shows an increase on each material segment move. In addition to packet delay, excessive data transmission can result in a profound increase in the packet drop rate. Packet dropping occurs as a result of the medium access control (MAC) functionality, in which packets involved in a collision are queued and a preconfigured number of retransmission attempts are initiated. If the retransmission attempts are unsuccessful, the packet is dropped. The packet drop rate is depicted in Fig. 17.21, in which the material movement time profile can be identified. Referring to Fig. 17.18, the additional route re-establishment packet overhead can be mapped to a sharp increase in packet dropping, a clear indication of severe network congestion.
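The retry-then-drop MAC behaviour described above can be sketched as follows; the function name and retry limit are illustrative, not values from the 802.15.4 MAC or the NS-2 configuration used in the chapter.

```python
def send_with_retries(transmit_ok, max_retries=3):
    """Sketch of retry-then-drop MAC behaviour: a packet that collides
    is retried up to `max_retries` times, then dropped.

    `transmit_ok` is a callable returning True when a channel attempt
    succeeds (i.e. no collision) and False otherwise.
    """
    for _ in range(1 + max_retries):  # first attempt plus retries
        if transmit_ok():
            return "delivered"
    return "dropped"
```

Under congestion the probability that every attempt collides rises sharply, which is why the drop-rate plot spikes at exactly the material-movement times: the route-discovery floods saturate the channel and push many packets past their retry limit.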
Fig. 17.18 Received data throughput at data sink node
Fig. 17.19 Packet delay between cluster head 32 and data sink
delay between node 208 and node 112 [sec]
0.13 0.12 0.11 0.1 0.09 0.08 0.07 0.06 0.05 0.04 0.03 400
500 600 700 800 packet receive time at node 112 [sec]
900
throughput of dropping packets [No. of Packets / TIL]
Fig. 17.20 Packet delay between cluster head 208 and data sink
Fig. 17.21 The full dropped packet rate with node mobility
5 Conclusion
This chapter has outlined a discussion of the practical issues involved in creating Smart Objects. Much of this is based upon collaborations that grew within, and then evolved from, a project called "Extrovert Gadgets". The practical approaches to proving the project's central goal, creating a technological framework that enabled users in "composing, (re)configuring or using systems of computationally enabled everyday objects", provided insight into the difficulties inherent in building systemic 'smart' behaviour into everyday objects, not the least of which was the process of physical integration. The concept of Augmented Materials, which was developed in part to answer this challenge, provides a vision of how this might be achieved; however, it is understood that such materials, once created, will integrate into a larger heterogeneous system. The design of this 'digital' system is tightly integrated with the 'physical' design of the everyday artefact, creating a need for a 'whole-smart-artifact design' process. This was discussed further in the context of a case study on the creation of a 'Smart Table'.
References
1. The Disappearing Computer initiative; see also N. Streitz and P. Nixon, "The Disappearing Computer", Special Issue, Communications of the ACM 48(3), March 2005
2. A. Kameas, S. Bellis, I. Mavrommati, K. Delaney, A. Pounds-Cornish and M. Colley, "An Architecture that Treats Everyday Objects as Communicating Tangible Components", IEEE International Conference on Pervasive Computing and Communications (PerCom 2003), Dallas-Fort Worth, Texas, March 2003
3. S. Dobson, K. Delaney, K. Razeeb and S. Tsvetkov, "A Co-Designed Hardware/Software Architecture for Augmented Materials", 2nd International Workshop on Mobility Aware Technologies and Applications (MATA'05), October 2005
4. A. Schmidt, K. Van Laerhoven, M. Strohbach, A. Friday and H.W. Gellersen, "Context Acquisition Based on Load Sensing", in Proceedings of Ubicomp 2002, G. Borriello and L.E. Holmquist (Eds.), Lecture Notes in Computer Science, Vol. 2498, Springer Verlag, Göteborg, Sweden, September 2002, pp. 333–351
5. W.B. Heinzelman, A.P. Chandrakasan and H. Balakrishnan, "An Application-Specific Protocol Architecture for Wireless Microsensor Networks", IEEE Transactions on Wireless Communications, 1(4): 660–670, October 2002
6. C. Intanagonwiwat, R. Govindan, D. Estrin, J. Heidemann and F. Silva, "Directed Diffusion for Wireless Sensor Networking", IEEE/ACM Transactions on Networking, Vol. 11, February 2003
7. M. Zorzi and R.R. Rao, "Geographic Random Forwarding (GeRaF) for Ad Hoc and Sensor Networks: Multihop Performance", IEEE Transactions on Mobile Computing, Vol. 2, No. 4, October–December 2003
8. K. Sohrabi, J. Gao, V. Ailawadhi and G.J. Pottie, "Protocols for Self-Organization of a Wireless Sensor Network", IEEE Personal Communications, Vol. 7, pp. 16–27, October 2000
9. C.E. Perkins, E.M. Royer and S.R. Das, RFC 3561 – Ad hoc On-Demand Distance Vector (AODV) Routing, IETF, 2003
10. Avrora – The AVR Simulation and Analysis Framework: http://compilers.cs.ucla.edu/avrora/index.html
11. P. Levis, N. Lee, M. Welsh and D. Culler, "TOSSIM: Accurate and Scalable Simulation of Entire TinyOS Applications", Proceedings of the First ACM Conference on Embedded Networked Sensor Systems (SenSys 2003), pp. 126–137
12. OMNeT++: Discrete Event Simulation System: http://www.omnetpp.org/
13. OPNET: http://www.opnet.com/
14. The Network Simulator Ns-2: http://nsnam.isi.edu/nsnam/index.php/User_Information
15. K. Murray, A. Timm-Giel, M. Becker, C. Guo, R. Sokullu and D. Marandin, "The European Network of Excellence CRUISE Application Framework and Network Architecture for Wireless Sensor Networks", IEEE Globecom Workshops 2007, Washington, DC, USA, 26–30 November 2007, pp. 1–6
16. D.B. Johnson, D.A. Maltz and J. Broch, "DSR: The Dynamic Source Routing Protocol for Multihop Wireless Ad Hoc Networks", in Ad Hoc Networking, Addison-Wesley Longman Publishing Co., Inc., Boston, MA, 2001
Chapter 18
Dedicated Networking Solutions for a Container Tracking System
Daniel Rogoz, Fergus O'Reilly
Abstract Cork Institute of Technology (CIT) researchers, in collaboration with the Tyndall National Institute, both based in Cork, a city on the southern coast of Ireland, have developed a container management and monitoring system using Wireless Sensor Networks (WSNs), with the support of the Port of Cork and local industry. The system is designed to integrate seamlessly with existing container management schemes (and with staff procedures) at the port, efficiently and at low cost, extending its capabilities through remote querying, localization and security. To achieve its goals, the system exploits the capabilities of wireless sensor nodes, used as container 'tags', which form a wireless, ad-hoc network throughout the container yard. This chapter will briefly describe the project rationale and the technology development process, which includes hardware solutions built by Tyndall: a dedicated sensor node platform composed from that institute's wireless sensor toolkit. It will also discuss the software and networking solutions created and implemented through CIT's research. This includes specialized graphical user interfaces on portable devices (e.g. a solution was implemented for PDAs, based upon the .NET Compact Framework) and applications for WSN motes, running the TinyOS operating system, to provide full system functionality and multi-hopping communication. The chapter will also describe the work done to overcome the primary project challenges, including the issue of the radio shielding effects of the containers. The final system demonstrates multi-hopping and ad-hoc routing techniques that could exploit the containers, by stack and row, to forward information from one to the next, in this way enabling intelligent, reliable communication from anywhere in the port to the management system user.
Keywords wireless networks, sensor nodes, multi-hopping, ad-hoc routing, asset tracking, freight container, TinyOS, radio frequency, tagging, location
Technologies for Embedded Computing (TEC) Centre, Cork Institute of Technology, Ireland
K. Delaney, Ambient Intelligence with Microsystems, © Springer 2008
1 Introduction
As Ireland is an island with a significant external trade market, the efficient, low-cost and fast transhipment of goods is a strategic economic requirement. The timely flow of container traffic through Ireland’s ports is of vital importance to maintaining Ireland’s competitiveness as an export-driven economy. In this context, Irish ports operate as economic gateways; container traffic is on/off-loaded, then moved and stacked in tiers in storage yards before shipments are transferred to, and from, the rail or road network. Within ports, the order of on/off-loading, the placement of the containers in the storage yard, the stacking methods and the operating facilities are all central to maintaining an efficient, timely and low-cost operation. Organizational mistakes can bring significant time and labour costs. Four ports in the Republic of Ireland, and two in Northern Ireland, provide load-on/load-off (lo-lo)1 container services. According to an Irish Maritime Development Office report published in 2007 [1, 2], total island traffic in 2006 was 1,372,206 TEU2, representing 60% growth in the five-year period since 2001 (in 2003 the growth rate was twice the global average). All of the ports have increased their traffic significantly, with Belfast as a leading example, almost doubling its traffic in 2004. The ports vary in throughput from Warrenpoint (41,948 TEUs) through Cork (185,002 TEUs) to Dublin (680,680 TEUs). As the amount of traffic going through Irish ports increases, so does the importance of an effective (both in terms of operation and cost) container management system, a situation made more urgent by the value of some of the goods held in containers. The trade figures, showing the continuation of a strong import/export business, provide both the necessity and the opportunity for ports to invest in their infrastructure.
The current container management solutions for these smaller ports can have several limitations: they lack the facility for remote location and identification of a container, or the ability to monitor containers through protection/security features. Thus, these systems are prone to errors (e.g. lost containers) that are time-consuming to recover from and that, using traditional techniques, may require substantial infrastructural investment to resolve. The solution proposed in this project sought to address these limits with an approach that offers suitable performance and scalability, coupled with both low cost and low infrastructural overhead.
1.1 The Business Opportunity
The European coastline is dotted with a series of small-to-medium trade ports, which provide access to Europe’s peripheral regions. Within the single economic market,
1 lo-lo: the cargo (shipping containers) has to be loaded and unloaded using cranes, in contrast to roll-on/roll-off (ro-ro), where the cargo is wheeled on (automobiles, trailers or railroad cars).
2 TEU: Twenty-foot container Equivalent Unit, volume equivalent to a container of dimensions 20 ft × 8 ft × 8 ft 6 in, defined by ISO in 1968 in the ISO/R-668 normative reference.
the efficient, low-cost and fast shipment of goods has become a strategic requirement to assist in the economic development of all of these regions. Thus, the situation in Ireland is replicated in a number of other countries in Europe and worldwide. Currently, the on-site management, tracking, location monitoring and auditing of containers at many small- and medium-sized ports is a laborious and time-consuming process; often a manual approach is used to locate, identify and track all containers. This imposes a significant cost on these regional ports and transhipment centres. Mistakes or delays result in additional expenses, all of which have an impact on the economic costs of export and import. The systems that are used in large ports (for example, in places like Rotterdam and Singapore) are too expensive to be justified, given the current scale and prospective future growth of traffic levels in most of these peripheral ports. Systems that can automate the management and tracking of container traffic, and which can also operate with a low entry cost, scaling upwards with a port’s rate of expansion, can provide a significant advantage to any small- to medium-sized port. With a sensor network-based management system, the major cost is in the sensor nodes themselves and in the physically deployed technology, rather than in a fixed, or heavy, infrastructure. Once deployed, the sensor nodes themselves provide the distributed infrastructure. This removes the upfront cost of the infrastructure and shifts the balance of the cost towards the actual usage of the system. Costs then grow effectively linearly with the quantity of container traffic being managed in the port. This makes the approach suitable for both small and larger ports. Sensor network systems can also be designed to scale effectively, which may be appropriate to a port with plans for expansion.
A port growing from 100,000 to 200,000 to 400,000 TEUs per annum, over a number of years, should be able to simply deploy more sensor nodes, allowing the sensor infrastructure to manage the extra capacity. Unlike tracking systems based upon fixed infrastructure, there would be no need for additional fixed infrastructure facilities. All ports will have their own physical structures and organizations, work procedures and processes for container throughput. Sensor networks, since they can work with effectively any physical layout (the network itself restructures) and can be integrated with existing work flows (minimal additional steps), have the potential to provide the simplest route to the introduction of a new management system in a small-to-medium port. This may be of vital importance in ensuring that initial use/deployment does not impinge upon the success of any existing operations in the port. Properly designed, the sensor network could effectively shadow existing systems until fuller confidence is gained in its use, at which point its operating features may (or may not) be expanded. This is an important approach when introducing any required new technologies into a live system. For these reasons, sensor networks are an attractive technology for small- to medium-sized ports. The challenges, which were the targets for this project, lie in achieving appropriate functionality, in demonstrating the potential for reliability and, in particular, in validating issues, such as the infrastructureless methodology, that will underpin the reduction of costs.
1.2 Tracking a Container
Within a typical terminal, containers are stored in rows that are divided into slots. For example, in the Port of Cork, the main yard area consists of approximately 100 rows with 20 slots in each row. Fig. 18.1 (left) shows the structure of these rows and slots, which provide the regular storage area for the containers. The tracks between the containers on each row allow the wheels of the straddle (container) carriers to pass over the containers. The steel-walled containers, currently used globally, are tagged with a registration and classification identifier code, which consists of a supplier code, a container number and a description code. The supplier/unit number code consists of a four-letter code followed by six numeric digits. This code is unique to each container. In addition, a four-digit alphanumeric type/classification code identifies the type of container and its permitted usage. By using a full set of codes, each individual container can be uniquely identified. A sample of one such code is shown in Fig. 18.1 (right). To facilitate the entry of management data, all equipment for loading/unloading containers is equipped with a portable I/O terminal into which the machinery operator types the tracking data. In most cases, the last four digits of the container identification information provide sufficient uniqueness; this is typed in to identify the container and then the transfer (loading or unloading) operation carried out on that container. The vehicles and equipment that are primarily used in moving the containers are the gantry cranes (used at the port-to-sea interface) and the straddle carriers (used to load and unload trucks at the port-to-land interface). This equipment is networked wirelessly back to a central management system, which records each of the transfer operations carried out and then interfaces with the accounting and management systems for customs and payment purposes.
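The marking scheme described above (a four-letter supplier/unit code, six numeric digits, plus a separate four-character type/classification code) can be validated and split with a simple parser. The following is an illustrative sketch, not part of the deployed system; the function name, field names and the example codes are hypothetical:

```python
import re

# Format described in the text: four letters + six digits for the
# supplier/unit code, and a four-character alphanumeric type code.
_UNIT_RE = re.compile(r"^[A-Z]{4}\d{6}$")
_TYPE_RE = re.compile(r"^[A-Z0-9]{4}$")

def parse_container_code(unit_code: str, type_code: str) -> dict:
    """Split a container marking into its components, raising on bad input."""
    unit = unit_code.replace(" ", "").upper()
    ctype = type_code.replace(" ", "").upper()
    if not _UNIT_RE.match(unit):
        raise ValueError(f"bad unit code: {unit_code!r}")
    if not _TYPE_RE.match(ctype):
        raise ValueError(f"bad type code: {type_code!r}")
    return {
        "supplier": unit[:4],   # owner/supplier prefix
        "serial": unit[4:],     # six-digit unit number
        "type": ctype,          # type/classification code
        "local_id": unit[-4:],  # last four digits, used for yard entry
    }
```

As noted above, in practice only the last four digits (the `local_id` here) would normally be keyed in by the machinery operator.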
In major ports, such as Rotterdam, Machine Vision Systems are used to read container numbers and automatically identify them from databases. These vision
Fig. 18.1 The Port of Cork container storage area (left) and typical container markings used for unique identification of a specific container unit (right)
systems are costly to install and maintain and are only justified for large volumes of traffic. Their performance suffers in poor weather conditions and is impacted by damage to, or dirt covering, the numerals on the containers, and by any issue which in general makes vision difficult. These are conditions that are prevalent in Ireland. Machine vision systems also do not allow for remote finding and identification of containers, only of those currently in the system’s field of vision. Other container tracking solutions are typically based upon passive electronic tagging and RF-ID systems. The use of passive tags, which respond only when queried, will give the identification of specific containers, but will not provide for remote location/identification and will not facilitate monitoring of the container. Passive tags do not have the ability to network to allow communication with tags that are out of local range; thus, they cannot enable any remote location methodology. In the current climate, the ability to monitor can ensure that containers are not entered/exited or tampered with (e.g. for terrorism/immigration purposes) and can ensure that containers are handled correctly in a yard (through measurement of physical parameters, such as vibrations, impact forces, etc.).
2 Challenges and System Specification
This project proposes a wireless sensor-based tracking system, which will allow the tagging, identification and tracking of shipping containers from when they enter a port to when they depart for their next destination. This must be low cost and efficient for smaller ports to use, allowing them competitive equality, especially in the regional areas.
2.1 The Proposed Solution
We propose using sensor-network-derived Sensor Identification Devices (SIDs) to self-identify, to track and to help manage the throughput of individual containers in a yard. These are self-contained identification devices, approximately the size of a cigarette box, with radio networking capability. The SID devices are attached, in a removable manner, to each container entering the yard and they store identification information regarding the container, its source and destination, as well as any important information regarding the contents. Each SID will operate using an enclosed battery and be capable of communicating via radio over (short) distances of approximately 30 m with other SID devices and with handheld readers/PDAs. When containers are stacked and placed in rows, the SIDs will use multi-hopping and ad-hoc networking to forward information from one SID to the next and allow communication from any part of a stack/row in the port to a designated ‘management’ point.
The prototype SID devices are based on existing technology developed in CIT and the Tyndall National Institute. Tyndall National Institute has implemented and tested an operationally flexible, modular sensor node platform [3, 4], thus providing much of the basis for the prototype hardware for implementing the SID devices. The prototype container tags’ wireless communication employs the ZigBee standard in an unlicensed ISM RF band.
2.2 The Technology Challenges
The main challenges to the successful completion of the project arise from the environment and, in particular, the fact that containers provide physical, visual and radio shielding. Radio frequency communication range is limited by multipath propagation in the presence of steel containers; phenomena such as reflection and diffraction are omnipresent. Thus, in order to facilitate accessibility to each container tag, an approach different to direct communication needs to be taken. Additionally, the container deployment methodology can be highly unpredictable; it imposes no fixed infrastructure on the network formed by the container tags and it is moderately dynamic, since the containers are deployed and removed on a regular basis. The robustness of the SIDs to the working environment must be proved for long-term use. Lastly, battery operation limits the lifetime of the tags and, thus, power efficiency may become a significant issue.
2.3 The Tracking Data
Within a port, the wireless sensor nodes should, for optimum deployment, integrate with the existing work practices and management systems of the terminal but, in themselves, become capable of introducing extra tracking capabilities, to be expanded as needed. The intelligence implicit within the sensor nodes can be exploited to provide an independent tracking and monitoring capability. One way of achieving this is (in addition to holding the standard port tracking information) to replicate in the sensor nodes themselves much of the logging information that is normally stored only on the port database, allowing individual node/container interrogation and some self-management. There is significant storage capability on the nodes to allow for this. The stored data could be as shown in Table 18.1. Of the data in Table 18.1, only item 3, the Local ID Code/No., will (in an automated system) require entry by the operator; the other data can be extracted automatically from the port terminal database for tracking the container. The storage of this information on a container sensor node, or SID, will allow yard interrogation with a handheld device, such as a PDA. In addition, the recording of the noted row/slot will allow the container sensor nodes to determine for themselves that they
may be misplaced through locality checking. This information and the logging of operations can be uploaded at container movement/operation time by the terminal operator.
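The management data held on each SID could be modelled as a small record with a locality check of the kind just described. The following is a hypothetical sketch; the class and field names are chosen for illustration and do not come from the project code:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SIDRecord:
    """Container management data replicated onto the sensor node."""
    supplier_reg_no: str      # full identification number
    classification_code: str  # container type/classification code
    local_id: str             # 4-digit yard code, operator-entered
    arrival: str              # time/date of initial programming
    owner: str                # last programmed owner
    status: str = "stored"    # matches terminal tracking status
    recorded_row: Optional[int] = None
    recorded_slot: Optional[int] = None
    departure: Optional[str] = None
    operation_log: List[str] = field(default_factory=list)

    def log_operation(self, entry: str) -> None:
        """Append a movement/operation record at transfer time."""
        self.operation_log.append(entry)

    def is_misplaced(self, actual_row: int, actual_slot: int) -> bool:
        """Locality check: does the noted position match the actual one?"""
        return (self.recorded_row, self.recorded_slot) != (actual_row, actual_slot)
```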
Table 18.1 Management data for the Sensor Identification Device

1  Supplier Registration No.  Full identification number
2  Classification Code        Code as per the container classification table
3  Local ID Code/No.          In the Port of Cork, the 4-digit code used for tracking
4  Arrival Time/Date          Time and date of initial programming/arrival in the port
5  Owner                      Owner as per the last programming; this data can be automatically extracted from the port database and programmed into the SID by the initial programming device. Note: as owners may change on a frequent basis in transit, this is the last programmed owner.
6  Current Status             Status variable to match with its status in the terminal tracking system
7  Recorded Row               Row in the terminal that the container is noted as stored in
8  Recorded Slot              Slot in the terminal that the container is noted as stored in
9  Departure Date             Date the container is expected to leave the terminal
10 Operation Log              Series of fields recording operations and movements carried out on the container

2.4 Creating the Network
Direct wireless transmission over a given distance requires at least four times more energy than would be used over half of that distance. This simple relationship makes multi-hopping approaches more energy efficient, which is crucial considering the typically severe limitations on wireless sensor networks (WSNs) in terms of available power and required lifetime. Transmission over a long distance is very often impossible simply due to wireless link range limitations (caused by hardware constraints: the maximum possible transmission power and/or maximum receiving sensitivity) or to obstruction of the ‘line-of-sight’ between the transmitter and the receiver. Unlike fixed networks, WSNs in general (and especially in this particular WSN application) have no a priori knowledge of the network topology; thus, there is no initial knowledge of how to direct data flow towards its destination. Determining the way (route) in which communication should be directed through a wireless network (that is deployed in an ad-hoc manner) in a timely and energy-efficient fashion is a challenge. It has been a topic of extensive study over several years, resulting in the emergence of numerous ad-hoc routing protocols and algorithms that do not rely on any fixed infrastructure or central network administration. The identification of a suitable routing protocol requires observation of the desired network features. The following sections will discuss the properties of
the container tag network, characterizing its behaviour, which will then be used in the discussion of the choice of protocol.
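The energy relationship noted above can be illustrated numerically. Under a free-space (inverse-square) path-loss model, the radiated energy needed to reach distance d scales with d²: a single 30 m link needs four times the energy of a 15 m link, so covering 30 m in two 15 m hops radiates half the energy of the direct link. A sketch under that assumed model (real yards with steel containers would show a higher, unmeasured, path-loss exponent):

```python
def tx_energy(distance_m: float, exponent: float = 2.0) -> float:
    """Relative radiated energy needed to reach a receiver at distance_m.

    exponent=2 is free space; obstructed environments have larger values.
    """
    return distance_m ** exponent

def multihop_energy(total_m: float, hops: int, exponent: float = 2.0) -> float:
    """Total radiated energy when the same distance is split into equal hops."""
    return hops * tx_energy(total_m / hops, exponent)

# One 30 m hop vs two 15 m hops:
direct = tx_energy(30.0)            # 900.0 (relative units)
two_hop = multihop_energy(30.0, 2)  # 450.0: half the direct-link energy
```

This ignores the receive and processing energy each extra hop adds, which is why multi-hopping is a trade-off rather than a free win.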
2.5 The Target Network Characteristics

2.5.1 The Data Transmission Model
Data transmission in a network falls into four categories: time-driven, event-driven, query-driven or a hybrid. The time-driven model assumes that data transmission is triggered periodically (at a set time interval). In this model transmission is predictable and a steady data transmission rate is maintained. Event-driven transmission is generated by some in-network event, which (depending upon the application) could be, for example, the detection of an enemy in battlefield monitoring or the exceeding of a temperature threshold in an environmental monitoring system. Transmission in a query-driven model is initiated by a base station, which sends a query command asking for data only when it needs to. The hybrid model combines characteristics from all of the previous ones. With both the event-driven and query-driven models the transmission is ‘bursty’, as the network is in an idle state for most of the time and then must transfer data as fast as possible. A routing protocol might cope well with one model but not with another; for instance, a protocol intended for a time-driven model can introduce significant delays initially, before it starts to forward data toward a sink, and some data might be lost (usually this is acceptable if reliable and stable routes are formed afterwards). This same protocol, if used for a query-driven model, can be useless if the network must respond quickly. In the container system the transmission scenario is mostly query-driven, initiated by the user from a GUI, although some system functions could involve periodic message broadcasting (the Beacon function), which would suggest a hybrid model.
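The contrast between these models can be made concrete in a toy transmit-decision loop. This is purely an illustrative sketch (the message name "QUERY" and the tick-based timing are hypothetical), showing how a query-driven tag stays silent until asked while a hybrid node also beacons periodically:

```python
def time_driven(period: int, now: int) -> bool:
    """Time-driven: transmit whenever the period elapses."""
    return now % period == 0

def query_driven(inbox: list) -> bool:
    """Query-driven: transmit only in response to a pending query."""
    return "QUERY" in inbox

def hybrid(period: int, now: int, inbox: list) -> bool:
    """Hybrid: periodic beacon plus on-demand replies, as in the SID system."""
    return time_driven(period, now) or query_driven(inbox)

# A query-driven tag over six ticks, queried only at t = 5:
sent = [query_driven(["QUERY"] if t == 5 else []) for t in range(1, 7)]
# sent == [False, False, False, False, True, False]
```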
2.5.2 Communication Patterns
Communication in a network can follow some directional patterns. Most of the wireless sensor networks in use today collect some sensed phenomena (like temperature, humidity or light intensity) from the area of the network deployment. In this case, most of the communication occurs in one direction, towards a base station collecting the sensor data, often called a sink. Generally a single sink is used, although some applications use multiple sinks. The transmission direction is the opposite of that in the example given above when actuator networks are considered. Here, a base station issues commands to network
nodes to wirelessly actuate equipment, for example, lighting. Another pattern is node-to-node communication, as in the case of distributed localization algorithms, where the nodes must exchange some data to achieve a common network goal (e.g. determining the location of every network node). In general, the communication pattern can be uni- or bidirectional; towards single or multiple sinks, as well as node-to-node (the node-to-node pattern is to some extent similar to the multiple-sink case if we consider that any network node can be a sink). In our case only one base station is used: a gateway connected to the PDA. The communication is bidirectional, as the user can issue commands as well as retrieve data from the container tag; the multi-hopping protocol must operate equally well in both directions. No node-to-node communication is used.
2.5.3 Location Awareness
Some routing protocols take advantage of knowledge of the positions of the network nodes. These algorithms are called geographic routing algorithms and use location information to forward data towards the destination node. The positioning information can be gathered through, for example, a GPS receiver, by estimation from the signal strength of messages received from neighbouring nodes (or their time of flight), from time-of-flight ultrasound signals, or through a combination of any of these methods. The containers are deployed in an ad-hoc manner and no a priori information on their location is available. The system is intended to be used as a support tool for locating the containers, but no general knowledge of the deployed container tags’ network topology is available; therefore the use of geographical routing protocols is, in this case, inappropriate.
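For completeness, the signal-strength ranging mentioned above is usually based on the standard log-distance path-loss model, RSSI(d) = RSSI(1 m) − 10·n·log₁₀(d). A sketch follows; the reference power and exponent are illustrative values, not calibrated for a container yard, and the multipath environment described earlier is exactly what makes such estimates unreliable here:

```python
def distance_from_rssi(rssi_dbm: float, rssi_1m_dbm: float = -45.0,
                       exponent: float = 2.7) -> float:
    """Estimate transmitter distance (m) from received signal strength.

    Inverts the log-distance model RSSI(d) = RSSI(1 m) - 10*n*log10(d).
    rssi_1m_dbm and exponent are assumed, uncalibrated constants.
    """
    return 10 ** ((rssi_1m_dbm - rssi_dbm) / (10 * exponent))

# At the 1 m reference power the estimate is 1 m by construction;
# at -72 dBm this model gives 10 m.
```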
2.5.4 Network Size and Scalability
The size of the network is an important determinant in the choice of routing protocol; some protocols that are designed for large networks of thousands of nodes may generate undesired additional overheads when used in small-scale networks, while protocols intended for small networks can be inefficient when network size increases. Scalability, on the other hand, is a feature that allows a protocol to adapt to changes in network size, usually through growth, and, in essence, enables it to perform equally well in networks of varying sizes, from small to large. The capacity of the main container yard in the Port of Cork is 3,600 containers. This represents the current maximum network size for this particular port; however, typically, such a yard would not necessarily operate at full capacity. In this context, there are methods available to form networks into clusters, through grouping them by radio channel, so that an individual cluster size (one of a number of clusters in the yard) would be a few hundred nodes, which would be considered a medium-sized network.
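The channel-based clustering suggested above could be sketched as a simple row-to-channel mapping. The figures below (10 rows per cluster, a pool of five channel numbers) are hypothetical, chosen only so that a yard of roughly 100 rows of 20 slots yields clusters of at most 200 nodes:

```python
def assign_channel(row: int, rows_per_cluster: int = 10,
                   channels: tuple = (11, 12, 13, 14, 15)) -> int:
    """Map a storage row to a radio channel.

    Each contiguous band of rows_per_cluster rows shares one channel,
    so with 20 slots per row a cluster holds at most 200 containers;
    channels are reused cyclically across bands.
    """
    return channels[(row // rows_per_cluster) % len(channels)]

# Rows 0-9 share one channel, rows 10-19 the next, and so on.
```

A real deployment would also need to handle gateway roaming between channels, which this sketch ignores.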
2.5.5 Mobility
Many wireless sensor network applications assume that the individual network nodes have a fixed position and that they will remain stationary throughout the full network lifetime. However, there are applications that use mobile base stations, or operate to a scenario where, in general, all of the network nodes can be mobile. In the case of mobile nodes, the routing protocol must react to topology changes; in fact, even in a static topology, the protocol should be adaptive to some extent, as new nodes can be expected to join, or leave, the network, either intentionally or due to failure. The container tag network remains static once deployed, but the containers themselves are stored and removed on a regular basis. The gateway can be either stationary or mobile, with mobility being a key feature. The speed of movement of the gateway, as well as that of the containers being stored or removed, is relatively low, as it is limited by the speed of the straddle carrier used to move the containers.
2.5.6 The Hardware/Operating System
The implementation of a routing protocol is limited by the capabilities of the associated hardware platform. It must be taken into consideration that a routing algorithm is only a part of the application and cannot consume all of the available resources. It is especially important in wireless sensor networks, where resources are severely constrained, to pay attention to what is available; specifically this includes memory, computational power, radio packet size, the available radio transmission rate and so forth. The hardware specification of the target platform may prohibit the use of certain of the more sophisticated routing protocols. The Tyndall module, used here as the prototype container tag, is built around an ATmega128 microcontroller with 128 kB of Flash program memory and 4 kB of internal SRAM, operating at 8 MHz. The radio chip used is an Ember EM2420 and the maximum data rate is 250 kbps.
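The 4 kB SRAM figure directly bounds how much routing state a node can hold, which is one concrete way the hardware steers the protocol choice. A back-of-the-envelope sketch, in which the reserved memory and per-entry sizes are assumed figures for illustration only:

```python
def max_route_entries(sram_bytes: int = 4096, reserved_bytes: int = 3072,
                      entry_bytes: int = 16) -> int:
    """How many routing-table entries fit in the RAM left to the protocol.

    reserved_bytes (stack, OS tasks, application state) and entry_bytes
    (destination, next hop, hop count, sequence number, lifetime) are
    hypothetical values, not measured on the Tyndall module.
    """
    return (sram_bytes - reserved_bytes) // entry_bytes

# Under these assumptions only a few dozen routes fit, which favours
# on-demand protocols that keep state only for active destinations.
```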
2.5.7 The Summary of the Network Characteristics
In summary, the identified requirements regarding a multi-hopping ad-hoc routing protocol for the container management system are as follows:

● A query-driven/hybrid data transmission model
● A bi-directional communication pattern, between a single tag and a single base station/gateway
● No location knowledge available
● A maximum network size from a few hundred nodes (using clusters) to 3,600 nodes (a single, homogeneous network), with moderate scalability
● A mobile base station, requiring adaptability to topology changes, as network nodes will join and leave the network on a regular basis
● Specific hardware constraints: a microcontroller with 128 kB of program memory and 4 kB of RAM, operating at 8 MHz (8 MIPS), with a maximum radio data rate of 250 kbps
2.6 Multi-hop Protocol Characteristics
Certain characteristics of wireless sensor networks, such as limited wireless bandwidth, a dynamic ad-hoc network topology, unpredictable connectivity and limited processing power, are closely related to those found in Mobile Ad-hoc Networks (MANETs): networks of mobile devices, such as PDAs and notebooks (mostly Wi-Fi enabled). In practice, MANETs and WSNs face similar problems and challenges in terms of networking. In fact, a majority of the work on WSN ad-hoc protocols has focused on the adaptation of existing and well-studied MANET protocols. This often involves simplifying the protocol, since WSNs typically operate under much more severe computational power, resource and energy constraints. During this process certain tradeoffs must be made, sacrificing functionality in order to ensure the protocol is sufficiently lightweight to be implemented on small sensors. Identifying a suitable routing protocol to be applied in the container tag network effectively becomes an exercise in identifying the MANET protocol most appropriate to the requirements presented in the previous section and either implementing it for WSNs or using a pre-existing implementation.
2.6.1 Taxonomy
A taxonomy is presented in [13] that classifies MANET ad-hoc routing protocols by communication model, structure, state information and scheduling.

● Communication model: the first consideration in protocol classification is the underlying wireless communication model. Protocols can either use single or multiple wireless channels for communication (i.e. separate channels for control and data packets), the former class being much larger. Information extracted from lower layers (if they can provide it) can be exploited for routing purposes; for example, link layer failure detection helps to avoid the necessity for high-level acknowledgements, while the physical layer can provide a received signal strength that can be used either for link quality estimation or in location estimation for geographical routing.
● Structure: the assignment of a role that each network node plays in the routing process imposes a structure on the network, which can be either flat (uniform) or hierarchical. Becker et al. [14] point out the pros and cons of both uniform and non-uniform protocols:

“Although such a (uniform) protocol avoids the resource costs involved in maintaining high-level structure, scalability may become an issue in larger networks. … Non-uniform protocols attempt to limit routing complexity by reducing the number of nodes participating in a route computation. Such an approach can improve scalability and reduce communication overhead; alternatively, it can support the use of algorithms of greater computational or communication complexity than is possible in the full ad hoc network. In addition, higher-level topology information can facilitate load balancing and QoS support. … Significant resources are needed to impose topological structure on a highly dynamic ad hoc network.” [14]

● State information: when considering the network state information that each network node maintains and exchanges with its neighbours, protocols fall either into a topology-based or a destination-based category. In topology-based protocols, each node keeps complete information on the large-scale or full-scale network topology and makes routing decisions locally based upon this information. “Link-state” is the most popular topology-based protocol. All nodes participating in “link-state” networks broadcast the state of their connectivity with all immediate neighbours to the rest of the network and use this information to determine paths for each destination, storing them in a routing table. The amount of information being exchanged between nodes, and the time required to build a complete network topology, are significant and largely dependent on network size. This problem is not so severe in fixed networks, but is a disadvantage in a resource-constrained and dynamic ad-hoc mobile network. Some protocols for MANETs address the issue by only maintaining partial topology information, either of a particular network area or of paths to certain destination nodes. Destination-based protocols do not maintain a large-scale topology (with some exceptions where an immediate neighbourhood topology is used). Instead, “distance-vector” protocols (the largest group of protocols in this category) maintain, and exchange with other nodes, a distance to a destination (i.e. using certain metrics, such as hop count or link quality) and a vector to the destination (i.e. the next-hop node address). In some of these protocols all nodes exchange distance-vectors with each other, while other protocols maintain this information only for the active, or most recent, destinations. Distance-vector protocols suffer from routing loops and slow convergence problems (especially in a dynamic environment), which are addressed by including sequence numbers and the next-to-last hop in distance-vectors.
● Scheduling: the last consideration in routing protocol categorization is the schedule for obtaining the routing information. The solution adopted from fixed network protocols is to exchange routing information for all known destinations periodically, or in response to topology changes (or both). The advantage of such a (proactive, or table-driven) solution is minimal routing delay and quick determination of whether a destination is reachable. The network resources required are significant, however, and become entirely wasted in the case of unused routes. The more widely adopted approach for MANETs is establishing a route on demand (reactive). These reactive protocols save resources by avoiding the
maintenance of unused routes. The route discovery process commences only when a route to a destination is required: a route request floods the network until it reaches its destination, which decides the best route and sends back a route reply. In some cases, intermediate nodes are allowed to reply to a route request if they have a route to the destination. For known destinations, a maintenance process ensures that failed routes are deleted or repaired (a new route discovery is initiated, or intermediate nodes attempt to perform localized route repair); often this is done by relying on lower-layer information (link layer connectivity failure detection). Some protocols use proactive periodic “Hello” messages to monitor immediate neighbour connectivity. A hybrid approach (proactive/reactive) exists as well, often with hierarchical protocols, where network nodes on different hierarchical levels can use different route scheduling (i.e. proactive for cluster heads at the network backbone and reactive for ‘child’ nodes).
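At its core, the on-demand route-discovery flood described above behaves like a breadth-first search that records reverse paths for the route reply to retrace. A minimal sketch on an abstract neighbour graph (node IDs and the topology are hypothetical; a real protocol would also weigh link-quality metrics and handle lost rebroadcasts):

```python
from collections import deque

def discover_route(neighbours: dict, source: int, dest: int) -> list:
    """Flood a route request from source; unwind the reverse path when
    the request first reaches dest (first arrival = fewest hops here)."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == dest:
            path, n = [], node
            while n is not None:      # route reply retraces the parents
                path.append(n)
                n = parent[n]
            return path[::-1]
        for nb in neighbours.get(node, ()):
            if nb not in parent:      # each node rebroadcasts only once
                parent[nb] = node
                queue.append(nb)
    return []                         # destination unreachable

# A small stack/row topology: 1-2-3-4 in a line, with a shortcut 1-3.
topo = {1: [2, 3], 2: [1, 3], 3: [2, 4, 1], 4: [3]}
# discover_route(topo, 1, 4) takes the shortcut: [1, 3, 4]
```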
2.6.2 The choice of protocol
The set of networking requirements introduced and the taxonomy presented in the previous sections together create a tool for characterizing the ad-hoc protocol most suited to the container management system. If we consider each category in terms of the specific system requirements:
● Communication model: protocols using a single wireless channel are a much more widespread category than multi-channel ones; they are easier to debug and they provide options for network partitioning by channel assignment. Link-layer failure detection and LQI support are desirable, though not essential. Geographical routing is inappropriate due to the multi-path steel container environment.
● Structure: considering that the container tag network consists of homogeneous nodes, that resources are strictly limited and that the environment is highly dynamic (due to frequent deployment and removal of containers), a uniform network structure seems the most fitting.
● State information: as the container tag network topology would be highly dynamic and the container tags themselves are resource-constrained, a destination-based (distance-vector) protocol is the choice that will better ensure scalability and responsiveness to topological changes.
● Scheduling: sporadic communication with container tags (usually with one tag at a time) strongly suggests using an on-demand protocol.
In summary, an ideal ad-hoc protocol for the container management and monitoring system would be single-channel, utilize LQI and link-level acknowledgements, have a uniform structure, and be destination-based and on-demand. The protocol that most closely matches these features is the Ad hoc On-Demand Distance Vector (AODV) protocol [15].
D. Rogoz, F. O’Reilly

3 The Current Systems Implementation

3.1 The System Overview
Our system consists of wireless sensor nodes [3, 4] acting as container tags. The tags communicate wirelessly using the 2.4 GHz unlicensed ISM frequency band, and access to the tags is enabled through a gateway. The gateway can be connected to either a PDA or a PC/laptop and acts as a bridge, forwarding messages from a serial connection to the RF interface and back. For user interaction with the system, we have developed a specialised Graphical User Interface (GUI), which can run on any Windows Mobile PDA or Windows PC. The system functionality offers a potentially seamless substitution for the current container management and tracking methods used in the port storage yard. First of all, it enables direct RF communication with the tags: a connection can be established to find an active tag (by known container number), to find an empty tag (by known mote ID, which is a unique tag number), or to discover all of the tags in direct communication range (when no prior information is available). The system provides a Beacon function to physically locate a tag (i.e. a process where the tag is located using the received signal strength, or RSSI, the hop count and an LED proximity indicator). The container location can also be determined by accessing the stored location data (row/slot). All of the container information stored on a tag can be accessed by querying it; the full record can be displayed, including container number, type, arrival and departure dates, location, owner, and any additionally stored information. The user can change the stored data to update the container information. The system also allows activation of new/empty tags and the storage of relevant data, as well as tag deactivation (resetting the data). Fig. 18.2 summarizes the system architecture.
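As an illustration of the kind of logic the Beacon function implies, the sketch below maps RSSI and hop count to a coarse LED proximity level. The thresholds are invented for illustration and are not the system's calibrated values.

```python
# Illustrative sketch of Beacon-style proximity indication: received signal
# strength (RSSI, in dBm) plus hop count drive a coarse LED level.
# The -50/-75 dBm thresholds are invented, not the system's calibration.

def proximity_level(rssi_dbm, hop_count):
    """Return 'near', 'close' or 'far' for driving an LED blink rate."""
    if hop_count > 1:
        return "far"          # tag only reachable via intermediate tags
    if rssi_dbm >= -50:
        return "near"         # strong direct signal
    if rssi_dbm >= -75:
        return "close"
    return "far"
```

The hop count is checked first because a strong signal from a forwarding neighbour says nothing about the distance to the target tag itself.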
Fig. 18.2 Systems overview for the Container ‘tag’ management system
3.2 The Graphical User Interface
In order to facilitate user interaction with the system, we have developed a specialised Graphical User Interface (GUI). Its purpose is to provide the user with full access to the system functions through a PDA or PC screen, thus hiding much of the underlying complexity of the system. It acts as a bridge between the PDA/laptop and the WSN/container tag network. In the initial prototypes, the PDA/laptop communicates through a serial connection with a Gateway node, which in turn interacts wirelessly with the deployed container tags, using RF communication based upon the ZigBee standard. As the flexibility of the user interface is a key aspect, we have based the GUI application on a widely established Windows Mobile PDA device. The gateway is attached to the PDA using a dedicated serial cable. Fig. 18.3 shows the PDA interface. As the Windows Mobile GUI is based upon the .NET Compact Framework, which is to a large extent a subset of the full .NET Framework, the same GUI application can be launched without any additional changes on an ordinary Windows XP PC with the .NET Framework installed. Fig. 18.4 shows the same GUI running on a desktop Windows XP PC.
3.3 The Gateway and Container Tags

3.3.1 Hardware Platforms
Tyndall National Institute has provided the prototype hardware solution for the project, based upon its DSYS25z wireless sensor node [3, 4]. The node is built with
Fig. 18.3 The graphical user interface (GUI) running on a PDA
Fig. 18.4 Graphical User Interface running on a desktop PC (Windows XP)
Fig. 18.5 Wireless sensor nodes used as prototype container tags
an ATmega128 microcontroller and an Ember EM2420 radio transceiver (a Chipcon CC2420 counterpart); an RF monopole antenna is used for communication. The gateway and container tag modules are essentially the same, with the exception of a serial connector placed on the gateway module to link to a PDA/laptop. Both are enclosed in waterproof, RF-transparent boxes, with an external power switch and two LEDs. Fig. 18.5 shows a tag, sealed and open, with a Tyndall node visible.
3.3.2 Software Solutions
Wireless sensor network applications, for practical reasons, tend to be tightly bound to a specific hardware architecture and manipulate the hardware platform’s resources to execute high-level logic tasks. The applications are specialized and hardware resources are very limited. Therefore, accurate control of how these resources are used is essential, potentially making the software development process long and susceptible to errors. In addition, adapting or changing this platform will normally require the development process to be repeated, completely or in part.
Programming the nodes using conventional methods can be challenging, especially when it is necessary to use some of the more complex algorithms, such as ad-hoc networking. However, it is possible to resolve this problem by selecting an ‘operating system’ as a software platform for the container tags. In this context we have chosen TinyOS, a dedicated operating system for wireless sensor networks [5]. The gateway and container tags run applications written in nesC [6, 7], a C-like programming language for TinyOS. TinyOS is a multi-platform sensor network operating system designed by the U.C. Berkeley EECS Department to address the specific needs of embedded wireless sensor networks [16]. TinyOS is a set of “blocks”, each representing certain functionality, from which the programmer can build an application by “snapping” or “wiring” these components together for a target hardware platform. These components can be high-level logic (like routing algorithms) or software abstractions for accessing the hardware resources (such as radio communication, ADC, timers, sensors, or LEDs). They interact through well-defined bi-directional interfaces (i.e. sets of functions), which are the only access points to a component. The bi-directionality of the interfaces allows the introduction of split-phase operation, making commands in TinyOS non-blocking. The component structure goes from a top-level logic layer down to a platform-dependent hardware presentation layer; by replacing a component we can change algorithms, change hardware platforms or expand a hardware platform's functionality. TinyOS supports high levels of resource-constrained concurrency in the form of tasks and hardware event-handlers as two separate threads of execution. Scheduling tasks allows the implementation of power-saving algorithms: the node can go into sleep mode, saving energy while waiting for an event to occur [5].
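The split-phase idea can be mimicked in a few lines of Python. This is an analogy, not TinyOS or nesC code: the `send` command returns immediately, and completion is signalled later through a `send_done` event on the wired interface, with all names invented for illustration.

```python
# Analogy sketch of TinyOS split-phase operation (invented names, not the
# TinyOS API): a non-blocking command plus a later completion event.

class Radio:
    def __init__(self):
        self.send_done = None              # event handler wired in by the user
        self._pending = []

    def send(self, packet):                # command: non-blocking, just queues work
        self._pending.append(packet)
        return True                        # means "accepted", not "delivered"

    def run_tasks(self):                   # stands in for the TinyOS task scheduler
        while self._pending:
            packet = self._pending.pop(0)
            if self.send_done:
                self.send_done(packet, "success")   # event: completion signal

class TagApp:
    def __init__(self, radio):
        self.sent = []
        radio.send_done = self.on_send_done         # "wiring" the interfaces

    def on_send_done(self, packet, status):
        self.sent.append((packet, status))
```

Because `send` never blocks, the caller's single thread of control is free between command and event, which is what lets a node sleep or service other tasks in the meantime.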
3.3.3 Application Software
The basic role of the gateway is to be a bridge between the PDA (connected to it via a serial cable in the prototype) and the network of container tags, accessible through the wireless RF connection. It forwards, and returns, the messages received over UART to the RF interface. The messages follow a specific packet structure, as defined by TinyOS Active Messages, containing the destination address, message type, group ID, length, CRC and message payload. Based upon this information, the gateway can check whether a message is corrupted, filter out messages transmitted from outside the system, and assess the strength of the received radio signal. The tag stores detailed container information and makes it accessible to the system user. It communicates with the gateway using only radio packets, receiving commands and responding accordingly, to provide the previously described functionality. Each container tag also provides services for its neighbours by forwarding their messages towards a destination (through the NST-AODV protocol [8] running on the tags).
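A sketch of how such a packet might be packed and checked follows. The byte layout and CRC polynomial here are illustrative stand-ins (CRC-CCITT via Python's `binascii.crc_hqx`), not the exact TinyOS Active Message wire format.

```python
# Sketch of packing/checking a packet with the fields listed above
# (destination address, message type, group ID, length, CRC, payload).
# Byte layout and CRC choice are illustrative, not the TinyOS wire format.
import struct
import binascii

def pack_am(dest, msg_type, group, payload: bytes) -> bytes:
    header = struct.pack("<HBBB", dest, msg_type, group, len(payload))
    crc = binascii.crc_hqx(header + payload, 0xFFFF)   # CRC-CCITT stand-in
    return header + payload + struct.pack("<H", crc)

def unpack_am(frame: bytes):
    """Return (dest, type, group, payload), or None if corrupted."""
    body, (crc,) = frame[:-2], struct.unpack("<H", frame[-2:])
    if binascii.crc_hqx(body, 0xFFFF) != crc:
        return None                                    # corrupted: drop it
    dest, msg_type, group, length = struct.unpack("<HBBB", body[:5])
    return dest, msg_type, group, body[5:5 + length]
```

The group ID is what lets the gateway filter out traffic from outside the system: frames with a valid CRC but a foreign group value would simply be discarded by an extra check in `unpack_am`.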
3.4 The Prototype Deployment
Our current system test-bed consists of a small-scale, 10-container-tag network, with gateway nodes and a PDA interface. The RF communication is based on a multi-hop connection between the gateway and the nodes (see Fig. 18.6). At this stage, the full system functionality described earlier is implemented. This setup provides the test-bed for connectivity tests and makes it possible to measure the RSSI and the packet drop rate. Accessibility to every single container tag within the network, from any point in the container yard, is ensured by a multi-hop networking protocol [8]. This has been implemented to exploit the nature of the container placement (stacked rows) and allow the tags to forward radio messages from the gateway deep into the network; in this way, direct communication is not absolutely necessary in order to access a tag. The multi-hop protocol does not rely on any fixed topology, so the user can connect to the network from any place, provided that at least one tag is within the gateway’s communication range. The containers are loaded and unloaded on a regular basis; thus, container tags will be constantly entering and leaving the network and the size of the network will not be fully defined. In order to accommodate these changes, the networking protocol is reconfigurable, scalable and moderately dynamic. An issue to be addressed in the future is power efficiency, in order to ensure that tag function (between recharges) may be extended to a reasonable timeframe (i.e. months). A power-conserving radio Media Access Control (MAC) protocol, similar to those described by Ye et al. [9] and Polastre et al. [10], should be used to manage the radio state by switching it off when idle, as the radio currently used in the project consumes roughly the same amount of current whether in idle, receive or transmit mode.
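A back-of-the-envelope calculation shows why such a MAC matters. The current figures below are illustrative ballpark values for a CC2420-class radio and a 2000 mAh battery, not measurements from the project hardware.

```python
# Rough sketch of why a duty-cycled MAC matters. Illustrative numbers:
# ~20 mA with the radio on (idle, receive or transmit all cost about the
# same), ~20 uA asleep, 2000 mAh battery. Not measured project values.

def lifetime_days(capacity_mah, duty_cycle, active_ma=20.0, sleep_ma=0.02):
    """Battery lifetime if the radio is awake for `duty_cycle` of the time."""
    avg_ma = duty_cycle * active_ma + (1.0 - duty_cycle) * sleep_ma
    return capacity_mah / avg_ma / 24.0

always_on = lifetime_days(2000, duty_cycle=1.0)     # radio never sleeps
one_percent = lifetime_days(2000, duty_cycle=0.01)  # low-power listening
```

With these numbers an always-on radio drains the battery in a few days, while a 1% duty cycle extends the lifetime to roughly a year, which is why duty cycling, rather than transmit-power tuning, is the decisive lever for month-scale tag operation.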
4 Experimental Verification
To verify the feasibility of this container management system solution, we have performed a number of tests with containers currently stacked in various combinations in one of the Port of Cork container yards. A dormant container storage area
Fig. 18.6 The current system set-up for multi-hopping
Fig. 18.7 The test setup used for verifying container-to-container communication
was used as a test site, since the active storage areas were inaccessible due to normal container loading and unloading. Two key sets of tests have been performed to date. In the first series of tests, the containers were set up as pictured in Fig. 18.7; this was equivalent to a 3-D matrix of four rows, each three containers long and stacked three containers high. Selected tags acted as receivers and others transmitted from various locations. For each location, the average signal strength was measured as well as the packet delivery rate. The results showed that the packet delivery rates ranged from 73% to 100% and the RSSI (average signal strength) ranged from −85 dBm to approximately −40 dBm. The test demonstrated that communication between tags up to two containers apart will be possible with sufficient reliability. In the second series of tests, we used individual tags emitting a ‘raw’ 2.4 GHz carrier, and a directional antenna with a spectrum analyzer. The containers were placed 10 cm apart and the tag was attached to the container doors. Fig. 18.8 shows, in plan view, the set-up of this test. The signal strength was measured within a 5 m radius, with the receiving antenna pointing in the direction of the gap between the containers, where the transmitter had been placed. The results show that the transmission radiates from the gap between the containers at a wide angle, much more broadly than line of sight, and this makes container-to-container communication feasible. The test results are shown in Fig. 18.8.
5 Conclusions
The container management and tracking system described in this chapter, based upon wireless sensors, is compatible with existing operations within an environment such as the Port of Cork, while also having the potential for future enhancement. While further evaluation is warranted, it may provide an attractive opportunity for commercialization in this, or another, application domain. The two enabling features provided by the system, when described at a technological level, are: intelligence (for example, the ability to simply acquire and analyse data, as would be the case for the tag that determines its container has been incorrectly
Fig. 18.8 The shape of a transmission field radiating from the tag attached to the door of a container, which represents a preferred placement point for operational and reliability reasons
placed) and networking (for example, the ability to locate any container from any part of the yard). The combination of these features enables several innovations which are not currently facilitated in existing tracking systems. System intelligence further allows containers to be tracked and monitored individually, permitting them to log their own conditions and movement and to generate alarms where appropriate. This provides the potential to extend the system’s functions to security/immigration control and to the automatic auditing of the container management process in the port. Passive or unintelligent tagging (e.g. RFID) would not allow for this future value-added capability. The ability to record and monitor the duration of, and any events that occur during, each container’s stay at a port would provide a framework for quality management procedures and further support an automated electronic audit trail. This may be important for food, valuable and dangerous goods trans-shipments. The sensor-based ability for data ‘hopping’ in the network enables access to, and communication with, all containers within a yard from a central location, without needing to visit each one individually. It resolves issues around radio communication in what is in fact a difficult radio environment, due to the significant quantity of steel present. The system will support scalability in operation in line with port development and container traffic growth. Given the relatively low infrastructural overheads, the system could be affordable across a range of port sizes and could potentially grow through network scaling without significant additional outlay. The expansion of trade is one of the key contributors to economic development, and accessibility is a key component in realising this expansion. The peripheral regions of the world are, in general, well served by small and medium ports/trans-shipment centres, but the efficiency of this access could be further optimized.
A growth in volume at these ports, typically a strategic necessity, would have a
positive regional impact, maintaining their relevance into the future. Sensor technologies offer possible solutions, some of which are demonstrated in this chapter, that would accelerate the expansion of their operations and, if correctly designed, would provide for greater efficiencies without significant outlay by the ports. Harnessing these technologies correctly will provide a competitive advantage to the ports that use them. The key to implementing a successful version of the technology rests in addressing operational and environmental challenges through further demonstrators and field trials. Acknowledgements: This work was carried out as part of the Enterprise Ireland funded project Containers (PC/2005/126), and the support of all of the project partners is recognised.
References
1. G. Murphy and V. Vogel, “The Irish Maritime Transport Economist,” Tech. Rep. 4, The Irish Maritime Development Office, April 2007.
2. Irish Short Sea Shipping, Inter-European Trade Corridors, 2004, published by IMDO-Ireland.
3. S. Bellis, K. Delaney, B. O’Flynn, J. Barton, K. Razeeb, and C. O’Mathuna, “Development of field programmable modular wireless sensor network nodes for ambient systems,” Computer Communications, Special Issue on Wireless Sensor Networks and Applications, vol. 28, pp. 1531–1544, August 2 2005.
4. B. O’Flynn, S. Bellis, K. Mahmood, M. Morris, G. Duffy, K. Delaney, and C. O’Mathuna, “A 3-D Miniaturised Programmable Transceiver,” Microelectronics International, vol. 22, no. 2, pp. 8–12, 2005.
5. J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. Culler, and K. Pister, “System architecture directions for networked sensors,” SIGOPS Operating Systems Review, vol. 34, pp. 93–104, December 2000.
6. D. Gay, P. Levis, R. von Behren, M. Welsh, E. Brewer, and D. Culler, “The nesC language: A holistic approach to networked embedded systems,” in Proceedings of Programming Language Design and Implementation (PLDI), June 2003.
7. D. Gay, P. Levis, D. Culler, and E. Brewer, “nesC 1.1 Language Reference Manual,” May 2003.
8. C. Gomez, P. Salvatella, O. Alonso, and J. Paradells, “Adapting AODV for IEEE 802.15.4 Mesh Sensor Networks: Theoretical Discussion and Performance Evaluation in a Real Environment,” in Proceedings of the International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM’06), 2006.
9. W. Ye, F. Silva, and J. Heidemann, “Ultra-low duty cycle MAC with scheduled channel polling,” in SenSys ’06: Proceedings of the 4th International Conference on Embedded Networked Sensor Systems, New York, NY, USA, pp. 321–334, ACM, 2006.
10. J. Polastre, J. Hill, and D. Culler, “Versatile low power media access for wireless sensor networks,” in Proceedings of the Second ACM Conference on Embedded Networked Sensor Systems (SenSys), November 3–5 2004.
11. D. Laffey, D. Rogoz, B. O’Flynn, F. O’Reilly, J. Buckley, and J. Barton, “Containers – Innovative Low Cost Solutions for Cargo Tracking,” in Proceedings of the Information Technology & Telecommunications Conference 2006, Carlow, pp. 187–188, Carlow Institute of Technology, October 25–26 2006.
12. D. Rogoz, F. O’Reilly, and K. Delaney, “Dedicated Networking Solutions for Container Tracking System,” in Proceedings of the Information Technology & Telecommunications Conference 2007, Dublin, pp. 175–182, Institute of Technology Blanchardstown, October 25–26 2007.
13. L. M. Feeney, “A Taxonomy for Routing Protocols in Mobile Ad Hoc Networks,” SICS Technical Report T99/07, Swedish Institute of Computer Science, October 1999.
14. M. Becker, S. Schaust, and E. Wittman, “Performance of Routing Protocols for Real Wireless Sensor Networks,” in Proceedings of the International Symposium on Performance Evaluation of Computer and Telecommunication Systems (SPECTS 2007), San Diego, California, USA, July 16–18 2007.
15. C. Perkins, “Ad-hoc On-Demand Distance Vector Routing,” MILCOM ’97 panel on Ad Hoc Networks, 1997.
16. The TinyOS Community Forum: http://www.tinyos.net/
Conclusion

1.1 Ambient Intelligence & Microsystems
This book has sought to address the issue of creating Ambient Intelligence through heterogeneous systems, within the specific context of composing smart cooperating objects using a number of technology platforms, in particular Microsystems.
1.2 Co-innovation
The focus upon technology platforms during the early chapters illustrated both the potential of those platforms and the inter-linked nature of the design process; specific challenges cannot always be resolved through an isolated innovation process within a single technology domain. Co-design methods are more effective, particularly if a whole-systems perspective is taken during the investigative process. For certain challenges, such as the issues of energy management and reliability dealt with in later chapters, a co-design process is necessary, and in fact the creation of a co-innovation process, though difficult, is highly desirable. Collective innovation is not just about addressing a series of converging research objectives; it should include a methodology for prioritizing the requirements of the user and relevant stakeholders.
1.3 Prototyping
Understanding and providing for user needs, as well as crystallizing the requirements of industry in enabling the delivery of systems and services that meet those needs, is more often than not an iterative process; certainly it is constantly evolving, as change is an inevitable part of everyday life. It is broadly agreed that prototyping, many and often, represents the optimum approach for achieving results. Relating the results of these prototyping sequences back to something that can become recognised as an infrastructure for Ambient Intelligence is a significant task; this achievement may well only be recognised in retrospect.
1.4 ‘Practical’ Visions
However, a certain level of deconstruction of the Ambient Intelligence concept has taken place that can support a practical approach. From this process, new visions, concepts and research initiatives have grown, and they are evolving to show more clearly how they might be interlinked; these include the Disappearing Computer, Augmented Materials, Context-Awareness, Autonomic Systems, etc. Each of these research topics supports a direct or indirect role for Microsystems technologies, with the heterogeneous nature of each evolving system informing the specification, design and systems integration requirements for Microsystems devices (selected within the framework of specific applications). In some cases, this can be accommodated by existing MEMS devices and established assembly routes. In an increasing number of areas, however, this is not the case. Of course, if this emerging gap in MEMS device (and perhaps subsystem integration) technology becomes aligned with a specific user-defined problem statement, then it presents itself as an opportunity, one in which the challenge of realising what will likely be a complex co-innovation process may well be worth the collective research effort invested in it.
Index
A Acceleration sensors bulk micromachining, 56, 59 surface micro-machining, passive capacitance, 58–59 Acoustic sensors, 64 Acquisition, context data aggregation and fusion techniques, 190–191 middleware approaches, 192–193 sensors/probes, 189–190 Adaptation changing environments, 12 goal-directed, 8 pervasive system, 306 user interfaces, 10 Ad-hoc interaction, 14 Ad-hoc routing protocol location awareness, 395 and MANET, 397, 398 network characteristics, 396–397 network requirements, 399 Advanced packaging technology environmental monitoring, 124 high-density demonstrator, 124–125 textile integration conductive yarn, 122 miniature and non-obtrusive modules, 122–123 vineyard monitoring, 123–124 ALOHA protocol carrier sensing, 163 operation modes, 162 Ambient ecologies composite artifact, 216–217 conceptual model programming basic elements, 212–215 Bunge–Wand–Weber (BWW) ontology, 212
definition, 208 design patterns and programming principles, 235 integrating systems, 209 multidisciplinary efforts, 234–235 software components paradigm, 210–211 states, transitions and behavior modeling, 217–218 Ambient intelligence (AmI), 234, 285 design research, 312, 314 effective and scalable immersive technologies, 21 embedded computing system, 327–329 humidity sensors and microphones, 64 information technology systems, 20 multidisciplinary research programme, 324 ubiquitous computing, 32 user-centred process, 319 Ambient Intelligence (AmI) sensor nodes acceleration sensors, 56–59 bio-and chemo-electrical interface sensors, 64–66 building block schematics, 105–106 e-CUBES concept, 120–121 gyroscope sensors, 59–61 low power sensors, MEMS technology, 50–56 micro-environmental monitoring network, 107 miniature ambient sensor node, 108 pressure sensors, 61–62 shock sensors, 63–64 vibration sensors, 62–63 Ambient sensors, 55–56 Applications, Intel motes, 111–113 Artifacts, 206–207. See also Ubiquitous computing (UbiComp) Aspect-oriented system design (AOSD), 290 Asset tracking, containers 390–391
Augmented materials, 293 ambient intelligence, 20 analytical approaches, 295 cluster movement pattern, 377–378 co-design process, 287–288 design challenges, 293, 296 evaluation, 378 miniaturised sensing modules, 36–38 multidisciplinary initiatives, 20–21 networkable embedded sensing elements, 39 network simulation node deployment, 375–377 with mobility, 381–383 without mobility, 378–381 object creation process, 25–28 R&D concepts, 32–36 research, 30–32 simulation environment, 374–375 smart and cooperating objects fabrication, 21 smart object, 368–369 system-in-package method, 291 systems description, 28–30 vision statement, 22–25
B Batteries, 75 Behavior modeling, ambient ecologies, 217–218 Behaviour communications, 15 pervasive system, 297–300, 306 users, 13, 14 Bio-and chemo-electrical sensors, 64–66 Bluetooth, 174 Brain Opera interactive surfaces Gesture Wall, 346–347 LaserWall and Tap Tracker Window, 347–348 Magic Carpet, 348 Sensor Chair and Rhythm Tree, 346 BT node, 113–114 Bulk micromachining (BMM) acceleration sensors, 56, 59 gyroscope sensors, 59–60 low power sensors, 51–52 pressure sensors, 61–62 Bunge–Wand–Weber (BWW) ontology, ambient ecologies, 212
C Carbonization based battery structures, 75–76 CargoNet, wireless motion sensor, 351–352 Carrier sense multiple access (CSMA) network, 357 protocols, 163 Cellular mobile communication systems, 171–172 Centre programme innovation process, 336–338 POINT Programme, 334–335 sustainable model, 335–336 Chip-In-Laminate interconnection multi-chip modules, 146–147 multi-chip packaging advantages, 147, 149 chip-in-PCB and build-up layer, 148 photoresist process, 149 polymer and embedded chip, 147 Chip-package co-design, 292–293 Claytronics, 34–35 Code division multiple access (CDMA), 161–162 Co-design approach, pervasive systems classical systems development, categories of, 298, 299 environmentally-constrained situation, 305 observation, precision and accuracy, 301–303 physical and virtual sensors, 302, 303 sensing and actuation capabilities, 300 sensor fusion and uncertain reasoning RFID tag, 303 sensors diversity, 303–305 software and hardware elements, 298 Co-design process augmented materials, 287–288 element-level chip-package, 292–293 packaging stresses, 293 passive component integration, 293–295 Right First Time design, 291–292 established methods, 286–287 network level, 290 object-level disappearing computer initiative, 289–290 process, 288–289 Cognitive invisibility, 7, 9 Complementary metal-oxide-semiconductor (CMOS) compatible process, 54, 58 readout electronics, 61
silicon fibre technology gate oxide growth and twin well formation, 90–91 metal deposition and polyimide encapsulation, 93 oxide and isotropic etch, 94 polyimide addition and patterning, 92 silicon islands, 89–90 silicon-on-insulator (SOI) technology SIMOX and smart-cut technology, 88 SOI Wafer cross-section, 87 spherical silicon processing technology planar IC, 96–97 silicon spheres, 95–96 Component-based software systems. See Software components paradigm Component level devices AmI system autonomy, 50 sensor array, 49 Composeability description, 206 motivating scenario, 207 outline, 208 Computer–human interactions, 353 Computer networks layered communication network, 159 layered protocol stack approach, 158–159 principles and models, 158 Computer Technology Institute (CTI), 364 Connected limited device configuration (CLDC), 229–231 Container tracking system experimental verification, 404–406 gateway and container tags application software, 403 hardware platforms, 401–402 prototype deployment, 404 software solutions, 402–403 graphical user interface (GUI), 401 machine vision systems, 390, 391 multi-hop protocol characteristics MANET classification, 397–399 protocol selection, 399 network characteristics communication patterns, 394–395 data transmission model, 394 location awareness, 395 mobility and hardware/operating system, 396 network size and scalability, 395 passive tags, 391 sensor-based tracking system proposed solution, 391–392 technology challenges, 392
tracking data, 392–393 sensor network systems, 389 supplier/unit number code, 390 system overview and architecture, 400 Contention based protocol, MAC ALOHA protocol, 162–163 duty cycling protocol, 163–164 Context, acquisition data aggregation and fusion techniques, 190–191 middleware approaches, 192–193 sensors/probes, 189–190 Context-aware artifacts, 208, 209 Context-awareness, 12 co-design, 297, 298 definition, 188 feature oriented programming (FOP), 199 programming applications, 197–198 Context economy, 313 Context modeling and reasoning approaches, 195 description, 194 learning based reasoning, 194 modeling techniques, 196–197 service behavior adaptation, 193–194 Cooperating objects, 36. See Smart Objects Co-synthesis, 286, 287
D Damping ratios, 249 Deep reactive ion etching (DRIE), 254 Dense sensor networks, 356–359 Design patterns, observer, 220 Design process, 315, 316, 319, 320 Design research interface and interaction design, 316 accident, 318–319 action and alignment in, 318 address, 317 ambient displays, 317–318 transparency and acceptance, 319–320 look, involve and try categories, 315–316 users identification, 314 Digital Baton, interactive surfaces, 346 Digital clay, 34 Direct sensor access, context acquisition, 191–192 Disappearing computer programme, 285–286, 289–290, 295–296, 364, 365 immersive concepts, 32 PALCOM, 20–21
Distributed load sensing system experiments, 372–373 scalable approach, 371–372 Distributed sensors and actuators, 21, 36 Distributed systems, 7 Diverse sensors, 303–304 3D packaging system-in-package (SIP), 133–134 wafer-level packaging copper-to-copper thermo-compression bonding, 135 indium-gold (In-Au) micro-bumps, 135–136 micro-insert, 136, 137 Duty cycling protocol, 163–164
E Ecology, 208–209 E-CUBES system Ambient Intelligence (AmI), 121–122 components, 122 concept of, 120–122 eDesk artifact, 221–224 eDeskLamp resource specification, 232 eGadgets sensor interface board, 366–367 E-Grain advanced packaging technologies, 119 Fraunhofer concept, 118 package-on-package (PoP) concept, 119–120 wafer level assembly, 120–121 Electrochemical etching, 51 Electromagnetic generator devices, 251 load resistance, 261 micro-fabricated, 251–252 principle, 250–251 vs. electrostatic generator, 256 Electromagnetic scavenging, 67–68 Electronically functional fibre (EFF), ICs fabrication, 89 Electronic footwear, interactive dance, 349–350 Electrostatic generators disadvantages, 256 principle, 254 schematic representation, 255 vs. piezoelectric generators, 256 Electrostatic scavengers, 68–69 Embedded computing systems AmI, 328 co-design methodology, 329 disappearing computer program, 328
Index multidisciplinary research programme, 323–325 R&D programme, 326 revolution, 327 TEC centre, 325–326 technical limits, 328–329 Embedded Java Controller (EJC), 231–233 Embedded microelectronic sub-systems Chip-In-Laminate interconnection multi-chip modules, 146–147 multi-chip packaging, 147–149 photoresist process, 149 polymer and embedded chip, 147 3D packaging, 133–136 folding flex flexible substrate, 138–139 flip chip interconnect, 142–144 folded chip assembly, 137–138 module building, 144–146 silicon thinning process, 140–142 microelectronics packaging hierarchy, 132 package function, 131–132 short-term challenges, 132–133 micro-nano interconnect, 149–150 Embedded networks physical layer, 159–160 routing protocols, 164–165 scheduling based MACs, 161–162 Embedded sensing, 363, 371 Embedded systems wireless sensor networks, 32 Energy harvesting. See also Wireless sensor network node devices, 243, 245 electromagnetic generator, 250–252 electrostatic generators, 254–256 indoor solar energy harvesting system, 246–247 piezoelectric generators, 252–254 power conversion for, 260–263 thermoelectric generator, 258–260 Energy scavenging devices electromagnetic scavenging, 67–68 electrostatic scavengers, 68–69 piezo-scavengers, 69–71 radio-active generators, 74 solar energy, 71–72 storage device charging, 66 thermal scavengers, 72–74 Energy storage systems batteries and super-capacitors, 75 carbonization based battery structures, 75–76 fuel cells, 76–77
Engineering paradigm applications eDesk artifact, 221–224 synapse-based programming, 220–221 eStudy participating artifacts, 223 EU-FP7 IST project e-CUBES, 274 Event-condition-action (ECA) modeling pattern, 224–226 Extrovert gadgets project eGadgets sensor interface board, 366–367 hardware systems, 365 plug-and-play approach, 364–365 Tyndall wireless sensor mote, 367–368 user study environment, 365–366
F Failure mechanisms, NES nodes, 269 externally induced stresses climatic factors, 270–271 mobile node location, 271 internally induced stresses prediction approaches, 272 wake-sleep cycles, 271–272 strengths analytical approaches, 272–273 physics-of-failure model, 273 reliability prediction, 273–274 Failure modes and effects analysis (FMEA), 274 Faraday’s law of induction, 250 Feature oriented programming (FOP), context awareness, 199 Flip-chip, microelectronics packaging flip chip interconnection conductive adhesives, 144 gold stud bumps, 142–143 folded flex module flip-chip assembly, 145 move to micro-nano interconnection, 149 structure, 136, 137 Flow state, ambient intelligence, 319–320 Folded-flex, microelectronics packaging flexible substrate, 138–139 flip chip interconnection conductive adhesives, 144 gold stud bumps, 142–143 folded chip assembly, 137–138 module building, 144–146 folded flex assembly, 144–145 stack module, 145–146 silicon thinning process fracture stress and strength, 140–141 silicon wafer thinning, 140 wafer bow values, 141–142
Formal model, 206, 215–218 Freight container storage, 388 Frequency division multiple access (FDMA), 161 Fuel cells, 76–77
G Gadgetware architectural style (GAS), 219 GaitShoe, wearable bio-motion analysis, 350 Galvanic skin response (GSR) monitor, 359 GAS-OS kernel architecture, 224–226 Geographic routing algorithms, 395 Gesture Wall, interactive surfaces, 346–347 Global positioning system (GPS) device, 189 GLObal Smart Spaces (GLOSS) project, 10 Graphical models, 196 Graphical user interface (GUI), 400, 401 Groggy wakeup, sensor system, 351 Gyroscope sensors bulk micromachining, 59–60 surface micromachining, 61
H Hardware components, 10–11 Hardware reliability challenges, NES, 274 externally induced stresses, 270–271 failure modes and effects analysis, 274 hostile environments, 276 internally induced stresses, 271–272 low cost components, 275 physics-of-failure model, 273 reliability, 269–270 self-discharge and recovery effect, 277 signal integrity, 279 strength analysis, 272–273 Tyndall mote power consumption, 277–278 Hardware toolkits, 364, 365, 366 Heterogeneous system-in-a-package (HSIP), 269, 276, 279 Human–computer interaction (HCI), 4, 368 Humidity sensors, 64 Hybrid systems integration, 31
I IEEE 802.11, 161. See also Wireless network standards IEEE 802.15.1, 174 IEEE 802.15.4, 177 network node types, 175 wirelessHART, 178 ZigBee, 176–177
Industry programme analysis of, 333–334 case studies wireless access to medical files, 332–333 wireless sensor systems for, 331–332 industry prototyping projects, 330–331 TEC Centre Industry R&D Workplan, 329, 330 Inertial measurement unit (IMU), 349 Information Society Technologies Advisory Group (ISTAG), 316 Innovation model, 312, 313 Integral passive components, 293–295 Integrated circuits (ICs) fabrication, CMOS device CMOS inverter creation, 83–86 silicon CMOS processing evolution electron beam (e-beam) lithography, 86–87 silicon fibre technology, 89–95 silicon-on-insulator (SOI) technology in, 87–88 silicon spheres, 95–97 silicon wafers, 82–83 Intellectual property (IP), 335, 338 Intelligent systems, 209 Intel mote Intel I-Mote and I-Mote2, 112 SHIMMER, 112–113 Interaction design and interface, 316 accident, 318–319 action and alignment, 318 address, 317 ambient displays, 317–318 transparency and acceptance, 319–320 Interactive surfaces. See Brain Opera interactive surfaces Interconnection, microelectronics packaging chip-in-laminate interconnection multi-chip modules, 146–147 multi-chip packaging, 147–149 photoresist process, 149 polymer and embedded chip, 147 chip level stacking, 134 flip-chip interconnection conductive adhesives, 144 gold stud bumps, 142–143 micro-nano interconnection, 149–150 wafer-level packaging, 133 copper-to-copper thermo-compression bonding, 135 indium-gold (In-Au) micro-bumps, 135–136 micro-insert, 136, 137
Interface, 211 Invisible computing, 22, 32
K KegMonitor™, 332 Key value modeling techniques, 196
L Large interactive displays, 347 LaserWall, interactive surfaces, 347–348 Layered communication network medium access control (MAC) contention based protocol, 162–164 protocols, 160 scheduling-based protocol, 161–162 network layer and routing protocols MANET routing protocols, 165–167 sensor network routing protocols, 167–169 physical layer, 159–160 transport protocols, 169–170 Layered protocol stack approach, 158–159 Lead zirconate titanate (PZT), 253–254 Light sources indoor solar energy harvesting system, 246–247 outdoor and indoor applications, 246 photoelectric effect, 245 solar cell load resistance, 261 materials and efficiencies, 245–246 Localised scalability, 8 Location container tracking system, 395, 400, 405 environmental parameters, 12 physical, 4 sensitivity, 7, 8 Logic based models, 196 Low power sensors advantage of on-chip MEMS integration, 54 ambient sensors, 55–56 bulk micromachining, 51–52 mechanical microfabrication and system integration, 54–55 mechanical sensors, 53 surface micromachining, 51–53
M Magic Carpet, interactive surfaces, 348 Markup scheme modeling techniques, 196
Masking heterogeneity, 8 Material integrated networks, augmented materials cluster movement pattern, 377–378 evaluation, 378 network simulation node deployment, 375–377 with mobility, 381–383 without mobility, 378–381 simulation environment, 374–375 top-down approach, 373 WSN protocol, 374 Medium access control (MAC) protocols contention based protocol ALOHA protocol, 162–163 duty cycling protocol, 163–164 scheduling-based protocol, 161–162 Micro-electro-mechanical system (MEMS) devices advantages, 52 fabrication processes, 67 microphone, 64, 65 nanotechnology, 34 structure integration, 61 Micro-fabricated electromagnetic generators, 251–252 Micro fuel cells, 76–77 Micro-machined variable capacitors, 255 Micro-nano interconnection, 149–150 Microphones, 64 Micro-sensors, 38 Microsoft EasyLiving Project, 10 Middleware approaches, context acquisition, 192–193 Miniaturisation, silicon thinning process, 140 Miniaturised sensing elements flex and 3-D silicon assembly, 37–38 networkable sensor modules, 36–37 Mobile ad-hoc network (MANET) routing protocols pro-active protocols, 165–166 reactive protocols, 165 Mobile ad-hoc networks (MANETS), applications using, 397–399 Mobile computing, 7 Moore’s law, 21, 22, 31, 349 Multi-hopping protocol arrangement, 404 characteristics, 397–399 network characteristics, 396–397 network creation, 393 sensor identification devices (SIDs), 391 Multi-national Companies (MNCs), 325, 329, 330
N National Microelectronics Research Centre (NMRC), 364–368 National Microelectronics Research Institute. See Tyndall National Institute Networkable embedded sensing elements, 39 Networked embedded systems (NES) failure mechanisms, 269 externally induced stresses, 270–271 internally induced stresses, 271–272 strengths, 272–274 hardware reliability challenges, 274 hostile environments, 276 low cost components, 275 self-discharge and recovery effect, 277 Tyndall mote power consumption, 277–278 reliability, 269–270 strengths, 272–274 stresses, 270–272 Network layer, routing protocols MANET, 165–167 sensor network, 167–169 Network simulation node deployment, 375–377 with mobility, 381–383 without mobility, 378–381
O Object creation process computation and composition, 26–27 event-based systems, 28 material-system, 26 physical interfaces, digital representations, 25–26 Object oriented modeling, 196 Object oriented programming system (OOPS), 198 Ontology based models, 197
P Package-on-package (PoP) concept, ambient sensing, 119–120 Packaging, 3-dimensional system-in-package (SiP), 133–134 wafer-level packaging copper-to-copper thermo-compression bonding, 135 indium-gold (In-Au) micro-bumps, 135–136 micro-insert, 136, 137
Paintable computing, 35 Parasitic mobility, mobile sensor networks, 353–354 Passive tags, 391 Percussion sensors, interactive surface, 346 Pervasive computing systems ambient intelligence, 32 applications healthcare, 4–5 public transportation, 5–6 smart home environment, 6 context-aware systems, 188–189 convergence of, 40 history of distributed systems and mobile computing, 6–7 research issues, 7, 9 taxonomy of computer systems, 7–8 information technology systems, 20 research issues context-awareness, 12 hardware components, 10–11 interaction, 12–13 security, privacy and trust management, 13–14 software engineering, 11–12 significant projects, 9–10 smart home scenario, 188 software perspective, augmented materials, 14–15 vision of, 3 vs. ubiquitous computing, 4 Physical invisibility, 7 Picture Archiving & Communication Systems (PACS), 333 Piezoelectric generators advantages, 256 load resistance, 261 piezo-electric effect, 253–254 piezoelectric strain constant, 252–253 Piezo-scavengers event-based energy supply, 70 macro-fibre composites, 70–71 piezo-reactive materials, 69 Plug-and-play approach, 364 Plugs, 215 PLUG sensors, 354–356 Polyvinylidene fluoride (PVDF), 253 Practical Optimisation and Innovation of Networkable Technologies (POINT) Programme, 334–335 Pressure sensor, 61–62 Programmable Matter, 34
Programming context aspect oriented programming, 198–199 context aware applications, 197 methodologies for, 198 Prototyping projects industrial programme, 330–331, 334 innovation process, 336 POINT programme, 335 TEC Centre, 326 Pushpin Computer, dense sensor network, 356–357 Pushpin computing project, 35
Q Quadrature amplitude modulation, 160 Quasi-passive wakeup tag, wireless motion sensor, 351–352
R Radio-active generators, 74 Radio frequency, 392 Radio-frequency identification (RFID), 303–304 RFID tag, 352, 353 Reliability, NES, 269–270 Resource consumer (RC), 231 Resource directory (RD), 231–233 Resource providers (RP), 231 Rhythm Tree. See Percussion sensors Roadbed prototype sensor, sensate media, 357–359 Routing protocols. See Layered communication network
S Scalability, embedded computing system, 326 Scheduling-based protocol, MAC, 161–162 Seebeck effect and coefficients, 257–258 Sensor Chair, interactive surfaces, 346 Sensor identification devices (SIDs), 391–393 Sensor networks active badges, 191–192 data aggregation and fusion techniques, 190–191 energy harvesting, 353–354 global positioning system (GPS) device, 189 PLUG platform, 355–356 routing protocols data centric networking and location aware protocol, 168–169 hierarchical protocol, 167–168
quality-of-service (QoS)-aware and cross-layer routing, 169 sensate media dense sensor networks, 356–357 roadbed prototype sensor node, 357–359 UbER-Badge and spinner, 359 Sensor node lifetime, 270, 273, 277, 279 Sensors and actuators, 21, 31, 32, 34 augmented materials, 35 miniaturised sensor element, 36–38 networkable sensor elements, 37 node miniaturisation, 31 nodes, 389, 392, 401, 402 software abstractions, 27 Separation by implantation of oxygen (SIMOX), SOI process, 87–88 Shock sensors, 63–64 Silicon, ICs fabrication advantages, 82 and CMOS processing evolution electron beam (e-beam) lithography, 86–87 silicon fibre technology, 89–95 silicon-on-insulator (SOI) technology in, 87–88 silicon spheres, 95–97 fibre technology gate oxide growth and twin well formation, 90–91 metal deposition and polyimide encapsulation, 93 oxide and isotropic etch, 94 polyimide addition and patterning, 92 silicon islands, 89–90 silicon wafers fabrication of, 82 thermal oxidation, 83 Silicon islands, 89–90 Silicon-on-insulator (SOI) technology, 254 SIMOX and smart-cut technology, 88 SOI Wafer cross-section, 87 Silicon thinning process fracture stress and strength, 140–141 wafer bow values, 141–142 wafer thinning, 140 Situation environmentally-constrained, 305 risk assessment, 13 Small-to-medium enterprise (SME) industrial programme, 329, 330 and TEC Centre, 325, 326 Smart-cut technique, SOI process, 88
Smart Dust, 116–117 initiative, 286, 290, 295–296 nanotechnology, 33 research programme, 37 sensor nodes miniaturisation, 31 Smart Floors, 32–33 Smart Matter, 33–34, 117 Smart objects, 36, 295 augmented materials, 368–369 extrovert gadgets project, 364–368 problem statement, 368 Smart Table design, 369–371 Software components paradigm, 210–211 Software engineering, 11–12 Solar energy, 71–72 Spherical silicon processing technology spherical IC silicon spheres, 95–96 vs. planar IC, 96–97 Spimes, 36 Spinner, wearable badge sensor, 359 Spread spectrum communications. See Code division multiple access (CDMA) Strengths, NES nodes analytical approaches, 272–273 physics-of-failure model, 273 reliability prediction, 273–274 Stresses, NES nodes externally induced stresses climatic factors, 270–271 mobile node location, 271 internally induced stresses prediction approaches, 272 wake-sleep cycles, 271–272 Super-capacitors, 75 Surface-acoustic-wave (SAW) devices, 353 Surface micro-machining (SMM) acceleration sensors, 58–59 capacitive uniaxial accelerometer, 54 gyroscope sensors, 61 plasma-based etching processes, 52–53 vibration sensor, 63 Sustainability, Centre Programme, 334 Synapse-based programming, 220–221 Synchronised switch harvesting (SSH), 261 System in package (SIP), 31, 133–134, 275–276 Systems description, augmented materials high-level systems description, 29–30 local systems architecture, 28 Systems design, Smart Table assembly sequence, 370 distributed load sensing system, 371–373 macro-and micro-level augmentation, 371
T Tagging, 391, 406 Tap Tracker Window, interactive surfaces, 347 Technologies for embedded computing (TEC) centre, 325 co-design methodology, 329 industrial programme, 329–331 POINT Programme, 335 and R&D programme, 326 ultra-wide-band (UWB) technology, 333 wireless sensor network, 332 Textile integration, ambient sensing conductive yarn for, 122 miniature and non-obtrusive modules for, 122–123 Thermal gradient sources Peltier–Seebeck effect, 257 power conversion, 260–261 thermocouple devices, 258–260 thermopiles, 257, 258 Thermal scavengers miniature generators, 72, 74 principle of, 72–73 Seebeck effect, 72 Thermoelectric generator load resistance, 261 micro-fabrication techniques, 259–260 Peltier coolers, 259 power conversion, 260 principle, 258 Time division multiple access (TDMA), 161 Time slot, 161 TinyOS, 403 Top-down and bottom-up methodologies, 30–31 Transport protocols, 169–170 Tribble, dense sensor networks, 357 Tyndall mote communication layer, 114–115 miniaturisation of, 115 Wireless sensor mote, application, 367–368 Tyndall National Institute, 349, 364
U UbER-Badge, wearable sensors, 359, 360 Ubiquitous computing (UbiComp). See also Pervasive computing ambient ecologies artifacts, 211, 216 basic elements, 212–215 composite artifact, 216–217 states, transitions and behavior modeling, 217–218
applications of event-condition-action (ECA), 226–228 GAS-OS middleware architecture, 224–226 simple artifact (eDesk), 220–221 synapse-based programming, 221–224 devices, 188 engineering paradigm, 220 gadgetware architectural style (GAS), 218–219 Ultra-miniature sensor node, 119–120 Ultra-wide-band (UWB) technology, 332–333 User-centred design. See also Design research innovation model, 312–313 Snuggly Ifbot and context economy, 313
V Vibration sensor, 62–63 Vibration sources damping ratios, 249 electromagnetic generator, 250–252 electrostatic generators, 254–256 mass-spring-damper system, 248 piezoelectric generators, 252–254 Virtual sensor, 302 Vision statement augmented material creation, 22 conceptual outline, 22–23 3-D distributions, 23 heterogeneous network, 23–24 physics and chemistry disciplines, 24–25 smart objects creation, 24
W Wafer-level packaging (WLP) die stacking process copper-to-copper thermo-compression bonding, 135 indium-gold (In-Au) micro-bumps, 135–136 micro-insert, 136, 137 multichip module (MCM), 134 Wearable sensor node, 349–351 WeC wireless sensor platform, 116–117 Whole-smart-artifact design process, 369 Wireless motion sensor, 351–352 Wireless network standards applications of, 179–180 cellular mobile communication systems, 171–172 embedded devices, 170–171 IEEE 802.11, 172–174
IEEE 802.15.1, 174 IEEE 802.15.4 network node types, 175 WirelessHART standard, 178 ZigBee, 176–177 6LoWPAN and Z-wave protocols, 179 Wireless Sensemble IMU, interactive dance, 350, 351 Wireless sensor network nodes Ambient Intelligence (AmI) building block schematics, 105–106 micro-environmental monitoring network, 107 miniature ambient sensor node, 108 BT node, 113–114 E-CUBES system components, 122 concept of, 120–122 E-Grain concept advanced packaging technologies, 119 Fraunhofer concept, 118 package-on-package (PoP) concept, 119–120 wafer level assembly, 120–121 energy harvesting system electromagnetic generators, 250–252 electrostatic generators, 254–256 piezoelectric generators, 252–254 solar energy harvesting system, 245–247 thermoelectric generators, 258–260
Intel mote Intel I-Mote and I-Mote2, 112 SHIMMER, 112–113 Mica-Dot and TMote Sky nodes, 109, 111 power consumption profile, 243, 244 power conversion circuit, 260–262 power system block diagram, 244, 245 selection characteristics, 262–263 smart dust, 116–117 smart matter, 117 tyndall mote, 114–115 communication layer, 114–115 miniaturisation of, 115 Wireless sensor networks (WSNs), 268, 332, 338, 393, 397, 401 distributed load sensing system, 371–373 embedded systems, 32 NMRC/Tyndall, 368 simulation environment, 374–375 top-down approach, 373–374
Z ZigBee, wireless mesh networking standard, 176–177 Z-tiles interactive floor, dense sensor networks, 357 Z-Wave, interoperable wireless communication protocol, 179