Innovative Technology for Computer Professionals
March 2005
Engineers, Programmers, and Black Boxes, p. 8
Building the Software Radio, p. 87
http://www.computer.org
The Winner’s Curse in High Tech, p. 96
March 2005, Volume 38, Number 3
COMPUTING PRACTICES 26
Integrating Biological Research through Web Services
Hong Tina Gao, Jane Huffman Hayes, and Henry Cai
A case study demonstrates that Web services could be key to coordinating and standardizing incompatible applications in bioinformatics, an effort that is becoming increasingly critical to meaningful biological research.
COVER FEATURES 33
Socially Aware Computation and Communication
Alex (Sandy) Pentland
By building machines that understand social signaling and social context, technologists can dramatically improve collective decision making and help keep remote users in the loop.
41
Designing Smart Artifacts for Smart Environments
Norbert A. Streitz, Carsten Röcker, Thorsten Prante, Daniel van Alphen, Richard Stenzel, and Carsten Magerkurth
Smart artifacts promise to enhance the relationships among participants in distributed working groups, maintaining personal mobility while offering opportunities for the collaboration, informal communication, and social awareness that contribute to the synergy and cohesiveness inherent in collocated teams.
Cover design and artwork by Dirk Hagner
50
The Gator Tech Smart House: A Programmable Pervasive Space
Sumi Helal, William Mann, Hicham El-Zabadani, Jeffrey King, Youssef Kaddoura, and Erwin Jansen
Many first-generation pervasive computing systems lack the ability to evolve as new technologies emerge or as an application domain matures. Programmable pervasive spaces, such as the Gator Tech Smart House, offer a scalable, cost-effective way to develop and deploy extensible smart technologies.

ABOUT THIS ISSUE
Increasingly inexpensive consumer electronics, mature technologies such as RFID, and emerging wireless sensor technologies make possible a new era of smart homes, offices, and other environments. In this issue, we look at state-of-the-art technology applications including socially aware communication-support tools; programmable pervasive spaces that integrate system components; smart environments that incorporate information, communication, and sensing technologies into everyday objects; and an industry-specific initiative that uses a Web-based approach to bring processes, people, and information together to optimize efficiency.

61
Web-Log-Driven Business Activity Monitoring
Savitha Srinivasan, Vikas Krishna, and Scott Holmes
Using business process transformation to digitize shipments from IBM’s Mexico facility to the US resulted in an improved process that reduced transit time, cut labor costs and paperwork, and provided instant and perpetual access to electronically archived shipping records.
IEEE Computer Society: http://www.computer.org
Computer: http://www.computer.org/computer
[email protected]
IEEE Computer Society Publications Office: +1 714 821 8380
OPINION 8
At Random
Engineers, Programmers, and Black Boxes
Bob Colwell
NEWS 14
Industry Trends
Search Engines Tackle the Desktop
Bernard Cole
18
Technology News
Is It Time for Clockless Chips?
David Geer
22
News Briefs
Finding Ways to Read and Search Handwritten Documents ■ A Gem of an Idea for Improving Chips ■ IBM Lets Open Source Developers Use 500 Patents
MEMBERSHIP NEWS 75
Computer Society Connection
80
Call and Calendar

COLUMNS
87
Embedded Computing
Building the Software Radio
Wayne Wolf
93
Standards
Public Opinion’s Influence on Voting System Technology
Herb Deutsch

NEXT MONTH: Beyond Internet
96
IT Systems Perspectives
The Winner’s Curse in High Tech
G. Anandalingam and Henry C. Lucas Jr.
100
The Profession
An Open-Secret Voting System
Thomas K. Johnson
DEPARTMENTS
4 Article Summaries
6 Letters
12 32 & 16 Years Ago
70 Career Opportunities
73 Advertiser/Product Index
83 Products
86 Bookshelf
90 IEEE Computer Society Membership Application

Membership Magazine of the IEEE Computer Society

COPYRIGHT © 2005 BY THE INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS INC. ALL RIGHTS RESERVED. ABSTRACTING IS PERMITTED WITH CREDIT TO THE SOURCE. LIBRARIES ARE PERMITTED TO PHOTOCOPY BEYOND THE LIMITS OF US COPYRIGHT LAW FOR PRIVATE USE OF PATRONS: (1) THOSE POST-1977 ARTICLES THAT CARRY A CODE AT THE BOTTOM OF THE FIRST PAGE, PROVIDED THE PER-COPY FEE INDICATED IN THE CODE IS PAID THROUGH THE COPYRIGHT CLEARANCE CENTER, 222 ROSEWOOD DR., DANVERS, MA 01923; (2) PRE-1978 ARTICLES WITHOUT FEE. FOR OTHER COPYING, REPRINT, OR REPUBLICATION PERMISSION, WRITE TO COPYRIGHTS AND PERMISSIONS DEPARTMENT, IEEE PUBLICATIONS ADMINISTRATION, 445 HOES LANE, P.O. BOX 1331, PISCATAWAY, NJ 08855-1331.
Editor in Chief
Doris L. Carver
Louisiana State University
[email protected]

Associate Editors in Chief
Bill N. Schilit, Intel
Kathleen Swigger, University of North Texas

Computing Practices
Rohit Kapur
[email protected]

Special Issues
Bill Schilit
[email protected]

Perspectives
Bob Colwell
[email protected]

Research Features
Kathleen Swigger
[email protected]

Web Editor
Ron Vetter
University of North Carolina at Wilmington
[email protected]

Area Editors
Computer Architectures: Douglas C. Burger, University of Texas at Austin
Databases/Software: Michael R. Blaha, OMT Associates Inc.
Graphics and Multimedia: Oliver Bimber, Bauhaus University Weimar
Information and Data Management: Naren Ramakrishnan, Virginia Tech
Multimedia: Savitha Srinivasan, IBM Almaden Research Center
Networking: Jonathan Liu, University of Florida
Security: Bill Arbaugh, University of Maryland
Software: H. Dieter Rombach, AG Software Engineering; Dan Cooke, Texas Tech University

Column Editors
At Random: Bob Colwell
Bookshelf: Michael J. Lutz, Rochester Institute of Technology
Embedded Computing: Wayne Wolf, Princeton University
Entertainment Computing: Michael R. Macedonia, Georgia Tech Research Institute
Invisible Computing: Bill N. Schilit, Intel
IT Systems Perspectives: Richard G. Mathieu, St. Louis University
Standards: Jack Cole, US Army Research Laboratory
The Profession: Neville Holmes, University of Tasmania

Advisory Panel
James H. Aylor, University of Virginia
Thomas Cain, University of Pittsburgh
Ralph Cavin, Semiconductor Research Corp.
Ron Hoelzeman, University of Pittsburgh
Edward A. Parrish, Worcester Polytechnic Institute
Alf Weaver, University of Virginia

2004 IEEE Computer Society President
Carl K. Chang
[email protected]

CS Publications Board
Michael R. Williams (chair), Michael R. Blaha, Roger U. Fujii, Sorel Reisman, Jon Rokne, Bill N. Schilit, Nigel Shadbolt, Linda Shafer, Steven L. Tanimoto, Anand Tripathi

CS Magazine Operations Committee
Bill Schilit (chair), Jean Bacon, Pradip Bose, Doris L. Carver, Norman Chonacky, George Cybenko, John C. Dill, Frank E. Ferrante, Robert E. Filman, Forouzan Golshani, David Alan Grier, Rajesh Gupta, Warren Harrison, James Hendler, M. Satyanarayanan

Editorial Staff
Scott Hamilton, Senior Acquisitions Editor, [email protected]
Judith Prow, Managing Editor, [email protected]
James Sanders, Senior Editor
Lee Garber, Senior News Editor
Chris Nelson, Associate Editor
Bob Ward, Membership News Editor
Mary-Louise G. Piner, Staff Lead Editor
Bryan Sallis, Manuscript Assistant, [email protected]
Design: Larry Bauer, Dirk Hagner
Production: Larry Bauer

Administrative Staff
Executive Director: David W. Hennage
Publisher: Angela Burgess
Assistant Publisher: Dick Price
Membership & Circulation Marketing Manager: Georgann Carter
Business Development Manager: Sandy Brown
Senior Advertising Coordinator: Marian Anderson

Circulation: Computer (ISSN 0018-9162) is published monthly by the IEEE Computer Society. IEEE Headquarters, Three Park Avenue, 17th Floor, New York, NY 10016-5997; IEEE Computer Society Publications Office, 10662 Los Vaqueros Circle, PO Box 3014, Los Alamitos, CA 90720-1314; voice +1 714 821 8380; fax +1 714 821 4010; IEEE Computer Society Headquarters, 1730 Massachusetts Ave. NW, Washington, DC 20036-1903. IEEE Computer Society membership includes $19 for a subscription to Computer magazine. Nonmember subscription rate available upon request. Single-copy prices: members $20.00; nonmembers $94.00. Postmaster: Send undelivered copies and address changes to Computer, IEEE Membership Processing Dept., 445 Hoes Lane, Piscataway, NJ 08855. Periodicals Postage Paid at New York, New York, and at additional mailing offices. Canadian GST #125634188. Canada Post Corporation (Canadian distribution) publications mail agreement number 40013885. Return undeliverable Canadian addresses to PO Box 122, Niagara Falls, ON L2E 6S8 Canada. Printed in USA.

Editorial: Unless otherwise stated, bylined articles, as well as product and service descriptions, reflect the author’s or firm’s opinion. Inclusion in Computer does not necessarily constitute endorsement by the IEEE or the Computer Society. All submissions are subject to editing for style, clarity, and space.
ARTICLE SUMMARIES

Integrating Biological Research through Web Services
pp. 26-31
Hong Tina Gao, Jane Huffman Hayes, and Henry Cai

At present, compatibility problems prevent researchers from cooperating in using bioinformatics to solve important biological problems. Web services might be a way to solve this integration problem. Web technology provides a higher layer of abstraction that hides implementation details from applications so that each organization can concentrate on its own competence and still leverage the services other research groups provide.
To test the potential of a Web services solution, the authors implemented a microarray data mining system that uses Web services in drug discovery—a research process that attempts to identify new avenues for developing therapeutic drugs. Although their implementation focuses on a problem within the life sciences, they strongly believe that Web services could be a boon to any research field that requires analyzing and mining large volumes of data.

Socially Aware Computation and Communication
pp. 33-40
Alex (Sandy) Pentland

Most would agree that today’s communication technology seems to be at war with human society. Pagers buzz, cell phones interrupt, and e-mail begs for attention. Technologists have responded with well-meaning solutions that ultimately fail because they ignore the core problem: Computers are socially ignorant.
A research group at MIT is taking the first steps toward quantifying social context in human communication. These researchers have developed three socially aware platforms that objectively measure several aspects of social context, including analyzing the speaker’s tone of voice, facial movement, or gestures.

Designing Smart Artifacts for Smart Environments
pp. 41-49
Norbert A. Streitz, Carsten Röcker, Thorsten Prante, Daniel van Alphen, Richard Stenzel, and Carsten Magerkurth

The integration of information, communication, and sensing technologies into our everyday objects has created smart environments. Creating the smart artifacts that constitute these environments requires augmenting their standard functionality to support a new quality of interaction and behavior.
A system-oriented, importunate smartness approach creates an environment that gives individual smart artifacts or the environment itself certain self-directed actions based on previously collected information. For example, a space can be smart by having and exploiting knowledge about the persons and artifacts currently situated within its borders. In contrast, a people-oriented, empowering smartness approach places the empowering function in the foreground by assuming that smart spaces make people smarter. This approach empowers users to make decisions and take actions as mature and responsible people.
Although in some cases it might be more efficient if the system doesn’t ask for a user’s feedback and confirmation at every step in an action chain, the overall design rationale should aim to keep the user in the loop and in control whenever possible.

The Gator Tech Smart House: A Programmable Pervasive Space
pp. 50-60
Sumi Helal, Hicham El-Zabadani, Youssef Kaddoura, Erwin Jansen, Jeffrey King, and William Mann

Many first-generation pervasive computing systems lack the ability to evolve as new technologies emerge or as an application domain matures. Integrating numerous heterogeneous elements is mostly a manual, ad hoc process. The environments are also closed, limiting development or extension to the original implementers.
To address this limitation, the University of Florida’s Mobile and Pervasive Computing Laboratory is developing programmable pervasive spaces in which a smart space exists as both a runtime environment and a software library. Service discovery and gateway protocols automatically integrate system components using generic middleware that maintains a service definition for each sensor and actuator in the space. Programmers assemble services into composite applications, which third parties can easily implement or extend.

Web-Log-Driven Business Activity Monitoring
pp. 61-68
Savitha Srinivasan, Vikas Krishna, and Scott Holmes

Business process transformation defines a new level of business optimization that manifests as a range of industry-specific initiatives that bring processes, people, and information together to optimize efficiency. For example, BPT encompasses lights-out manufacturing, targeted treatment solutions, real-time risk management, and dynamic supply chains integrated with variable pricing.
To examine how BPT can optimize an organization’s processes, the authors describe a corporate initiative that was developed within IBM’s supply chain organization to transform the import compliance process that supports the company’s global logistics.
LETTERS
CONCERNS ABOUT EDUCATION

In “Determining Computing Science’s Role” (The Profession, Dec. 2004, pp. 128, 126-127), Simone Santini speaks for many of us who are worried about the direction of computer science—and higher education in general.
I’m concerned that we are fast approaching a time in this country when science will be directed by powerful industry and business objectives first and foremost, and “pure research” will become increasingly marginalized. I believe this is the end result of a capitalist system, where money rules nearly every activity. This process was given a big push by US President Ronald Reagan 20 years ago, and it’s now accelerating under the Bush administration. Unfortunately, I don’t see any way to stop this slide under the present conditions and cultural climate.
Jim Williams
Silver Springs, Md.
[email protected]

I enjoyed reading Simone Santini’s excellent article in Computer’s December issue. From working in both academia and industry for many years, I can add the following.
Industry is concerned not just with commercial applicability, but with immediate commercial applicability (their thinking is very short term) in response to current requests from customers—it’s an easier sale if the customer is already demanding the product. A breakthrough that has immediate commercial applicability, but is so novel that no customer has thought of it and asked for it, is of lesser value.
There is an infinite number of algebras that can be defined and an infinite number of algorithms that can be developed, but relational algebra is very helpful and so is Quicksort. All academic pursuits are not equal, and there needs to be some measure of the usefulness of one over another. I agree that short-term industrial concerns should not dictate this measure.
Steve Rice
University of Mississippi
[email protected]

Simone Santini responds: Consumer wishes often don’t convey the infallible foresight that industry would like. In 1920, consumers didn’t know they wanted radio. In 1975, they didn’t know they wanted CDs—they were perfectly happy with vinyl. At most, they merely desired better pickups so as not to ruin their records, and they wanted better hi-fi systems to play them on. The list goes on.
The problem is that, in many cases, industry only takes small steps for fear of the risks, forgetting that no focus group will ever propose the next major step. All you can get from a focus group is advice on how to marginally improve an existing product. This is important, of course, but there is more to innovation, even to industrial innovation, than that—academia, as I have tried to argue, should have different priorities.
I have nothing against practical applications of computing science, of course. In fact, I think any mathematician would be happy to know that his theorem has improved the bread-to-prosciutto ratio in sandwiches worldwide. I am just saying that practical applications can’t be the force that drives the discipline.
The fact is that Quicksort and relational databases do not spring up whole like Athena from the head of Zeus. They are part of a process, and the process must proceed by its own internal logic. It would be an illusion to think that you can get results that have practical applicability without the “pure” research that lies behind them. No amount of money could have convinced engineers in the Victorian era to invent television. It took Maxwell’s aesthetic dissatisfaction when faced with the asymmetry of the field equations to get things started.
Industry would like to have “ready-to-wear” research—applicable results without the cultural (and often not directly applicable) background—but this is an illusion.
J2EE FRAMEWORK DEVELOPER

The article titled “J2EE Development Frameworks” (Rod Johnson, IT Systems Perspectives, Jan. 2005, pp. 107-110) was well-written and insightful. However, it would have been useful to know that the author is also one of the creators of the Spring framework. This connection does not detract from the article, but it is clearly a relevant piece of information that should have been disclosed to the reader.
Landon Davies
Baltimore, Md.
[email protected]

Rod Johnson replies: As a former academic, I agree that it is important to remain impartial with regard to specific technologies. Therefore, I took care to mention alternatives to Spring when writing this article.
RETOOLING FOR SUCCESS IN A KNOWLEDGE-BASED ECONOMY

In “People and Software in a Knowledge-Based Economy” (The Profession, Jan. 2005, pp. 116, 114-115), Wojciech Cellary uses simple and elegant service sector taxonomies to analyze human roles in a knowledge-based economy. He rightly points out that even as the increasing use of computers to provide routine intellectual services shrinks the market for humans performing these services, humans will continue to excel in areas that involve the production of intangible goods and advanced services.
Although the author anticipates that robots and automated machines will prevail in the production of tangible goods (presumably in the industrial sector), he does not elaborate on the impact of automation in the manual services sectors. It is particularly interesting to observe the evolving roles of humans in manual skill areas that not so long ago required only moderate intellectual abilities. For example, modern automobiles come with complex electronically controlled subsystems that require using sophisticated diagnostic machines for troubleshooting when they fail. In addition to learning how to operate these machines, auto mechanics also must keep up to date with new technologies so they can recognize and fix problems, especially as additional innovations are incorporated into newer models.
The proliferation of self-serve systems has eliminated the need for many services that humans formerly performed; instead, the human role now focuses on providing supervision and offering assistance if needed. Even household appliances are becoming intelligent—vacuum cleaners that can guide themselves around a room are now well within the reach of the average consumer.
While technology improves human productivity and frees people from tedious effort, at times it also has the effect of eliminating employment opportunities. The challenge for those affected is to retool their skills in ways that emphasize the same qualities that would enable them to succeed in intellectual areas, namely creativity, manual expertise, and interpersonal skills.
Badri Lokanathan
Atlanta, Ga.
[email protected]
We welcome your letters. Send them to
[email protected].
REACH HIGHER

Advancing in the IEEE Computer Society can elevate your standing in the profession.
Application to Senior-grade membership recognizes
✔ ten years or more of professional expertise
Nomination to Fellow-grade membership recognizes
✔ exemplary accomplishments in computer engineering
GIVE YOUR CAREER A BOOST ■ UPGRADE YOUR MEMBERSHIP
www.computer.org/join/grades.htm
AT RANDOM

Engineers, Programmers, and Black Boxes
Bob Colwell
Universities teach engineers all sorts of valuable things. We’re taught mathematics—especially calculus, probability, and statistics—all of which are needed to understand physics and circuit analysis. We take courses in system design, control theory, electronics, and fields and waves. But mostly what we’re taught, subliminally, is how to think like an engineer.
Behind most of the classes an engineer encounters as an undergraduate is one overriding paradigm: the black box. A black box takes one or more inputs, performs some function on them, and produces one output. It seems simple, but that fundamental idea has astonishing power. You can build and analyze all engineered systems—and many natural systems, specifically excluding interpersonal relationships—by applying this paradigm carefully and repetitively.
Part of the magic is that the function the black box contains can be arbitrarily complex. It can, in fact, be composed of multiple other functions. And, luckily for us, we can analyze these compound functions just as we analyze their mathematical counterparts. As part of an audio signal processing chain, a black box can be as simple as a low-pass filter. As part of a communications network, it can be a complicated set of thousands of processors, each with its own local network.
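In code, a black box is just a function, and building bigger boxes is function composition. A toy sketch in Python — the audio chain here is invented for illustration, not taken from the column:

```python
def compose(*boxes):
    """Chain black boxes so the output of one feeds the next."""
    def composite(x):
        for box in boxes:
            x = box(x)
        return x
    return composite

def low_pass(alpha):
    """A simple first-order low-pass filter as a stateful black box."""
    state = {"y": 0.0}
    def box(x):
        state["y"] += alpha * (x - state["y"])  # smooth the input
        return state["y"]
    return box

def gain(k):
    """An amplifier stage: output is k times the input."""
    return lambda x: k * x

# The compound box can be analyzed and used like any single box.
audio_chain = compose(low_pass(0.2), gain(2.0))
print([round(audio_chain(s), 3) for s in [0.0, 1.0, 1.0, 1.0]])
```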
MARVELS OF COMPLEXITY

Modern microprocessors are marvels of complexity. Way back when, the Intel 4004 had only 2,300 transistors, a number that is not too large for smart humans to keep in their heads. Engineers knew what each transistor did and why it had been placed where it was on the die. The bad news was that they had to know; there were no CAD tools back then to help keep track of them all.
But even then, the black box functional decomposition paradigm was essential. At one level of abstraction, a designer could ask whether the drive current from transistor number 451 was sufficient to meet signaling requirements to transistors 517 and 669. If it was, the designer would conceptually leave the transistor level and take the mental elevator that went to the next floor up: logic.
At the logic level, the black boxes had labels like NAND and XOR. The designer’s objective at this level was to make sure that the functions selected correctly expressed the design intent from the level above: Should this particular box be a NAND or an AND? There were also subfloors. It’s not only possible, it’s also a very good idea to aggregate sets of boxes to form more abstract boxes. A set of D flip-flops is routinely aggregated into registers in synchronous designs, for example.
Next floor up: the microarchitecture. At this level, the boxes had names like register file, ALU, and bus interface. The designer considered things like bandwidths, queuing depths, and throughput without regard for the gates underlying these functions or the actual flow of electrical currents that was such a concern only a few floors below.
For hardware engineers, there was one more floor: the instruction set architecture. Most computer engineers never design an ISA during their careers—such is the commercial importance of object code compatibility. For decades now, the prevailing theory has been that to incentivize a buyer to suffer the pain of mass code conversion or obsolescence, any new computational engine that cannot run old code, unchanged, must be at least N times faster than anything else available. The trouble with this theory is that it has never been proven to work. At various times in the past 30 years, N has arguably reached as high as 5 or 10 (at equivalent economics) without having been found to be compelling. The x86 architecture is still king.
But the latest contender in the ring is IBM’s Cell, introduced in February at ISSCC 05. Touted as having impressive computational horsepower, Cell is aimed initially at gaming platforms that may not be as sensitive to the compatibility burden. Stay tuned—this new battle should play out over the next three years. Maybe computer engineers will get to play out in the sunshine of the top floor after all.
SOFTWARE FOLKS DO IT TOO

The ability to abstract complex things is vital to all of engineering. As with the 4004’s transistors, without this ability, engineers would have to mentally retain entire production designs. But the designs have become so complicated that it has been about 25 years since I last saw a designer who could do that. Requiring designers to keep such complex designs in their heads would limit what is achievable, and doing so isn’t necessary as long as we wield our black-box abstractions properly.
In the early days of P6 development at Intel, I found it amusing to try to identify various engineers’ backgrounds by the way they thought and argued during meetings. My observations went through several phases. I was intrigued to observe that a group of 10 engineers sitting around a conference room table invariably had a subtle but apparent common mode: They all used the black-box abstraction implicitly and exclusively, as naturally as they used arithmetic or consumed diet Coke.
Although these engineers came from different engineering schools, and their degrees ranged from a BS to an MS or a PhD, they implicitly accepted that any discussion would occur in one of two ways—either at one horizontal abstraction layer of the design or explicitly across two or more layers. It was generally quite easy to infer which of those two modes was in play, and all 10 engineers had no difficulty following mode changes as the conversation evolved.
When thinking about this (and yes, I probably should have been paying attention to the technical discussion instead of daydreaming), it occurred to me that the first two years of my undergraduate EE training had sometimes seemed like a military boot camp. In fact, it was a boot camp. With the exception of social sciences, humanities, history, and phys. ed., all of our classes were done in exactly this way. I don’t know if we became EEs because we gravitated toward the academic disciplines that seemed most natural to us, or if we just learned to think this way as a by-product of our training. Maybe we just recognized a great paradigm when we saw it and did the obvious by adopting it.
Microprocessor design teams also have engineers with computer science backgrounds, who may not have gone through an equivalent boot camp. I tried to see if I could spot any of them by watching for less adroitness in following implicit abstraction-layer changes in meetings. I thought I saw a few instances of this, but there’s a countervailing effect: CS majors live and breathe abstraction layers, presumably by dint of their heavy exposure to programming languages that demand this skill.
When I began pondering the effect of black-box function-style thinking and programming language abstractions to see if that might distinguish between CS- and EE-trained engineers, I did see a difference. Good hardware engineers have a visceral sense of standing on the ground at all times. They know that in the end, their design will succeed or fail based on how well they have anticipated nature itself: electrons with the same charge they have carried since the birth of the universe, moving at the same speed they always have, obeying physical laws that govern electronic and magnetic interactions along wires, and at all times constrained by thermodynamics.
Even though EEs may spend 95 percent of their time in front of a computer putting CAD tools through their paces (and most of the other 5 percent swearing at those same tools), they have an immovable, unforgettable point of contact with ultimate reality in the back of their minds. Most of the decisions they make can be at least partially evaluated by how they square against natural constraints.
CONSTRAINTS ARE GOOD FOR YOU

You might think such fixed constraints would make design more difficult. Indeed, if you were to interview a design engineer in the middle of a tough morning of wrestling with intransigent design problems, she might well express a desire to throw a constraint or two out the window. Depending on the particular morning, she might even consider jumping out after them. In general, though, constraints and boundaries are a good thing—they focus the mind. I’ve come to believe that hardware engineers benefit tremendously from their requisite close ties to nature’s own rules.
On the other hand, the CS folks are generally big believers in specifications and writing down the rules by which various modules (black boxes) interact. They have to be—these “rules” are made up. They could be anything. Assumptions are not just subtly dangerous here, they simply won’t work—the possibility space is too large.
It’s not that every choice a hardware engineer makes is directly governed by nature and thus unambiguous. What functions go where and how they communicate at a protocol level are examples of choices made in a reasonably large space, and there a CS grad’s proclivity to document is extremely valuable.
To be sure, some programmers face natural constraints just as real as any the hardware designers see. Real-time code and anything that humans can perceive—video and audio, for example—impose the same kinds of immovable constraints that a die size limit does for a hardware engineer.
I’m not looking for black and white—I’m just wondering if there are shades of gray between EE and CS. My attempt to discern differences between EE and CS grads was simply intended to see if the two camps were distinguishable “in the wild”—to see if that might lead to any useful insights.
Computer science is not generally taught relative to natural laws, other than math itself, which is arguably a special case. I don’t know if it should be, or even can be, and it’s not my intention to pass a value judgment here. The CS folks, it seems to me, tend to be very comfortable in a universe bounded only by conventions that they (or programmers like them) have erected in the first place: language restrictions, OS facilities, application architectures, and programming interfaces. The closest they generally come to putting one foot down on the ground is when they consider how their software would run on the hardware they are designing—and that interface is, at least to some extent, negotiable with the EE denizens on the top floor. Absolutes, in the nonnegotiable natural-law sense of what EEs deal with, are unusual to them.
The best engineers I have worked with were equally comfortable with hardware and software, regardless of their educational backgrounds. They had somehow achieved a deep enough understanding of both fields that they could sense and adjust to whatever world view was currently in play at a meeting, without giving up the best attributes of the alternative view. There is a certain intellectual thrill when you finally break through to a new understanding of something, be it physics or engineering or math—or poetry analysis, for that matter. I always felt that same thrill when I saw someone blithely displaying this kind of intellectual virtuosity.
BOTTOMS UP AND TOPS DOWN

The engineers I know who routinely do this intellectual magic somehow arrived at their profound level of understanding via the random walk of their experiences and education, combined with extraordinary innate intelligence. Can we teach it? Yale Patt and Sanjay Patel think so. It’s a basic tenet of their book, Introduction to Computing Systems (McGraw-Hill, 2004). On the inside cover, no less a luminary than Donald Knuth says, “People who are more than casually interested in computers should have at least some idea of what the underlying hardware is like. Otherwise, the programs they write will be pretty weird.”
Conversely, people who design computers without a good idea of how programs are written, what makes them easy or hard, and what makes them fail, will in all likelihood conjure up a useless design. I once heard a compiler expert opine that there’s a special place in the netherworld for computer designers who create a machine before they know if a compiler can be written for it.
One other data point I’m sure of: Me. I had taken an OS course and several programming language courses and did well at them, but I didn’t understand what computer architecture really meant until I had to write assembly code for a PDP-11. My program had to read the front panel switches, do a computation on them, and display the results on the front panel lights. My first program didn’t work reliably, and I spent hours staring at the code, line by line, trying to identify the conceptual bug. I couldn’t find it. I finally went back to the lab and stared instead at the machine. Eureka! It suddenly occurred to me that the assignment hadn’t actually stated that the switches were debounced, and the PDP-11 documentation didn’t say that either. I had simply assumed it.
Mechanical switches are constructed so that flipping the switch causes an internal metal plate to quickly move from one position to a new one where it now physically touches a stationary metal plate. Upon hitting the stationary plate, the moving metal repeatedly bounces up and down until it eventually settles and touches permanently. Even at the glacial clock rates of the 1970s, the CPU had plenty of time to sample a switch’s electrical state during the bounces. Debouncing them in software was just a matter of inserting a delay loop between switch transition detection and logical state identification (a short sketch of the idea appears below). Without an understanding of both the hardware and the software, I’d still be sitting in front of that PDP-11, metaphorically speaking.
There are always tradeoffs. The horizontally stratified way we teach computer systems today makes it difficult for students to see how ideas at one level map onto problems at another.
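A minimal sketch of that software debounce in Python — the chattering-switch model and the settle threshold are invented stand-ins for the PDP-11 front panel, not details from the column:

```python
def bouncing_switch(final_state):
    """Model raw samples from a flipped mechanical switch: the contacts
    chatter between 0 and 1 for a few samples before settling."""
    for sample in (0, 1, 0, 1, 0, 1):  # contact bounce
        yield sample
    while True:                        # contacts have settled
        yield final_state

def debounced_read(samples, settle_count=5):
    """Software debounce: accept a reading only after it has stayed
    stable for settle_count consecutive samples (the 'delay loop')."""
    last = next(samples)
    stable = 1
    for sample in samples:
        stable = stable + 1 if sample == last else 1
        last = sample
        if stable >= settle_count:
            return last

print(debounced_read(bouncing_switch(final_state=1)))  # prints 1
```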
EVEN GOOD ABSTRACTIONS CAN HURT

If you really want to snow a student under, try teaching computer system design from application to OS to logic to circuits to silicon physics as a series of vertical slices. In some ways, I think this problem was fundamental to Intel’s failed 432 chips from the early 1980s—they were “capability-based” object-oriented systems in which one global, overriding paradigm was present. The system was designed from one point of view, and to understand it you had to adopt that point of view. To wit: Everything—and I do mean everything—was an object in those systems. In some ways, it was the ultimate attempt to systematically apply the black-box paradigm to an entire computer system.
An object in a 432 system was an abstract entity with intrinsic capabilities and extrinsic features. Every object was protected by default against unauthorized access. If one object (your program, say) wanted access to another (a database, perhaps), your program object had to first prove its bona fides, which hardware would check at runtime. At a software level, this kind of system had been experimented with before, and it does have many appealing features, especially in today’s world of runaway viruses, Trojans, worms and spam. But the 432 went a step further and made even the hardware an object. This meant that the OS could directly look up the CPU’s features as just another object, and it could manipulate that object in exactly the same way as a software object.
This was a powerful way of viewing a computing system, but it ran directly contrary to how computer systems are taught. It made the 432 system incomprehensible to most people at first glance. There would be no second glance: Various design errors and a poor match between its Ada compiler and the microarchitecture made the system almost unusably slow. The 432 passed into history rather quickly.
If the design errors had been avoided, would the 432 have taken hold in the design community? All things considered, I don’t think so: It had the wrong target in the first place. The 432 was intended to address a perceived looming software production gap. The common prediction of the late 1970s was that software was too hard to produce, it would essentially stop the industry in its tracks, and whatever hardware changes were needed to address that gap were therefore justified. With a few decades of hindsight, we can now see that the industry simply careened onward and somehow never quite fell into this feared abyss. Perhaps we all just lowered our expectations of quality to “fix” the software gap. Or maybe Bell Labs’ gambit of seeding universities in the 1970s with C and Unix paid off with enough programmers in the 1980s. Whatever the reason, the pool of people ready to dive into Ada and the 432’s new mindset was too small.
New paradigms are important. Our world views make it possible for us to be effective in an industry or academic environment, but they also place blinders on us. In the end, I concluded that it wasn’t a matter of identifying which world view is best—EE or CS. The best thing to do is to realize that both have important observations and intuitions to offer and to make sure the differences are valued and not derided. Society at large should go and do likewise. ■
Bob Colwell was Intel’s chief IA32 architect through the Pentium II, III, and 4 microprocessors. He is now an independent consultant. Contact him at [email protected].
The 30th IEEE Conference on Local Computer Networks (LCN)
Sydney, Australia – November 15-17, 2005
Call for Papers
http://www.ieeelcn.org
The IEEE LCN conference is one of the premier conferences on the leading edge of practical computer networking. LCN is a highly interactive conference that enables an effective interchange of results and ideas among researchers, users, and product developers. We are targeting embedded networks, wireless networks, ubiquitous computing, heterogeneous networks and security as well as management aspects surrounding them. We encourage you to submit original papers describing research results or practical solutions. Paper topics include, but are not limited to:
• Embedded networks • Wearable networks • Wireless networks
• Mobility management • Networks to the home • High-speed networks
• Optical networks • Ubiquitous computing • Quality-of-Service
• Network security/reliability • Adaptive applications • Overlay networks
Authors are invited to submit full or short papers for presentation at the conference. Full papers (maximum of 8 camera-ready pages) should present novel perspectives within the general scope of the conference. Short papers are an opportunity to present preliminary or interim results and are limited to 2 camera-ready pages in length. All papers must include title, complete contact information for all authors, abstract, and a maximum of 5 keywords on the cover page. Papers must be submitted electronically. Manuscript submission instructions are available at the LCN web page at http://www.ieeelcn.org. Paper submission deadline is May 10, 2005 and notification of acceptance is July 28, 2005.
General Chair: Burkhard Stiller, University of Zürich and ETH Zurich, Switzerland
[email protected]
Program Chair: Hossam Hassanein, Queen’s University, Canada
[email protected]
Program Co-Chair: Marcel Waldvogel, University of Konstanz, Germany
[email protected]
32 & 16 YEARS AGO
MARCH 1973

GENE AMDAHL (p. 39). “‘The large computer market is the market that is being addressed most poorly by any of the competition today. It is also the most difficult market to address, and requires the most skill, technological knowhow, and financial backing. Because this is so, if we can meet these challenges properly, we would reasonably expect to have considerably less “transient” competition.’”
“So the Amdahl Corporation seems to have a comfortable backlog, adequate financing, and the considerable talents and reputation of their president. What they don’t have is a detailed product description and that all-important track record of successful delivery, installation, operation, and support. And a great deal hinges on Gene Amdahl’s judgment that IBM’s flank really is exposed.”

THE FLEXIBLE DISKETTE (p. 43). “A versatile system for entering information into a computer—with a dramatically different look in data storage media—has been announced by International Business Machines Corporation.
“The IBM 3740 data entry system incorporates a flexible disk cartridge for capturing data called the IBM Diskette. Weighing just over an ounce, the flexible diskette resembles a small phonograph record, yet can store as many as 242,000 characters—equivalent to a box and a half of 80-column cards.”
“The IBM 3540 diskette input/output unit, also announced, can be attached to an IBM System/370, permitting data recorded on diskettes to be entered directly into the computer. This high speed unit can hold up to 20 diskettes at a time and read more than 3,000 data records per minute into a System/370. The 3540 also has the capability to receive information from the computer and record it on a diskette at more than 2,000 records per minute.”

CALCULATOR (p. 44). “A powerful electronic calculator, small enough to fit into a shirt pocket yet capable of performing the most complex business and financial calculations, was announced recently by Hewlett-Packard Company.
“The new HP-80 differs from the HP-35 (Hewlett-Packard’s original pocket-sized scientific calculator) in its built-in programming. The HP-35 solves functions with a single keystroke; the HP-80 solves equations with a single keystroke. Typical of the functions solved by the HP-35 with one keystroke are: log, ln, sin, cos, tan and xy. Some of these functions are hard-wired into the HP-80 as subroutines within the single keystroke programs. In other words, the HP-35 has one level of programming, while the HP-80 has two levels.”

INTEL 8008 SIMULATOR (p. 45). “Intel Corporation has introduced a Fortran IV program for simulating the operation of Intel’s 8008 computer-on-a-chip, a complete 8-bit CPU packaged in an 18-pin DIP.
“The program, designated INTERP/8, is available from Intel on magnetic tape. It is also available under time-share arrangements with General Electric Timeshare, Tymshare Corporation and Applied Logic Corporation.”
“The addition of this simulator program completes a comprehensive set of hardware and software support to assist development of Intel’s MCS-8 micro computer systems. Support now includes prototyping system, PROM programmer, hardware assembler, Fortran IV assembler, Fortran IV simulator, several control programs and a system interface and control module.”

SIMULATION COMPUTER (p. 45). “A new British simulation computer which is programmed and used in a similar way to an analog computer offers digital accuracy, reliability and repeatability.
“Designed to replace conventional analog and hybrid equipment with an all-digital system, the Membrain MBD24 consists of a number of separate digital computing modules which are interconnected by means of a patchboard. Each unit is addressable from a keyboard to enable the setting of problem parameters such as gains, initial conditions, time-scale, non-linear functions and timers. Data is transmitted and received simultaneously by all units, the output of each unit being a 24-bit serial number which is updated once every 100 micro-seconds.”
“Compared with an analog computer, programming and patching a problem is claimed to be easier and to take less time. Typically, less than half the number of operational elements and patch cords are needed.”

MULTICS SYSTEM (p. 46). “Honeywell Inc. has introduced to commercial markets what it calls the most advanced, sophisticated computer system available in the world.
“The system, known as Multics (Multiplexed Information and Computing Service) derives from a system that evolved through more than seven years of joint effort with the Massachusetts Institute of Technology. It is designed to operate as a general-purpose system serving a large community of users and their diverse needs.”
“According to a Honeywell spokesman, Multics is the most powerful virtual memory system yet available. The Multics hardware and software, ring protection features, and paging and segmentation techniques provide ‘close to ideal’ on-line system characteristics for interactive problem solving.”

TALKING COMPUTER (p. 47). “Over 5,000 blind people in the Boston area have a new friend in a talking computer system that allows them to type letter-perfect correspondence, proofread manuscripts, calculate bookkeeping problems, and write computer programs.
“The first of these systems, known as an Audio-Response-Time-Shared (ARTS) Service Bureau, is operating at the Protestant Guild for the Blind in Watertown, Mass. It is built around a Data General Corporation Nova 800 minicomputer.
“A blind person telephones the Bureau from his office, home or school and transmits information to the computer via the telephone line by using a console resembling a standard typewriter. The talking computer responds to the typist in words and sentences telling him precisely what he has typed or giving him the results of indicated commands or computations.”

CLASSROOM FEEDBACK (p. 47). “An $80,000 electronic student response system, designed to increase the efficiency of student-teacher communication, is now in operation at the University of Southern California School of Medicine.
“The system, recently installed in the Louis B. Mayer Medical Teaching Center, allows individual student participation and response which would otherwise be impossible in the large classroom environment of the 500-seat auditorium.
“As questions are presented by the instructor, a push-button device on the arm of 265 seats allows the students to pick one of five possible answers. The device immediately indicates to the student whether he is right or wrong, and indicates to the instructor the percentage of the class responding, and the percentage correct or incorrect for each possible answer.”
MARCH 1989

GEOMETRIC COMPUTATION (p. 31). “Despite great advances in geometric and solid modelling, practical implementation of the various geometric operations remains error-prone, and the goal of implementing correct and robust systems for carrying out geometric computation remains elusive.”
“… the problem is variously characterized as a matter of achieving sufficient numerical precision, as a fundamental difficulty in dealing with interacting numeric and symbolic data, or as a problem of avoiding degenerate positions.”
“In fact, these issues are interrelated and are rooted in the problem that objects conceptually belonging to a continuous domain are analyzed by algorithms doing discrete computation, treating a very large discrete domain—for instance, the set of all representable floating-point numbers—as if it were a continuous domain.”

ROBOTIC EXCEPTION HANDLING (p. 43). “A robot program can be logically correct yet fail under abnormal conditions. A major goal of robotics research is to construct robust and reliable robot systems able to handle errors arising from abnormal operating conditions. Consequently, error handling and recovery is becoming increasingly important as researchers strive to construct reliable, autonomous robot systems for factory, space, underwater, and hazardous environments.”
SECURE DATABASES (p. 63). “A multilevel secure database management system is a system that is secure when shared by users from more than one clearance level and contains data of more than one sensitivity level. MLS/DBMSs evolved from multilevel secure computing systems. Present-day DBMSs are not built with adequate controls and mechanisms to enforce a multilevel security policy. Thus, an MLS/DBMS is different from a conventional DBMS in at least the following ways:
“(1) Every data item controlled by an MLS/DBMS is classified in one of several sensitivity levels that may need to change with time.
“(2) Access to data must be controlled on the basis of each user’s authorization to data at each sensitivity level.”

32-BIT EISA CONNECTOR (p. 72). “All key aspects of the Extended Industry Standard Architecture specification—electrical, mechanical, and system configuration details—have been incorporated and distributed to participating developer companies ….”
“The specification now includes the finalization of mechanical details for the EISA 32-bit connector. The new connector will reputedly allow high-performance 32-bit expansion boards to be installed in PCs utilizing EISA when they become available later this year.”

MICROCODE COPYRIGHT (p. 78). “Microcode is a computer program and therefore protected under copyright laws, US District Court Judge William F. Gray ruled February 7. The ruling came at the conclusion of a 4 1/2-year court battle in which Intel claimed that NEC’s V-series microprocessors violated the copyright on Intel’s 8086/88 microcode.
“Although he decided that microcode is protected, Gray ruled in NEC’s favor in the main dispute, finding that Intel forfeited its copyright by allowing copies of the 8086/88 chip to be distributed without copyright notice.”

SUPERMINICOMPUTER (p. 91). “Wang Laboratories claims that it has optimized its new superminicomputers, the VS 10000 Series, for high-volume computing by incorporating a new disk subsystem and system management software. The new models … are reportedly based on emitter-coupled logic technology with custom gate arrays, VLSI microprocessors, and the mainframe VS instruction set.
“The VS 10000 systems use a 90-MHz clock rate and an I/O bus capacity of 30.3 Mbytes per second.”
“Other features include 32 Kbytes of write-back cache memory in the CPU, up to 64 Mbytes of addressable memory with physical accommodations for up to 256 Mbytes, a 128-bit-wide memory bus that supports 128-bit read and 64-bit write operations, an independent 80286-based support control unit, and up to 15 intelligent I/O controllers.”

Editor: Neville Holmes; [email protected].
INDUSTRY TRENDS
Search Engines Tackle the Desktop
Bernard Cole
As PC hard drives get bigger and new information sources become available, users will have much more data of different types, including multimedia, on their computers. This makes it increasingly difficult to find documents, e-mail messages, spreadsheets, audio clips, and other files. Current desktop-based search capabilities, such as those in Windows, are inadequate to meet this challenge.
In response, major Web search providers and other companies are offering engines for searching PC hard drives. This requires new search approaches because desktop-based documents are generally structured differently than those on the Web.
A number of smaller vendors such as Accona Industrier, Autonomy, Blinkx, Copernic Technologies, dTSearch, and X1 Technologies are upgrading or providing free basic versions of their existing desktop search engines. Google has introduced a free beta version of an integrated desktop and Web search engine. Search providers Ask Jeeves, HotBot, Lycos, Microsoft, and Yahoo, as well as major Internet service providers such as AOL and Earthlink, are developing similar technologies.
One important factor in the competition is the desire by some Web search providers to use desktop search as a way to convince people to always or at least regularly use their portals. This would create a large user base that could encourage businesses to either advertise on the portals or buy other services.
In addition, some desktop search providers may want to generate revenue by charging businesses for sending their advertisements, targeted to user queries, along with responses. Such advertising has generated considerable revenue for Web search providers.
A user could work with several desktop search engines, said Larry Grothaus, lead product manager for Microsoft’s MSN Desktop search products. “But practically speaking, the average consumer will stick with the most attractive, easy-to-use, and familiar alternative.”
Some desktop search approaches present security and privacy problems. Nonetheless, search providers are pushing ahead and adding usability features to attract users.
DESKTOP SEARCH CHALLENGES

Desktop search features built into current operating systems, e-mail programs, and other applications have far fewer capabilities than Web search engines. They generally offer only simple keyword searches of a set of files, usually of a single file type.
On the Web, search engines can exploit information organized into a common HTML format with stan-
dardized ways of identifying various document elements. The engines can use this information, along with links to other documents, to make statistical guesses that increase the likelihood of returning relevant results. The desktop is more complicated to search because Microsoft Word and other applications format different types of documents in various ways. In addition, desktop files can be either structured or unstructured. The function and meaning of structured files—such as information in a relational database or a text document with embedded tags—are clearly reflected in their structure. The easily identified structure makes searching such files easier. This is not the case with unstructured information, which includes natural-language documents, unformatted text files, speech, audio, images, and video. Therefore, desktop search engines must add capabilities in different ways than Web search applications. The Boolean AND, OR, and NOT mechanisms and keyword-indexing algorithms by which searches are conducted on the desktop are similar to those used for years on the Web, said Daniel Burns, CEO of X1. However, desktop search engines face the additional challenge of recognizing which of the many file types it is dealing with. The engines also must derive whatever metadata authors have chosen to include in e-mail notes, database files, and other document types. While conducting searches, desktop engines must be efficient and avoid imposing a substantial processing or memory load on the computer. “A Web search service can set aside entire server farms to do only searches, while the desktop search engine has to be as efficient as possible within the constraints of the user’s computing resources,” explained Susan Feldman, search-engine market analyst at IDC, a market research firm. To gain these desktop search capabilities, some Web search vendors have either acquired or licensed desktop-
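The Boolean machinery Burns mentions maps directly onto set operations over an inverted index. A minimal sketch in Python — the file names and contents are invented for illustration:

```python
# Toy corpus standing in for files on a desktop.
files = {
    "trip.doc": "flight hotel budget for the march trip",
    "budget.xls": "quarterly budget spreadsheet",
    "notes.txt": "meeting notes about the budget and the flight",
}

# Inverted index: map each keyword to the set of files containing it.
index = {}
for name, text in files.items():
    for word in set(text.split()):
        index.setdefault(word, set()).add(name)

def posting(word):
    """Return the set of files containing a word (empty if none)."""
    return index.get(word, set())

# Boolean query: budget AND flight, NOT quarterly.
# AND is set intersection, OR would be union (|), NOT is difference.
hits = (posting("budget") & posting("flight")) - posting("quarterly")
print(sorted(hits))  # ['notes.txt', 'trip.doc']
```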
based technology, noted Nelson Mattos, distinguished engineer and director of information integration at IBM. For example, Momma.com bought part of Copernic, Ask Jeeves purchased Tokaroo, and AOL and Yahoo have licensed X1’s technology.
DESKTOP SEARCH METHODOLOGIES Desktop search engines employ one or more file crawler programs—similar to those used by Web search engines— that, upon installation, move through disk drives. As Figure 1 shows, the crawlers use an indexer to create an index of files; their location on a hard drive’s hierarchical tree file structure; file names, types, and extensions (such as .doc or .jpg); and keywords. Once existing files are indexed, the crawler indexes new documents in real time. During searches, the engine matches queries to indexed items to find relevant files faster. The crawlers also collect metadata, which lets the engine access files more intelligently by providing additional search parameters, according to X1’s Burns. Several desktop search engines are integrated with the providers’ Web engines and simultaneously run both types of searches on queries. These providers are putting considerable effort into desktop feature sets and interfaces that will be as familiar and easy to use as their Web-based counterparts, said IDC’s Feldman.
SEARCH WARS

Because they want to reach the broadest range of users, all Web search providers entering the desktop arena work only with the market-leading Windows and Internet Explorer platforms, explained Ray Wagner, search-engine analyst at Gartner, a market research firm. Some providers that offer only desktop search engines have versions for other operating systems and browsers.

Much of the industry's attention is focused on three major companies: Google, Microsoft, and Yahoo.
Figure 1. A typical desktop search engine includes an indexer application that crawls existing and new stored files and extracts information on keywords, metadata, size, and location on the hard drive. This information is kept in an index file. Some systems use multiple indexes and indexers to keep index files from getting too large to work with efficiently. When a user fills out a search form and sends a query, the engine searches the index, identifies the appropriate files, finds their locations on the hard drive, and displays the results.
Google Desktop Search

Google was the first major Web search company to release a desktop search beta application (http://desktop.google.com), a free, simple, lightweight (400-Kbyte) plug-in.

The Google Desktop Search beta is configured as a local proxy server that stands in for the Web search engine. It performs desktop searches only via Internet Explorer. By default, GDS returns desktop and Web search results together, but users can configure it to return them separately. The GDS beta does not let users search a specific field within a file, such as e-mail messages' "To" and "From" fields.

Google expects to release a commercial GDS version this year. The company's search-related business model relies on revenue generated from real-time advertisements selected to match query terms and search results. With the Web and desktop search engines operating in tandem, the latter maintains a link with the former, which connects to a server responsible for providing advertising that relates to search terms.
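The proxy arrangement can also be sketched. The toy server below, with an invented local index and a stubbed Web back end, merely illustrates the idea of a local intermediary that answers search queries by merging desktop and Web results; GDS's actual proxy is far more elaborate and is not documented here.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

LOCAL_INDEX = {  # hypothetical desktop index: keyword -> local hits
    "budget": ["C:/docs/budget2005.xls", "mail://inbox/42"],
}

def web_results(query):
    # Stub: a real engine would forward the query to its Web search service.
    return ["http://www.example.com/search?q=%s&hit=%d" % (query, i) for i in (1, 2)]

class SearchProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query).get("q", [""])[0]
        merged = {"desktop": LOCAL_INDEX.get(query.lower(), []),
                  "web": web_results(query)}
        body = json.dumps(merged).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)   # combined results, as GDS returns by default

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8888), SearchProxy).serve_forever()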
GDS tracks and fully indexes Outlook and Outlook Express e-mail messages; AOL instant messages; the Internet Explorer history log; and Microsoft Word, Excel, and PowerPoint documents. Currently it does not index PDF files. And for nondocument files such as those with images, video, and audio, GDS indexes only file names. Reflecting GDS’s use of a Web server as the main mechanism for coordinating desktop and Web searches, the search engine indexes URLs for Web pages saved to the Internet Explorer favorites or history list, noted Nikhil Bhatla, product manager for desktop search at Google. GDS uses a single crawler that indexes all file types.
MSN desktop search

MSN's 400-Kbyte desktop search application, part of the MSN Toolbar Suite (http://beta.toolbar.msn.com), is closely integrated with Windows. When the search utility is available commercially, slated for later this year, users will see it as part of the MSN
Deskbar, noted Grothaus. The Deskbar, which appears on the Taskbar when Windows boots, contains buttons for direct access to MSN services. The engine also appears as MSN search bars within Outlook, Windows Explorer, and Internet Explorer.

Unlike Google's tool, MSN's application doesn't search local files and the Web at the same time. However, the MSN tool can index and search files on network-based drives, which Google's and Yahoo's engines don't. Grothaus said Microsoft doesn't plan to display advertisements along with the results of desktop searches.

The Deskbar tool enables searches for any supported file type—Outlook and Outlook Express e-mail; Microsoft Office's Word, Excel, PowerPoint, Calendar, Task, and Notes files; plain-text and PDF documents; MSN Messenger conversation logs; HTML file names; and many types of media files. By default, the Outlook-based toolbar searches only Outlook and Outlook Express e-mail files, and the Internet Explorer-based toolbar enables searches only of HTML and e-mail files. The Windows Explorer toolbar allows keyword searches of all drives and maintains a history of previous searches.

The MSN desktop search engine uses separate file crawlers, each coded to search only for video or documents or any other supported file type, according to Grothaus. On the desktop, he explained, it's important not to use more computing resources than necessary. MSN has tailored each desktop crawler to perform only the work necessary to do its job.
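That per-type-crawler design can be expressed compactly. In the sketch below (the class names and type lists are hypothetical, not Microsoft's), a dispatcher hands each file to the one crawler built for its type, so no crawler does more work than its job requires.

import os

class DocumentCrawler:
    extensions = (".doc", ".txt", ".pdf")
    def extract(self, path):
        return {"path": path, "kind": "document"}  # would also parse text content

class MediaCrawler:
    extensions = (".mp3", ".avi", ".jpg")
    def extract(self, path):
        return {"path": path, "kind": "media"}     # file names and tags only

CRAWLERS = [DocumentCrawler(), MediaCrawler()]

def dispatch(root):
    """Route every file to the single crawler coded for its type."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            ext = os.path.splitext(name)[1].lower()
            for crawler in CRAWLERS:
                if ext in crawler.extensions:
                    yield crawler.extract(os.path.join(dirpath, name))
                    break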
Yahoo Desktop Search

The Yahoo Desktop Search beta (http://desktop.yahoo.com) is a standalone application that runs on Windows. Designed to look and feel like the Yahoo Web search engine, the YDS beta is built on X1's commercial tool. For the upcoming commercial version, Yahoo says, it intends to create additional customized features and
layer them on top of the X1 technology it licensed. Unlike some other desktop engines, YDS also searches compressed ZIP files and Adobe PDF, Illustrator, and Photoshop files. Users can find and play audio and video files without launching a separate media player. YDS can search only for Outlook and Outlook Express e-mails, unlike X1's engine, which also handles Eudora and Mozilla/Netscape mail.
A YDS convenience that neither GDS nor the MSN desktop search tool offers is the ability to preview files before opening them.

Yahoo's tool searches HTML pages that users download from the Web and those they create locally. However, Yahoo says, YDS doesn't index Internet Explorer history or favorites files or the browser's hard-drive-based cache memory, to keep others from accessing Web files that previous users have viewed. Users can control and change settings to index only specific files, specific file types, or files smaller than a given size.

In the future, Yahoo says, it hopes to make the desktop search tool particularly useful by tying it to the company's portal offerings, including its e-mail, calendar, photo, music, and chat services.
SECURITY AND PRIVACY ISSUES

Integrating desktop and Web search capabilities into the same application presents security and privacy challenges.
Security

Integrated search engines use a local proxy-server program on the desktop to coordinate the delivery of real-time targeted advertising from Web servers for placement along with search results. This could open a security hole in
the connection between the PC and the Web, according to Daniel Wallach, Rice University assistant professor of computer science. "The more tightly the two are coupled," he said, "the more likely there are to be holes that hackers can breach."

For example, the local proxy server can let hackers use Java- or JavaScript-based man-in-the-middle attacks that redirect desktop results intended only for the user to an unauthorized location over the Internet, according to Wallach. Also, hackers in some cases could insert an applet to open a control channel within the proxy server, letting them issue queries to obtain private information. Providers are taking steps to block these attacks.

Some desktop search engines' use of the browser cache to look for previously viewed Web pages could lead to other security breaches. "Access to the browser cache through the integrated search interface is an extraordinary lure to potential hackers," said Richard Smith, Internet security analyst at ComputerBytesMan.com. Blinkx's desktop search engine prevents this by encrypting the cache, as well as communications between server and client.
Privacy

Some integrated search tools make stored personal files, including e-mail and AOL chat logs, viewable on the Web browser, which could prove embarrassing if someone else has access to the computer. And some tools also allow searches of recently viewed Web sites, a feature that has raised privacy concerns, particularly for users of shared PCs.

Microsoft's desktop tool doesn't index or allow searches of recently viewed Web sites, although it hasn't eliminated the possibility of doing so in the future, Grothaus said. YDS doesn't index the browser cache or the browser history or favorites files.

Also, Microsoft's tool searches for
information based on each user who logs in. If one person uses a computer for personal banking, the next person logging into that machine can’t access the sensitive data, Grothaus said.
According to Gartner's Wagner, the deciding factors in the marketplace competition between desktop search engines "will be the unique usability features they bring to the game and how well they deal with a number of perceived, rather than actual, security and privacy issues that have emerged."

However, said IBM's Mattos, search engine technology on the Web and the desktop needs radical changes to become truly useful. "On the Web, when a user puts in a sequence of keywords, even with advanced keyword search capabilities, he is liable to get a page telling him there are a million files that match the requirements," he said. "Searches on the desktop are not much better. They yield several hundred or several thousand. What is needed is something more fine-grained and able to pinpoint more exactly what you are looking for."

The goal of a desktop search is different from that of a Web search. On the Web, you are looking for information, not necessarily a specific document, explained X1's Burns. "On the desktop," he said, "you know that what you are searching for is there. You don't want to wade through pages and pages of possibilities to find it. You want it now—not several possibilities, but the right file."

Many industry observers are thus waiting to see the new XML-based WinFS file system (http://msdn.microsoft.com/data/winfs) that Microsoft plans to incorporate in future Windows versions. The company originally anticipated including WinFS in its upcoming Longhorn version of Windows but apparently won't be able to do so.

According to Blinkx cofounder Suranga Chandratillake, moving to an XML-based structure is difficult and won't occur for years. The Web and
local storage are growing rapidly, and most of the growing number of data types they contain work with traditional file structures, he explained. Imposing a new file structure on all this data is impractical, he said.

He concluded, "The alternative that I favor, and that offers the only hope of keeping up with the growth and the increasing diversity of information on both the desktop and the Web, is wrestling with data, finding clever ways to add metadata, and discovering better search mechanisms that work within the file structures with which we are already familiar." ■

Bernard Cole is a freelance technology writer based in Flagstaff, Arizona. Contact him at
[email protected].
Editor: Lee Garber, Computer,
[email protected]
TECHNOLOGY NEWS
Is It Time for Clockless Chips?

David Geer
Vendors are revisiting an old concept—the clockless chip—as they look for new processor approaches to work with the growing number of cellular phones, PDAs, and other high-performance, battery-powered devices.

Clockless processors, also called asynchronous or self-timed, don't use the oscillating crystal that serves as the regularly "ticking" clock that paces the work done by traditional synchronous processors. Rather than waiting for a clock tick, clockless-chip elements hand off the results of their work as soon as they are finished.

Recent breakthroughs have boosted clockless chips' performance, removing an important obstacle to their wider use. In addition to their efficient power use, a major advantage of clockless chips is the low electromagnetic interference (EMI) they generate. Both of these factors have increased the chips' reliability and robustness and have made them popular research subjects for applications such as pagers, smart cards, mobile devices, and cell phones.

Clockless chips have long been a subject of research at facilities such as the California Institute of Technology's Asynchronous VLSI Group (www.async.caltech.edu/) and the University of Manchester's Amulet project (www.cs.man.ac.uk/apt/projects/processors/amulet/). Now, after a few small efforts and false starts in the 1990s, companies such as Fulcrum Microsystems, Handshake Solutions, Sun Microsystems, and Theseus Logic are again looking
to release commercial asynchronous chips, as the “A Wave of Clockless Chips” sidebar describes. However, clockless chips still generate concerns—such as a lack of development tools and expertise as well as difficulties interfacing with synchronous chip technology—that proponents must address before their commercial use can be widespread.
PROBLEMS WITH CLOCKS

Clocked processors have dominated the computer industry since the 1960s because chip developers saw them as more reliable, capable of higher performance, and easier to design, test, and run than their clockless counterparts. The clock establishes a timing constraint within which all chip elements must work, and constraints can make design easier by reducing the number of potential decisions.
Clocked chips

The chip's clock is an oscillating crystal that vibrates at a regular frequency, depending on the voltage applied. This frequency is measured in gigahertz or megahertz. All the chip's work is synchronized via the clock, which sends its signals out along all circuits and controls the registers, the data flow, and the order in which the processor performs the necessary tasks.

An advantage of synchronous chips is that the order in which signals arrive doesn't matter. Signals can arrive at different times, but the register waits until the next clock tick before capturing them. As long as they all arrive before the next tick, the system can process them in the proper order. Designers thus don't have to worry about related issues, such as wire lengths, when working on chips.

And it is easier to determine the maximum performance of a clocked system. With these systems, calculating performance simply involves counting the number of clock cycles needed to complete an operation. Calculating performance is less well defined with asynchronous designs. This is an important marketing consideration.
The downside

Clocks lead to several types of inefficiencies, including those shown in Figure 1, particularly as chips get larger and faster. Each tick must be long enough for signals to traverse even a chip's longest wires in one cycle. However, the tasks performed on parts of a chip that are close together finish well before a cycle ends but can't move on until the next tick.

As chips get bigger and more complex, it becomes more difficult for ticks to reach all elements, particularly as clocks get faster. To cope, designers are using increasingly complicated and expensive approaches, such as hierarchies of buses and circuits that adjust clock readings at various components. This approach could, for example, delay the start of a clock tick so that it occurs when circuits are ready to pass and receive data. Also, individual chip components can have their own clocks and communicate via buses, according to Ryan Jorgenson, Theseus's vice president of engineering. Clock ticks thus only have to cross individual components.

The clocks themselves consume
power and produce heat. In addition, in synchronous designs, registers use energy to switch so that they are ready to receive new data whenever the clock ticks, whether they have inputs to process or not. In asynchronous designs, gates switch only when they have inputs.
HOW CLOCKLESS CHIPS WORK

There are no purely asynchronous chips yet. Instead, today's clockless processors are actually clocked processors with asynchronous elements. Clockless elements use perfect clock gating, in which circuits operate only when they have work to do, not whenever a clock ticks. Instead of clock-based synchronization, local handshaking controls the passing of data between logic modules.

The asynchronous processor places the location of the stored data it wants to read onto the address bus and issues a request for the information. The memory reads the address off the bus, finds the information, and places it on the data bus. The memory then acknowledges that it has read the data. Finally, the processor grabs the information from the data bus.

Pipeline controls and FIFO sequencers move data and instructions around and keep them in the right order. According to Jorgenson, "Data arrives at any rate and leaves at any rate. When the arrival rate exceeds the departure rate, the circuit stalls the input until the output catches up."

The many handshakes themselves require more power than a clock's operations. However, clockless systems more than offset this because, unlike synchronous chips, each circuit uses power only when it performs work.
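The handshake-and-stall behavior is easy to model in software. In this toy sketch, threads stand in for circuit stages and a bounded FIFO stands in for the pipeline buffer; when arrivals outpace departures, put() blocks, which is exactly the "input stalls until the output catches up" behavior Jorgenson describes.

import queue
import threading
import time

fifo = queue.Queue(maxsize=2)    # a small buffer makes the stall visible

def producer():
    for item in range(6):
        fifo.put(item)           # blocks (stalls) while the FIFO is full
        print("sent", item)

def consumer():
    while True:
        item = fifo.get()        # take the data off the "bus"...
        time.sleep(0.1)          # slow stage: departure rate < arrival rate
        print("processed", item)
        fifo.task_done()         # ...and acknowledge it

threading.Thread(target=consumer, daemon=True).start()
producer()
fifo.join()                      # wait until every item is acknowledged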
CLOCKLESS ADVANTAGES

In synchronous designs, the data moves on every clock edge, causing voltage spikes. In clockless chips, data doesn't all move at the same time, which spreads out current flow, thereby minimizing the strength and frequency of spikes and emitting less EMI. Less EMI reduces both noise-related errors within circuits and interference with nearby devices.
A Wave of Clockless Chips

In the near future, Handshake Solutions and ARM, a chip-design firm, plan to release a commercial asynchronous ARM core for use in devices such as smart cards, consumer electronics, and automotive applications, according to Handshake chief technical officer Ad Peeters.

Sun Microsystems is building a supercomputer with at least 100,000 processors, some using asynchronous circuits, noted Sun Fellow Jim Mitchell. Sun's UltraSPARC IIIi processor for servers and workstations also features asynchronous circuits, said Sun Distinguished Engineer Jo Ebergen.

Fulcrum Microsystems offers an asynchronous PivotPoint high-performance switch chip for multigigabit networking and storage devices, according to Mike Zeile, the company's vice president of marketing. The company has also developed clockless cores for use with embedded systems, he noted.

"Theseus Logic developed a clockless version of Motorola's 8-bit microcontroller with lower power consumption and reduced noise," said vice president of engineering Ryan Jorgenson. Theseus designed the device for use in battery-powered or signal-processing applications. "Also, Theseus and [medical-equipment provider] Medtronic have worked on a [clockless] chip for defibrillators and pacemakers," Jorgenson said.
Figure 1. Clockless chips offer an advantage over their synchronous counterparts because they use cycle time efficiently. Synchronous processors must make sure they can complete each part of a computation in one clock tick. Thus, in addition to running their logic, the chips must add cycle time to compensate for how much longer it takes to run some operations than to run average operations (worst case – average case), variations in clock operations (jitter and skew), and manufacturing and environmental irregularities. A clockless chip's cycle time, by contrast, is essentially just its logic time. (Source: Fulcrum Microsystems)
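To make the caption's arithmetic concrete, here is a small worked example; the nanosecond figures are invented, and only the structure of the calculation comes from the figure.

average_logic = 0.70   # ns per typical operation (assumed value)
worst_case    = 1.00   # ns for the slowest operation (assumed value)
jitter_skew   = 0.15   # ns of clock jitter/skew margin (assumed value)
margin        = 0.10   # ns of manufacturing/environmental margin (assumed value)

clocked_cycle   = worst_case + jitter_skew + margin   # every tick pays all margins
clockless_cycle = average_logic                       # each operation takes its own time

print("clocked: %.2f ns/op, clockless: %.2f ns/op" % (clocked_cycle, clockless_cycle))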
Power efficiency, responsiveness, robustness

Because asynchronous chips have no clock and each circuit powers up only when used, asynchronous processors use less energy than synchronous chips by providing only the voltage necessary for a particular operation.
According to Jorgenson, clockless chips are particularly energy-efficient for running video, audio, and other streaming applications—data-intensive programs that frequently cause synchronous processors to use considerable power. Streaming data applications have frequent periods of dead time—such as when there is no sound or when video frames change very little from their immediate predecessors—and little need for running
error-correction logic. During this inactive time, asynchronous processors don't use much power.

Clockless processors activate only the circuits needed to handle data, thus leaving unused circuits ready to respond quickly to other demands.

Asynchronous chips run cooler and have fewer and lower voltage spikes. Therefore, they are less likely to experience temperature-related problems and are more robust.

Because they use handshaking, clockless chips give data time to arrive and stabilize before circuits pass it on. This contributes to reliability because it avoids the rushed data handling that central clocks sometimes necessitate, according to University of Manchester Professor Steve Furber, who runs the Amulet project.
Simple, efficient design

Companies can develop logic modules without regard to compatibility with a central clock frequency, which makes the design process easier, according to Furber.

Also, because asynchronous processors don't need specially designed modules that all work at the same clock frequency, they can use standard components. This enables simpler, faster design and assembly.
RECENT ADVANCES BOOST PERFORMANCE

Traditionally, asynchronous designs have had lackluster performance, even though their circuitry can handle data without waiting for clock ticks.

According to Fulcrum cofounder Andrew Lines, most clockless chips have used combinational logic, an early, uncomplicated form of logic based on simple state recognition. However, combinational logic uses the larger and slower p-type transistors. This has typically led to large feature sizes and slow performance, particularly for complex clockless chips.

However, the recent use of both domino logic and the delay-insensitive mode in asynchronous processors has
created a fast approach known as integrated pipelines mode.

Domino logic improves performance because a system can evaluate several lines of data in one cycle, as opposed to the typical approach of handling one line per cycle. Domino logic is also efficient because it acts only on data that has changed during processing, rather than acting on all data throughout the process.

The delay-insensitive mode allows an arbitrary time delay for logic blocks. "Registers communicate at their fastest common speed. If one block is slow, the blocks that it communicates with slow down," said Jorgenson. This gives a system time to handle and validate data before passing it along, thereby reducing errors.
CLOCKLESS CHALLENGES

Asynchronous chips face a couple of important challenges.
Integrating clockless and clocked solutions

In today's clockless chips, asynchronous and synchronous circuitry must interface. Unlike synchronous processors, asynchronous chips don't complete instructions at times set by a clock. This variability can cause problems interfacing with synchronous systems, particularly with their memory and bus systems.

Clocked components require that data bits be valid and arrive by each clock tick, whereas asynchronous components allow validation and arrival to occur at their own pace. This requires special circuits to align the asynchronous information with the synchronous system's clock, explained Mike Zeile, Fulcrum's vice president of marketing.

In some cases, asynchronous systems
can try to mesh with synchronous systems by working with a clock. However, because the two systems are so different, this approach can fail.
Lack of tools and expertise

Because most chips use synchronous technology, there is a shortage of expertise, as well as of coding and design tools, for clockless processors. According to Jorgenson, this forces clockless designers to either invent their own tools or adapt existing clocked tools, a potentially expensive and time-consuming process. Although manufacturers can use typical silicon-based fabrication to build asynchronous chips, the lack of design tools makes producing clockless processors more expensive, explained Intel Fellow Shekhar Borkar.

However, companies involved in asynchronous-processor design are beginning to release more tools. For example, to build clockless chips, Handshake uses its proprietary Haste programming language, as well as the Tangram compiler developed at Philips Research Laboratories. The University of Manchester has produced the Balsa Asynchronous Synthesis System, and Silistix Ltd. is commercializing clockless-design tools. "We have developed a complete suite of tools," said Professor Alain Martin, who heads Caltech's Asynchronous VLSI Group. "We are considering commercializing the tools through a startup (Situs Logic)."

There is also a shortage of asynchronous design expertise. Not only is there little opportunity for developers to gain experience with clockless chips, but colleges also offer few asynchronous design courses.
A HYBRID FUTURE

No company is likely to release a completely asynchronous chip in the near future. Thus, chip systems could feature clockless islands tied together by a main clock design that ticks only for data that passes between the sections. This adds the benefits of asynchronous design to synchronous chips.

On the other hand, University of Utah Professor Chris Myers contended, the industry will move gradually toward chip designs that are "globally asynchronous, locally synchronous." Synchronous islands would operate at different clock speeds, using handshaking to communicate through an asynchronous buffer or fabric. According to Myers, distributing a clock signal across an entire processor is becoming difficult, so clocking would be used only to distribute the signal across smaller chip sections that communicate asynchronously.
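A rough discrete-event sketch of the "globally asynchronous, locally synchronous" idea follows; the two clock periods and the buffer size are invented for illustration. Each island runs on its own tick, and data crosses between them only through an asynchronous buffer, so neither island ever sees the other's clock.

import heapq

buffer = []                              # asynchronous FIFO between the islands
events = [(1.0, "fast"), (1.7, "slow")]  # first tick of each island (assumed periods)
heapq.heapify(events)
count = 0

while events:
    t, island = heapq.heappop(events)
    if t > 8.0:                          # simulate 8 ns of activity
        break
    if island == "fast":                 # producer island, 1.0-ns period
        buffer.append("word%d" % count)
        count += 1
        heapq.heappush(events, (t + 1.0, "fast"))
    else:                                # consumer island, 1.7-ns period
        if buffer:
            print("t=%.1f ns: slow island received %s" % (t, buffer.pop(0)))
        heapq.heappush(events, (t + 1.7, "slow"))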
Experts say synchronous chips' performance will continue to improve. Therefore, said Fulcrum's Lines, there may not be much demand for asynchronous chips to enhance performance. Furber, on the other hand, contended there will be demand for clockless chips because of their many advantages.

"Most of the research problems are resolved," Myers said. "We're left with development work. [We require] more design examples that prove the need for asynchronous design."

Said Intel's Borkar, "I'm not shy about using asynchronous chips. I'm here to serve the engineering community. But someone please prove their benefit to me."

Added Will Strauss, principal analyst at Forward Concepts, a market research firm, "I've yet to see a commercially successful clockless logic chip shipping in volume. It requires thinking outside the box to find volume applications that benefit from the clockless approach at a reasonable cost." ■
David Geer is a freelance technology writer based in Ashtabula, Ohio. Contact him at
[email protected].
Editor: Lee Garber, Computer,
[email protected]
NEWS BRIEFS
Finding Ways to Read and Search Handwritten Documents

Technologies for reading and searching digitized documents have helped academic researchers. However, no one has developed a truly effective engine that can work with handwritten documents, a potentially valuable source of information for many purposes. R. Manmatha, a research assistant professor with the Center for Intelligent Information Retrieval at the University of Massachusetts, Amherst, hopes to change this.

Handwritten documents are generally scanned as images of entire pages, not as individual characters that optical-character-recognition technology can recognize when searching for responses to queries. Current handwriting recognition systems generally work well only with documents that contain specific types of information written in a consistent format, such as addresses. Thus, to read and search most handwritten documents, someone must type them up and create digitized versions, a costly and time-consuming process.

Manmatha's system scans handwritten documents as images. His research team first tried to match each written letter with a digital image of a letter.
A University of Massachusetts researcher has developed a technique for reading and searching handwritten documents. The system works with a statistical model that learns to associate images of words with actual words in a probabilistic manner. The system first segments a document to obtain images of individual words. It compares the images with images it has encountered in the past to find a probable match. The system then identifies and tags the new image as the word associated with the likely match. It keeps these new identifications in an index for future reference. (Source: University of Massachusetts, Amherst)
However, handwriting variations—such as letter height and slant—made consistent accuracy difficult. Instead, Manmatha developed a system that examines entire words, which provide more context than individual letters for identifying written material. Using a statistical model, he explained, the system learns to associate images of words with actual words in a probabilistic manner and then stores this information in a database. The system compares an image of a word with images it has encountered in the past to find a likely match. It then identifies the new image as the word associated with the likely match.

To develop their system, Manmatha and his students obtained about 1,000 pages of US President George Washington's correspondence that had been scanned from microfilm by the Library of Congress. Even after training, the system's accuracy in identifying words ranges from 54 to 84 percent. Manmatha said refinements such as better image processing could make his technology more accurate. And making the system faster, perhaps by developing more efficient algorithms, will be particularly important so that it can work with large collections of documents, he noted.

Chris Sherman, associate editor at SearchEngineWatch.com, noted that research on searching handwritten documents has been taking place since the 1980s. There seems to be limited demand for this technology, Sherman said. "I could see this being used for scholarly archives going back to eras when there weren't computers, but I don't see it as being a huge application." ■
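For readers who want the flavor of the word-matching step in code, here is a deliberately simplified sketch; the features, data, and threshold are invented, and the real system's probabilistic model is far richer than this nearest-neighbor stand-in.

import math

# "Previously seen" word images, reduced to feature vectors
# (say, width, height, ink density) and tagged with their words.
SEEN = [
    ((120.0, 18.0, 0.42), "congress"),
    ((64.0, 17.0, 0.38), "army"),
    ((88.0, 20.0, 0.45), "letter"),
]

def identify(features, threshold=10.0):
    """Tag a segmented word image with the word of its most likely match."""
    best_word, best_dist = None, float("inf")
    for vec, word in SEEN:
        dist = math.dist(features, vec)
        if dist < best_dist:
            best_word, best_dist = word, dist
    return best_word if best_dist <= threshold else None  # None: no likely match

print(identify((121.5, 18.4, 0.41)))   # -> congress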
A Gem of an Idea for Improving Chips

A US researcher is developing ways to use diamonds in chips to overcome some of silicon's limitations. Damon Jackson, a research physicist at the Lawrence Livermore National Laboratory, has used diamonds to house electronic circuitry. Jackson's research team lays a 10- to 50-micron layer of tungsten film on a one-third-carat diamond, adds circuitry, then grows a single-crystal layer of synthetic diamond on top of the tungsten so that the wires are completely embedded.

The research team uses diamonds because they offer advantages over silicon in strength and in their ability to resist high temperatures, improve cooling by conducting heat away from the circuitry, and withstand high pressure and radiation. This protection makes the system ideal for circuitry used in challenging environments such as space, Jackson said. Satellites, for example, experience considerable heat buildup, atmospheric pressure, and radiation.

However, there are significant obstacles to using diamonds in chips. First, diamonds are expensive, although widespread use in chips would eventually reduce the per-unit cost to some degree. Fabrication-related activities and research would also be costly, Jackson noted.

He is working with Yogesh Vohra, a physics professor at the University of Alabama at Birmingham who developed the chemical-vapor-deposition technique for growing industrial-quality diamonds by cooking methane, hydrogen, and oxygen gases in a very hot microwave oven. The advantages of this method, Vohra explained, are that the raw materials are inexpensive, the process scales well, it's easy to embed wiring, and the diamond's electrical properties can be changed via doping. And as more businesses use diamonds in manufacturing, their price will drop.
Researchers have found a way to house electronic circuitry on a diamond. Diamonds have advantages over silicon in strength and in the ability to resist heat and withstand high pressure and radiation. This makes “diamond chips” ideal for use in challenging environments such as space.
At some point, Vohra said, researchers may even develop diamond-based circuitry.

Pushkar Apte, vice president of technology programs at the Semiconductor Industry Association, a US trade association, expressed doubt about using diamonds in chips, saying that silicon is already a well-established technology. However, he added, "It may be used in some niche applications that demand thermal conductivity." ■
IBM Lets Open Source Developers Use 500 Patents

IBM has made 500 US software patents available royalty-free to open source developers. The patents cover 14 categories of technology, including e-commerce, storage, image processing, data handling, networking, and Internet communications.

"We wanted to give access to a broad range of patents," explained Marc Ehrlich, IBM's legal counsel for patent portfolio management. He said the patents represent "areas reflective of activity in the open source community," such as databases and processor cores.

IBM will continue to own the 500 patents but will allow fee-free use of their technologies in any software that meets the requirements of the Open Source Definition, managed and promoted by the nonprofit Open Source Initiative (www.opensource.org).

IBM, which has vigorously supported the open source operating system Linux, has expressed hope its action will establish a "patent
commons" on which open source software developers can base their code without worrying about legal problems.

Traditionally, IBM and other companies amass patents, charge anyone who wants to work with them, and take legal action against anyone who uses them without paying royalties. IBM has 40,000 patents worldwide, including 25,000 in the US. It has obtained more US patents than any other company during each of the past 12 years.

However, Ehrlich said, IBM has realized that letting open source developers use some of its patents without charge
will allow them to expand on the technologies in ways that the company might never do on its own. This could benefit IBM and others, he explained. IBM could create new products or services for a fee on top of open source applications that use its patented technologies, said Navi Radjou, vice president of enterprise applications at Forrester Research, a market analysis firm. And, he said, selling versions of software that open source developers have based on its patents eliminates the need for IBM to pay its own developers for the new work.
Biometrics Could Make Guns Safer

An innovative biometric system could keep children, thieves, and others from firing guns that don't belong to them. New Jersey Institute of Technology (NJIT) professor Timothy N. Chang and associate professor Michael L. Recce are developing the new approach, which is based on an authorized user's grip pattern.

In the past, researchers have worked with fingerprint scanners to recognize authorized shooters, and with systems in which users wear tokens that wirelessly transmit an unlocking code to a weapon. In the NJIT system, 16 tiny sensors in a gun's grip measure the amount and pattern of finger and palm pressure as the user tries to squeeze the trigger. "The system doesn't care how you pull the gun out of the holster or how you handle it when you are not actually shooting it," explained Donald H. Sebastian, NJIT's senior vice president for research and development.

Unlike the static biometrics found in fingerprint scanning, NJIT's system looks at a pattern of movement over time. Shooters create a unique pattern of pressure when squeezing a weapon while firing it. During the first tenth of a second of trigger pull, the system can determine whether the shooter is authorized to use the gun, according to Sebastian. If not, the system will not let the gun fire.

Sensors measure the voltage patterns the system's circuitry generates when a user tries to pull the trigger. The system then converts the analog signals to digital patterns for analysis by specially designed software. All authorized users of a gun initially train the system to recognize the patterns they create when using the weapon. This information is stored for comparison any time someone tries to use the gun.

Currently, a computer cord tethers the gun to a laptop that houses the biometric system's software. However, Chang said, the team plans to move the circuits from the laptop into the gun's grip. The system presently has a 90 percent recognition rate. Sebastian said this is not precise enough for a commercial system but "90 percent accuracy out of 16 sensors is amazing." The research team plans to use up to 150 sensors to improve precision and may add biometric palm recognition as a backup.

According to Sebastian, the technology may be ready for commercial release within three to five years. ■
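The sidebar's grip-pattern matching lends itself to a small sketch. The code below is purely illustrative (NJIT's actual algorithms are not public here): it enrolls a per-sensor, per-sample template from training grips and accepts a trigger pull only when the first-tenth-of-a-second pressure trace stays close to that template.

import math

SENSORS, SAMPLES = 16, 10   # 16 grip sensors, 10 samples in the first 0.1 s

def enroll(training_grips):
    """Average several training grips into one template per (sensor, sample)."""
    n = len(training_grips)
    return [[sum(grip[s][t] for grip in training_grips) / n
             for t in range(SAMPLES)] for s in range(SENSORS)]

def authorized(grip, template, threshold=5.0):
    """Allow firing only if the pressure pattern stays near the template."""
    dist = math.sqrt(sum((grip[s][t] - template[s][t]) ** 2
                         for s in range(SENSORS) for t in range(SAMPLES)))
    return dist <= threshold   # threshold is an assumed tuning value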
In the process, Radjou noted, IBM's patent release lends credibility to the open source movement and gives IT departments more confidence in using open source products.

Because groups of independent developers create open source software, proponents sometimes have trouble knowing whether products include patented technologies. This could expose open source proponents and their products to legal action by patent holders. And finding patented software in open source products could force programmers to write new products and customers to switch to the new versions. However, Ehrlich noted, sophisticated open source projects are starting to adopt practices to ensure that the code being used is free from patent-related problems.

He said IBM may give open source developers royalty-free access to more patents in the future and hopes other companies will do the same. Businesses such as Novell and Linux vendor Red Hat have already either offered their technologies to open source developers or taken steps to protect users of their open source software.

"Unsurprisingly, the open source community thinks this is a good thing," said Eric S. Raymond, the Open Source Initiative's founder and president emeritus. "But the deeper message here is that IBM is saying by its actions that the patent system is broken. The top patent holder in the US, the biggest beneficiary of the system, has concluded that the best way it can encourage innovation is to voluntarily relinquish its rights." According to Raymond, this should "give pause to those who believe strong intellectual-property laws and IP enforcement are vital to the health of the software industry." ■

News Briefs written by Linda Dailey Paulson, a freelance technology writer based in Ventura, California. Contact her at
[email protected]. Editor: Lee Garber, Computer;
[email protected]
COMPUTING PRACTICES
Integrating Biological Research through Web Services

A case study demonstrates that Web services could be key to coordinating and standardizing incompatible applications in bioinformatics, an effort that is becoming increasingly critical to meaningful biological research.
Hong Tina Gao
Jane Huffman Hayes
University of Kentucky

Henry Cai
Big Lots
No longer only a field of experimental science, biology now uses computer science and information technology extensively across its many research areas. This increased reliance on technology has motivated the creation of bioinformatics, a discipline that researches, develops, or applies computational tools and approaches for expanding the use of biological, medical, behavioral, or health data.1 Because tools and approaches cover how to acquire, store, organize, archive, analyze, and visualize data,1 bioinformatics is a promising way to help researchers handle diverse data and applications more efficiently.

Unfortunately, at present, bioinformatics applications are largely incompatible, which means that researchers cannot cooperate in using them to solve important biological problems. The "Integration Challenge" sidebar explains this problem in detail.

Web services might be a way to solve the integration problem because Web services technology provides a higher layer of abstraction that hides implementation details from applications. Using this technology, applications invoke other applications' functions through well-defined, easy-to-use interfaces. Each organization is free to concentrate on its own competence and still leverage the services that other research groups provide.
To test the potential of a Web services solution, we implemented a microarray data-mining system that uses Web services in drug discovery—a research process that attempts to identify new avenues for developing therapeutic drugs. Although our implementation focuses on a problem within the life sciences, we strongly believe that Web services could be a boon to any research field that requires analyzing volumes of data and conducting complex data mining.
WHY WEB SERVICES?

A Web service is a group of network-accessible operations that other systems can invoke through XML messages using the Simple Object Access Protocol (SOAP). The service can be a requester, provider, or registry. A service provider publishes its available services on a registry. A service requester looks through the registry to find the service it needs and consumes the service by binding with the corresponding service provider.

The services are independent of environment and implementation language. In biology research, these traits are advantageous because, as long as the interfaces remain unchanged, researchers need not modify the application or database or unify diverse schemas. Moreover, invoking a Web service can be as easy as checking an information directory and calling the right number. Given that data analysis is the most
time-consuming step in many bioinformatics applications, this simplicity makes it tolerable to incur even the overhead of transmitting XML tags for explaining the data structures.

Web services also transform biology's current ad hoc software-development architecture into a component-based structure. Unlike technologies such as the common object request broker architecture (Corba), using Web services makes it easier to glue components together by exploiting existing standards and implementing underlying communication protocols instead of using a specifically defined transportation protocol for each technology. Corba assumes that its users will be competent programming professionals. Web services are oriented toward the less technical IT communities. For biological researchers in highly specific subfields, the less technical solution is far better.

A group annotating a human genome segment, for example, must precisely locate genes on genomes and assign genes to their protein products. To invoke services that implement the needed algorithms, the researchers simply acquire the services' descriptions from the registry and generate SOAP requests to those services. They don't have to know how to implement the algorithms.

More important, because integration occurs at the client instead of on the server side, service providers and requesters have more flexibility and autonomy. Each service provider can incrementally add value to the overall community by building Web services that integrate existing services.
WEB SERVICES IN DRUG DISCOVERY

Our microarray data-mining system uses Web services to identify potential drug targets—molecules with problematic biological effects that cause diseases in animal models. The drug targets then serve as a basis for developing therapeutic human drugs.
With a better understanding of human genes, scientists can identify more drug targets and design more effective drugs, but traditional techniques— those based on one gene in one experiment— discover gene functions too slowly. Many highthroughput genomics technologies, such as microarrays and gene chips, could speed up genefunction analysis. Arranging gene products in a microarray lets researchers monitor the entire genome’s expression on a single glass slide2 and gain insight into the interactions among thousands of genes simultaneously.
Drug discovery scenario

Drug discovery using a microarray involves a chain of large-scale data-processing modules and databases. In our implementation, we wrapped each module in the data-analysis chain into a Web service and integrated them. We then built a portal to make this aggregated functionality available to users.

The Integration Challenge

Because of the Human Genome Project's great success, the current research trend in the life sciences is to understand the systemic functions of cells and organisms. Not only has the project increased data on gene and protein sequences, it has further diversified biology itself. Many study processes now involve multistep research, with each step answering a specific question. Researchers in vastly different organizations design and develop computing algorithms, software applications, and data stores—often with no thought as to how other researchers are doing the same tasks. Consequently, one interdisciplinary research question might require interactions with many incompatible databases and applications.

The study of E. coli enzymes is a good example. Researchers must visit EcoCyc, Swiss-Prot, Eco2DBase, and PDB to obtain information about the enzymes' catalytic activities, amino acid sequences, expression levels, and three-dimensional structures.1 This labor-intensive process can be even more tedious if the research requires studying thousands of genes. An integrated process that follows a certain research pathway is thus critical, and its successful evolution depends heavily on the compatibility of the applications involved.

The current incompatibility level of bioinformatics applications makes integration of data sources and programs a daunting hurdle. From cutting-edge genomic sequencing programs to high-throughput experimental data management and analysis platforms, computing is pervasive, yet individual groups do little to coordinate with one another. Instead, they develop programs and databases to meet their own needs, often with different languages and platforms and with tailored data formats that do not comply with other specifications. Moreover, because biology research lacks a well-established resource registry, no one can share information efficiently. Users from diverse backgrounds repeatedly generate scripts for merging the boundaries between upstream and downstream applications, wasting considerable time and effort.

The integration challenge is not just for those in the life sciences. Any discipline that deals with massive amounts of data and computing loads and geographically distributed people and resources faces the same problem: economics, Earth sciences, astronomy, mechanical engineering, and aerospace, for example. Solving the integration problem in the life sciences will provide vital benefits to these fields as well.

References
1. P.D. Karp, "Database Links Are a Foundation for Interoperability," Tibtech, vol. 14, 1996, pp. 273-279.
[Figure 1 diagram: microarray experimental data flows into a user application, which consults a biology registry and invokes three service providers in turn: (a) a gene-expression pattern-analysis service, (b) a gene sequence search and retrieval service, and (c) a sequence alignment service. The results pass to further experiments to find binding proteins as potential drug targets.]
Figure 1. Microarray data-analysis scenario for identifying targets in drug discovery. Components a, b, and c are three service providers that provide Web services for the data analysis related to drug discovery. The numbered lines are the steps in the analysis path. A researcher passes the data collected from a microarray experiment to a user application (1), which queries a biology service registry for the locations of service providers (2 and 3). The user application invokes the Web services provided by the three service providers (4 to 9). The user application transmits the result of an upstream service as the input of the next downstream service. Finally, the researcher passes the result of the last queried Web service to other drug discovery experiments (10).
Figure 1 shows the three Web services in the scenario and the data-analysis path to discover drug targets using these services. The path begins with the user finding the URLs of the necessary Web services from a biology service registry. She then queries those remote services to find similar fragments from the gene sequences that have similar expression patterns in the microarray experiments. Finally, she uses the fragments in additional experiments to identify drug targets.
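As a sketch of what this analysis path looks like from the client side, the following uses the zeep SOAP library for Python; every WSDL URL and operation name is hypothetical, since the paper's services were built with IBM's toolkit rather than these endpoints.

from zeep import Client

# Steps 2-3: in the full system these locations come from the biology registry.
patterns = Client("http://example.org/GenesAtWork?wsdl")
sequences = Client("http://example.org/Entrez?wsdl")
alignment = Client("http://example.org/SearchLauncher?wsdl")

def analyze(microarray_file, phenotype_file):
    # Steps 4-5: find co-expressed gene patterns in the microarray data.
    pattern_set = patterns.service.FindPatterns(microarray_file, phenotype_file)
    # Steps 6-7: retrieve the sequences of the genes in those patterns.
    seqs = sequences.service.FetchSequences(pattern_set)
    # Steps 8-9: cluster/align the sequences to expose shared fragments.
    return alignment.service.AlignSequences(seqs)

The point of the chain is visible in the data flow: each upstream service's result becomes the next service's input, exactly as in Figure 1.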
Scenario implementation

We decided to implement scenario steps in three applications that use different, largely incompatible algorithms or databases to accomplish their tasks. We reviewed only applications we felt we could easily translate into Web services. The candidate applications had to have

• good encapsulation of implementation details,
• clear interface definitions, and
• simple input and output data structures.

Table 1 lists the three applications we selected: IBM's Genes@Work,3 the National Center for Biotechnology Information's Entrez Databases,4 and the
Baylor College of Medicine's Search Launcher.5 We then built a Web service for each scenario step using a mix of green-field and bottom-up strategies.6 The green-field strategy is a from-scratch implementation of both the Web service's description and its functionality. The bottom-up approach is similar except that the functionality it exposes as a Web service already exists.

Table 1. Selected applications for the drug discovery scenario.

Application: Genes@Work (scenario component a)
Description: A package that automatically analyzes gene expression patterns from the data that microarray technologies obtain.

Application: Entrez Databases (scenario component b)
Description: A search and retrieval system that stores nucleotide sequences, protein sequences, and other sequences.

Application: Search Launcher (scenario component c)
Description: A project that aids in clustering gene and protein sequences.

Next we rewrote the interfaces for each application. For Genes@Work, the interface takes as its input the gene-expression data set file and the corresponding phenotype file for each microarray experiment and returns an expression pattern file. We adopted SOAP with attachment technology to transfer the files.

Finally, we wrote the service interface and implementation descriptions. Many tools are available to help generate these definitions, but we used the Java2 Web Services Description Language (WSDL) tool in IBM's Web services toolkit.6 We published the service interface and implementation in a local registry suitable for testing and for restricting user access to services. In some cases, the service provider might want to make the services available to the entire community. If so, a public registry, such as universal description discovery and integration (UDDI) or a special biological registry, would be more appropriate.

We also built a client platform to consume the services. Users can invoke the three services independently as network-accessible stand-alone applications or as a group, to perform the tasks in our scenario. This system provides more flexibility for researchers to use the functionality in the three applications we chose, while data integration through Web services streamlines the entire analysis process.
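The bottom-up wrapping described above can be pictured as one small, clearly typed entry point placed in front of the legacy engine, which a WSDL generator can then expose. Everything below is a hypothetical stand-in, not the authors' actual interface.

from dataclasses import dataclass

@dataclass
class PatternRequest:
    expression_data: bytes   # gene-expression data set file (sent as an attachment)
    phenotype_data: bytes    # the corresponding phenotype file

@dataclass
class PatternResponse:
    pattern_file: bytes      # the expression-pattern file returned to the caller

def find_patterns(request: PatternRequest) -> PatternResponse:
    """Single entry point wrapping the pre-existing analysis engine."""
    result = _legacy_analysis(request.expression_data, request.phenotype_data)
    return PatternResponse(pattern_file=result)

def _legacy_analysis(expression_data, phenotype_data):
    # Stand-in for the existing computation being wrapped bottom-up.
    return b"expression patterns..."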
Results

With the Web services, service registry, and Web portal we built, we were able to smoothly pass the experimental data from microarray experiments to individual service providers and perform the analysis in Figure 1. Although we were the only users who performed pilot tests with this system, we believe anyone doing drug discovery research could easily use this system with very little computer training. Users must understand only generic operations, such as loading data files, entering the index of the gene expression patterns of interest, selecting the genes of the sequences to be retrieved, and so on.

They need not worry about writing their own patches of scripts to transform data among incompatible programs from various versions. Our system handles the many time-consuming and tedious transformations between data formats. The only time left is the time it takes for each service provider's analysis program to execute and the delays from network traffic.

Using the traditional approach, it could take a user one hour to set up the Genes@Work standalone application and a few more hours to manually transform results to legible input for gene identification. This doesn't include the mechanics of cutting and pasting, which can take 5 to 10 minutes per operation, depending on how many patterns a user must query. With our system, it typically takes approximately 10 minutes from uploading the microarray data to finally clustering the gene sequences of the expression patterns of interest. Considering that a microarray experiment usually includes a few hundred to thousands of genes, our system saves significant time, most of which is otherwise spent on tedious tasks.
Lessons learned
In conducting this project, we discovered two keystones to the successful and widespread use of Web services in biological research.
Well-defined interfaces. To support a services-oriented architecture, each software component must have a well-defined function and interface. If functions for different components are orthogonal, software coupling will be minimal, which will make it more convenient to transform these components into Web services that many kinds of researchers find acceptable. One way to achieve clean functions and a decoupled software architecture is to use an object-oriented design with systematic analysis in conjunction with design patterns. The bioinformatics software we worked with, for example, required much refactoring to separate the calculation logic from its Java Swing interface. Had its implementers followed the model-view-controller design pattern instead of coupling the presentation logic and the business logic, we might have been able to extract a clear interface much more easily. Then the remaining work would have been simply to wrap the interface with the Web service.
Standardization. Only by using a standard and widely agreed-on vocabulary can a given service requester and provider understand each other. If the biological research community is to realize the full benefit of Web services, it will have to make more progress in standardizing data formats and ontologies. Many researchers have already taken steps toward accomplishing this, such as defining the Minimum Information about a Microarray Experiment (MIAME) standard for microarray data7 and developing ontologies that can apply to all life sciences and accommodate the growth and change in knowledge about gene and protein roles in cells.8 Standardization can also aid in creating corresponding data serializers and deserializers more systematically. Although this could take time, researchers need not wait until the community has defined every standard in detail. With Web services, they can transmit highly complicated data as attachments to SOAP messages, which can save the bandwidth taken by sending XML tags.
In addition to working on vocabularies and data formats, standardization must formalize service descriptions so that registries can assign all Web services that address the same problems to the same category. A service requester can then easily identify all the available services that can solve a problem and can choose to invoke different services that provide the same interface without modifying the client-side programs.
Registry standardization is also critical. The data objects and service descriptions in a registry can give software developers clues about how others have defined services. The problem for biology researchers is that, although registries such as UDDI store many services, most are unrelated to biology research. To avoid wasting time sifting through irrelevant services, biologists need registries built specifically for biology and its subfields. These registries should have a hierarchical structure, with the top-level registry mirroring the registries of other scientific fields.
Finally, help from widely coordinated organizations can be invaluable. The Web Services Interoperability Organization (www.ws-i.org), for example, provides guidance, best practices, and resources for developing Web services solutions across standards organizations. Its first release of WS-I Basic Profile, a set of nonproprietary Web services specifications, represents a milestone for Web services interoperability.
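Returning to the well-defined interfaces lesson above, the sketch below illustrates, in Python rather than the Java Swing code we actually refactored, what separating calculation logic from presentation looks like. All class and function names here are invented for illustration.

```python
# Sketch of the "well-defined interfaces" lesson: keep the calculation
# logic free of presentation concerns so it can be wrapped as a service.

class PatternAnalyzer:
    """Pure calculation logic: no UI imports, simple in/out data types.
    This is the part that is cheap to expose as a Web service."""

    def analyze(self, expression_data: list, phenotypes: list) -> list:
        # Placeholder computation standing in for the real algorithm.
        return [{"pattern": i, "support": sum(row) / len(row)}
                for i, row in enumerate(expression_data)]


def service_endpoint(expression_data, phenotypes):
    """Thin wrapper suitable for exposure via SOAP/WSDL: it only
    translates between wire formats and the analyzer's interface."""
    return PatternAnalyzer().analyze(expression_data, phenotypes)

# A GUI, by contrast, would *call* PatternAnalyzer rather than embed the
# algorithm in event handlers -- the coupling we had to refactor away.
if __name__ == "__main__":
    print(service_endpoint([[1.0, 2.0], [3.0, 5.0]], ["wild", "mutant"]))
```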
WORK IN PROGRESS
Evolving standardization will not be trivial, but the adoption of Web services technology is a solid first step because such a move can have a snowball effect: The more people are willing to provide their resources in Web services format, the more attractive this strategy becomes for others—and the more favorably users and providers will view standardization in general.
Some health agencies have already taken this step. The National Cancer Institute, for example, provides a group of legacy Web services for direct access to information and has a list of applications already wrapped into Web services (http://cabio.nci.nih.gov/soap/services/index.html). In 2003, for the DNA Data Bank of Japan (DDBJ), Hideaki Sugawara and colleagues defined DDBJ-XML and developed a DDBJ-SOAP server. Their work is Japan's earliest published effort using Web services in the life sciences.9
Some organizations that have already invested in an integration technology can use Web services as an enhancement. The EMBL Nucleotide Sequence Database provides an extended version of EMBL (http://www.ebi.ac.uk/xembl/) that can run as a Web service using SOAP and WSDL. XEMBL users can keep their original CORBA framework.

As our case study shows, Web services have great potential for solving the data- and application-integration problems in biology, particularly for time-consuming data analysis. The wider application of this technology depends greatly on the willingness of the biological research community to pursue standardization, including building ontologies, developing biology-specific registries, and defining the service interfaces for well-known functions. The community will also need to develop more frequently used services and address the concerns of security and quality of service. Fortunately, there is a huge volume of existing applications and modules on which to base these efforts, and each successful Web service implementation will expand that foundation. Clearly much work lies ahead, but the efficiency payoff should be well worth the effort. In the interim, researchers who spend even a short time becoming familiar with the service descriptions will benefit. This familiarity will expedite the spread of the technology, increase the number of services provided, and eventually raise the quality and quantity of available Web services. ■
References
1. M. Huerta et al., "NIH Working Definition of Bioinformatics and Computational Biology," The Biomedical Information Science and Technology Initiative Consortium (BISTIC) Definition Committee of the National Institutes of Health (NIH), 17 July 2000; www.bisti.nih.gov/CompuBioDef.pdf.
2. M. Schena et al., "Quantitative Monitoring of Gene Expression Patterns with a Complementary DNA Microarray," Science, vol. 270, no. 5232, 1995, pp. 467-470.
3. A. Califano, G. Stolovitzky, and Y. Tu, "Analysis of Gene Expression Microarrays for Phenotype Classification," Proc. Int'l Conf. Intelligent Systems for Molecular Biology, vol. 8, AAAI Press, 2000, pp. 75-85.
4. D.L. Wheeler et al., "Database Resources of the National Center for Biotechnology," Nucleic Acids Research, vol. 31, no. 1, 2003, pp. 28-33.
5. R.F. Smith et al., "BCM Search Launcher—An Integrated Interface to Molecular Biology Database Search and Analysis Services Available on the World Wide Web," Genome Research, May 1996, pp. 454-462.
6. J. Snell, "Implementing Web Services with the WSTK v3.3: Part 1," IBM DeveloperWorks, Dec. 2002, pp. 5-6.
7. A. Brazma et al., "Minimum Information about a Microarray Experiment (MIAME)—Toward Standards for Microarray Data," Nature Genetics, vol. 29, 2001, pp. 365-371.
8. M. Ashburner et al., "Gene Ontology: Tool for the Unification of Biology," Nature Genetics, vol. 25, 2000, pp. 25-29.
9. H. Sugawara and S. Miyazaki, "Biological SOAP Servers and Web Services Provided by the Public Sequence Data Bank," Nucleic Acids Research, vol. 31, 2003, pp. 3836-3839.
Hong Tina Gao is a software engineer at Lexmark. Her research interests include bioinformatics, software maintenance, testing, software architecture, and Web engineering. She received an MS in computer science from the University of Kentucky and an MS in molecular biology from Shanghai Jiao Tong University in China. Contact her at [email protected].

Jane Huffman Hayes is an assistant professor of computer science at the University of Kentucky. Her research interests include software verification and validation, requirements engineering, and software maintenance. Huffman Hayes received a PhD in information technology from George Mason University. Contact her at [email protected].

Henry Cai is a senior application analyst at Big Lots. His research interests include software engineering, supply chain management, and e-commerce. Cai received an MS in computer science from the University of Kentucky. Contact him at [email protected].
COVER FEATURE
Socially Aware Computation and Communication

By building machines that understand social signaling and social context, technologists can dramatically improve collective decision making and help keep remote users in the loop.
Alex (Sandy) Pentland, Massachusetts Institute of Technology
Wouldn't it be wonderful if people could work together more smoothly and productively? Imagine a world in which it is normal to openly speak your concerns and to have a fair and honest group discussion, in which people are enthusiastic about carrying through group decisions in a transparent and comprehensive way. Given the variety and frequency of jokes about bad meetings, and indeed about failed communication in general, such meetings and enthusiasm seem destined to remain wishful thinking.
Although developers of communication-support tools have certainly tried to create products that support group thinking, they usually do so without adequately accounting for social context, so that all too often these systems are jarring and even downright rude. In fact, most people would agree that today's communication technology seems to be at war with human society. Pagers buzz, cell phones interrupt, and e-mail begs for attention until we have to pause and wonder if we are being assimilated into some sort of unhappy Borg Collective.
Technologists have responded with interfaces that wink at us and call us by name, filters that attempt to shield us from the digital onslaught, and smart devices that organize our lives by gossiping behind our backs. The result usually feels as if the intent is to keep us isolated, wandering like a clueless extra in a computer-controlled game.
These solutions, while well-meaning, ultimately fail because they ignore the core problem: Computers are socially ignorant. Researchers seem to
have forgotten that people are social animals and that their roles in human organizations define the quality of their lives. Technology must account for this by recognizing that communication is always socially situated and that discussions are not just words but part of a larger social dialogue. This web of social interaction forms a sort of collective intelligence; it is the unspoken shared understanding that enforces the dominance hierarchy and passes judgment about whether your proposal fits with "the way things are done around here." Successful human communicators acknowledge this collective intelligence and work with it; digital communications must begin to do the same by building tools that can accurately quantify social context and teach computers about successful social behavior.
At MIT, our research group is taking first steps toward quantifying social context in human communication. We have developed three socially aware platforms that objectively measure several aspects of social context, including nonlinguistic social signals measured by analyzing the person's tone of voice, facial movement, or gesture.1 We have found nonlinguistic social signals to be particularly powerful for analyzing and predicting human behavior, sometimes exceeding even expert human capabilities.
These tools measure social context, which lets the communications system support social and organizational roles instead of viewing the individual as an isolated entity. Sample applications include automatically patching people into socially important conversations, instigating conversations among people to build a more solid social network, and reinforcing family ties.
SOCIAL SIGNALS
Psychologists have firmly established that social signals are a powerful determinant of human behavior and speculate that they might have evolved as a way to establish hierarchy and group cohesion.2,3 Most culture-specific social communications are conscious, but other social signals function as a subconscious collective discussion about relationships, resources, risks, and rewards. In essence, they become a subconscious “social mind” that interacts with the conscious individual mind. In many situations the nonlinguistic signals that serve as the basis for this collective social discussion are just as important as conscious content in determining human behavior.2-5
A mental partnership
Imagine a tribe on the African veldt. Each day the adults gather and hunt, and in the evening they return to sit around a central clearing where they recount the day's events and observations and discuss what to do tomorrow. During the discussion, social signals, such as body posture and tone of voice, reflect the power hierarchy as well as individual desires. Each bit of new information comes with some collective social signaling that clearly communicates to each individual what the group thinks about that news or idea. By the discussion's end, the group has made many collective decisions, and the iron hand of social pressure will enforce the required individual behaviors.
Dominance displays have since given way to office politics, but the mechanism and result haven't changed much. The collective mind still uses social signals to guide individual behavior.
What are they?
Body language, facial expression, and tone of voice are some of the nonlinguistic signals that underpin this mental partnership. You might see someone taking charge of a conversation, for example, or hear a person setting the conversational tone—skills often associated with higher social status or leadership. Others seem more adept at establishing a friendly interaction, which indicates skill at social connection, a trait many successful salespeople exhibit.5
Prosodic style—also called tone of voice, roughly the way people vary pitch and volume in speaking—is perhaps the most powerful channel for these nonlinguistic social signals because it is the least subject to conscious control.3
Social psychologists have found social signals to be extremely powerful in predicting human behavior across a wide range of school, business, government, and family situations. With only a few minutes of observation, an expert psychologist can regularly predict behavioral outcome with about 70 percent accuracy.3 Amazingly, observing such thin slices of behavior can accurately predict important life events—divorce, student performance, and criminal conviction—even though these events might not occur until months, or sometimes years, later.
PREDICTING SOCIAL OUTCOMES
Following the social psychologists' example, a test for our ability to automatically measure social signals should also be a test of our ability to predict outcomes from observing a "thin slice" of human interactions. Could we predict human behavior without listening to words or knowing about the people involved?
Our research group has built a computer system that objectively measures a set of nonlinguistic social signals, such as engagement, mirroring, activity level, and stress, by looking at tone of voice over one-minute periods.1 Unlike most other researchers, our goal was to measure signals of speaker attitude rather than trying to puzzle out the speaker's instantaneous internal state. Consequently, we treated prosody and gesture as a longer-term motion texture rather than focusing on individual motions or accents. Although people are largely unconscious of this type of behavior, other researchers2,3,6,7 have shown that similar measurements are predictive of infant language development, empathy judgments, attitude, and even personality development in children.
Using our social perception machine, we could listen in to the social signals within conversations, while ignoring the words themselves. We found that after a few minutes of listening, we could predict

• who would exchange business cards at a meeting;
• which couples would exchange phone numbers at a bar;
• who would come out ahead in a negotiation;
• who was a connector within a workgroup; and
• a range of subjective judgments, including whether or not a person felt a negotiation was honest and fair or a conversation was interesting.

After excluding cases in which we didn't have enough signal to make a decision, our prediction accuracy averaged almost 90 percent. The "Measuring Prediction Accuracy" sidebar tells how we calculated accuracy. Achieving this level of accuracy is pretty amazing, especially given that experiments using human judges have typically shown considerably less accuracy. Moreover, the decisions we examined are among the most important in life: finding a mate, getting a job, negotiating a salary, and finding a place in a social network. These are activities for which humans prepare intellectually and strategically for decades.
What is surprising is that the largely subconscious social signaling that occurs at the start of the interaction appears to be more predictive than either the contextual facts (attractiveness and experience) or the linguistic structure (strategy chosen, arguments employed, and so on).

Measuring Prediction Accuracy
We calculated a linear predictor of outcome by a cross-validated linear regression between the four audio social signaling features (described elsewhere) and behavioral outcome. We then compared this predictor to the actual behavioral outcome. The histogram in Figure A shows a typical case, in which the data is "Would you like to work with this person or not?" In a typical case with a three-class linear decision (yes, not enough information, no) the yes/no accuracy is almost 90 percent. Accuracy is typically around 80 percent with a two-class linear decision rule, where we make a decision for every case.
More generally, linear predictors based on the measured social signals typically have a correlation of 0.65, ranging from around 0.40 to as much as 0.90. Most experiments involved around 90 participants, typically 25 to 35 years old, with one-third being female. Recent papers and technical notes about these experiments are available at http://hd.media.mit.edu.

Figure A. Histogram for the data on "Would you like to work with this person or not?" The blue bars are "no" answers, the red bars are "yes." Greater predictor values mean that a "yes" is more likely. Placing the yes/no boundary at 1.4 yields a 72 percent decision accuracy.
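To make the sidebar's procedure concrete, here is a minimal leave-one-out linear-regression sketch in Python over synthetic stand-in data. The four features, the 1.4 boundary, and the sample size echo the sidebar and Figure A; the data itself is generated for illustration only.

```python
# Leave-one-out linear prediction over four social-signaling features,
# in the spirit of the sidebar (synthetic data, not the study's data).
import numpy as np

rng = np.random.default_rng(0)
n = 90                                   # roughly the study's sample size
X = rng.normal(size=(n, 4))              # activity, engagement, stress, mirroring
signal = X @ np.array([0.5, 0.8, -0.3, 0.6]) + rng.normal(0.0, 0.5, n)
y = np.where(signal > 0, 2.0, 1.0)       # 2.0 = "yes", 1.0 = "no", as in Figure A

correct = 0
for i in range(n):                       # leave-one-out cross-validation
    mask = np.arange(n) != i
    A = np.c_[X[mask], np.ones(n - 1)]   # features plus an intercept column
    coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
    pred = np.r_[X[i], 1.0] @ coef
    correct += int((pred > 1.4) == (y[i] > 1.4))   # two-class rule at 1.4

print(f"two-class accuracy: {correct / n:.2f}")
```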
QUANTIFYING SOCIAL SIGNALS
The machine understanding community has studied human communication on many scales—phonemes, words, phrases, and dialogs, for example—and researchers have analyzed both semantic and prosodic structures. However, the sort of longer-term, multiutterance structure associated with signaling social attitude (interested, attracted, confrontational, friendly, and so on) has received little attention.
To quantify these social signals, we began by reading the voice analysis and social science literature and eventually developed texture-like measures for four types of social signaling: activity level, engagement, stress, and mirroring.1 By using these measurements to tap into the social signaling in face-to-face discussions, we could identify learned statistical regularities to anticipate outcomes. In addition to vocal measures of social signaling, facial and hand gesture equivalents to the audio features are being developed, and experiments using these visual features are under way.
Activity level
Activity level—the simplest measure—is how much you participate in the conversation. For the activity-level measure, we use a two-level hidden Markov model (HMM) to segment the speech stream of each person into voiced and nonvoiced segments and then group the voiced segments into
speaking and nonspeaking. We then measure conversational activity level by the percentage of speaking time.
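A minimal sketch of this measure follows, assuming the third-party hmmlearn library and a simple frame-energy feature. It collapses the two-level model into a single two-state HMM for brevity, so it is an approximation of the approach described above, not the actual system.

```python
# Activity level: segment audio frames into speaking/non-speaking with a
# two-state Gaussian HMM, then report the fraction of speaking time.
# Assumes "hmmlearn" (pip install hmmlearn) and a 1-D frame-energy feature.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def activity_level(frame_energy: np.ndarray) -> float:
    """frame_energy: shape (n_frames,), log-energy per audio frame."""
    X = frame_energy.reshape(-1, 1)
    model = GaussianHMM(n_components=2, n_iter=50, random_state=0).fit(X)
    states = model.predict(X)
    # Treat the state with the higher mean energy as the "speaking" state.
    speaking = int(np.argmax(model.means_.ravel()))
    return float(np.mean(states == speaking))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    energy = np.r_[rng.normal(-8, 1, 300), rng.normal(-2, 1, 200)]  # toy signal
    print(f"activity level: {activity_level(energy):.0%}")
```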
Engagement
In broad terms, engagement is how involved a person is in the current interaction. Is he driving the conversation? Is she setting the tone? We measure engagement by the influence each person's pattern of speaking versus not speaking has on the other person's pattern. Essentially, it is the measure of who drives the pattern of conversational turn taking. When two people are interacting, their individual turn-taking dynamics influence each other, which we can model as a Markov process.6
Figure 1. Badge-like platform. Built on the Laibowitz and Paradiso Uberbadge, this system allows social context sensing by infrared, audio, and motion so that wearers can automatically bookmark interesting people and demonstrations, and it displays messages designed to build social networks.
By quantifying the influence each participant has on the other, we obtain a measure of their engagement. To measure these influences, we use an HMM to model their individual turn taking and measure the coupling of these two dynamic systems to estimate the influence each has on the other’s turn-taking dynamics.8 Our method is similar to the classic work of Joseph Jaffe and colleagues,6 who found that engagement between infant-mother dyads is predictive of language development. Our formulation generalizes those parameters so that we can calculate the direction of influence and analyze conversations involving many participants.
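As a rough stand-in for this influence estimate, the sketch below measures how one participant's speaking state shifts the other's turn-taking probabilities. It uses simple conditional probabilities rather than the coupled-HMM estimation the text describes, so it is illustrative only.

```python
# Engagement as turn-taking influence: how much does B's speaking state
# at time t-1 shift A's probability of speaking at time t?  A simplified
# stand-in for the coupled-HMM influence model cited above.
import numpy as np

def influence(a: np.ndarray, b: np.ndarray) -> float:
    """a, b: binary arrays (1 = speaking) sampled on a common clock.
    Returns |P(a_t=1 | b_{t-1}=1) - P(a_t=1 | b_{t-1}=0)|."""
    a_t, b_prev = a[1:], b[:-1]
    p_given_b1 = a_t[b_prev == 1].mean() if (b_prev == 1).any() else 0.0
    p_given_b0 = a_t[b_prev == 0].mean() if (b_prev == 0).any() else 0.0
    return abs(p_given_b1 - p_given_b0)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    b = rng.integers(0, 2, 500)
    a = np.r_[0, 1 - b[:-1]]        # toy follower: A yields whenever B speaks
    print(f"influence of B on A: {influence(a, b):.2f}")
```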
Stress
Figure 2. The GroupMedia system. The system, built around a Sharp Zaurus PDA, measures attraction signaling in dating and other social events. The system can also provide feedback to users and patch remote users in to interesting or socially important conversations. The interface in the image, which is still in the experimental stage, is a split screen of messages and biosignals from another user.
Stress is the variation in prosodic emphasis. For each voiced segment we extract the mean pitch (frequency of the fundamental formant) and the spectral entropy. Averaging over longer periods provides estimates of the mean-scaled standard deviation of the formant frequency and spectral entropy (roughly, variation in the base frequency and frequency spread). The sum of these standard deviations becomes a measure of speaker stress; such stress can be either purposeful (prosodic emphasis) or unintentional (caused by discomfort). Other research has used similar measures of vocal stress to detect deception and to predict the development of personality traits such as extroversion in very young children.
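A sketch of the stress computation as just described, assuming pitch and spectral-entropy means have already been extracted per voiced segment by an upstream feature extractor:

```python
# Stress measure: sum of mean-scaled standard deviations of pitch and
# spectral entropy across voiced segments.  Feature extraction itself
# (pitch tracking, entropy estimation) is assumed to happen upstream.
import numpy as np

def stress(pitch_per_segment: np.ndarray,
           entropy_per_segment: np.ndarray) -> float:
    """Each input holds one mean value per voiced segment."""
    def mean_scaled_std(x: np.ndarray) -> float:
        return float(np.std(x) / np.mean(x))
    return mean_scaled_std(pitch_per_segment) + mean_scaled_std(entropy_per_segment)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    calm    = stress(rng.normal(120, 5, 40), rng.normal(4.0, 0.1, 40))
    excited = stress(rng.normal(140, 25, 40), rng.normal(4.0, 0.5, 40))
    print(f"calm: {calm:.3f}  excited: {excited:.3f}")
```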
Mirroring
Mirroring occurs when one participant subconsciously copies another participant's prosody and gesture. Considered a signal of empathy, mirroring can positively influence the outcome of a negotiation and other interpersonal interactions.7 In our experiments, the distribution of utterance length is often bimodal. Sentences and sentence fragments typically occur at several-second and longer time scales. At time scales less than one second, the utterances include both short interjections ("Uh-huh.") and back-and-forth exchanges typically consisting of single words ("OK?" "OK!" "Done?" "Yup."). The frequency of these short exchanges is our measure of mirroring behavior.
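The corresponding measurement is straightforward. The sketch below counts sub-second utterances; the per-minute normalization is an assumption for illustration.

```python
# Mirroring: frequency of short utterances (under one second), which the
# text uses as a proxy for back-and-forth exchanges like "OK?" "OK!".
import numpy as np

def mirroring_rate(utterance_lengths_sec: np.ndarray,
                   total_time_sec: float) -> float:
    """Short-utterance count per minute of conversation."""
    short = np.sum(utterance_lengths_sec < 1.0)
    return 60.0 * short / total_time_sec

if __name__ == "__main__":
    lengths = np.array([0.4, 3.2, 0.6, 5.1, 0.3, 0.5, 2.8])  # toy data
    print(f"{mirroring_rate(lengths, 120.0):.1f} short exchanges per minute")
```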
INSIDE A SOCIALLY AWARE SYSTEM

Figure 3. The Serendipity system. Built on the Nokia 6600 phone, the system senses the proximity of other people and compares their interests to make socially appropriate introductions.

We have incorporated these social signaling measurements into the development of three socially aware communications systems. Figures 1 through 3 show these systems in use. The Laibowitz and Paradiso Uberbadge is a badge-like platform,9 GroupMedia10 is based on the Sharp Zaurus PDA, and Serendipity11 is based on the Nokia 6600 mobile telephone.
In each system, the basic element of social context is the identity of people in the user's immediate presence. The systems use several methods to determine identity, including Bluetooth-based proximity detection, infrared (IR) or radio-frequency (RF) tags, and vocal analysis. To this basic context, it is possible to add audio feature analysis, sensors for head and body movement, and even biosignals, such as galvanic skin response (GSR). These sensing capabilities provide a quantitative measure of social context for the user's immediate, face-to-face situation. The result is a lightweight, unobtrusive, wearable system that can identify face-to-face interactions, capture collective social information, extract meaningful group descriptors, and transmit the group context to remote group members.
When the system detects a face-to-face interaction, defined as the combination of proximity and conversational turn taking, it specifies a group context that consists of the participants' identities, the four social signals, and the compressed audio (and possibly video) information stream. The system then creates a social gateway that contains the group context information and lets preapproved members of the social or work group access the ongoing conversation and group context information. The social gateway uses real-time machine learning methods to identify relevant group context changes. A distance-separated user can then access these changes.
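A structural sketch of the group-context record and the detection rule follows; the turn-taking threshold, field types, and names are assumptions for illustration, not the system's actual data model.

```python
# Sketch of the group-context record a social gateway might publish when
# proximity plus turn taking indicates a face-to-face interaction.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GroupContext:
    participants: list        # identities of people in proximity
    activity: float           # the four social signals, as measured above
    engagement: float
    stress: float
    mirroring: float
    audio_ref: str = ""       # handle to the compressed audio stream

def detect_interaction(proximate_ids: list,
                       turn_taking_score: float,
                       signals: dict) -> Optional[GroupContext]:
    """Declare a face-to-face interaction when at least two people are in
    proximity and turn taking is clearly conversational (assumed cutoff)."""
    if len(proximate_ids) >= 2 and turn_taking_score > 0.5:
        return GroupContext(participants=proximate_ids, **signals)
    return None

if __name__ == "__main__":
    ctx = detect_interaction(["alice", "bob"], 0.8,
                             {"activity": 0.6, "engagement": 0.4,
                              "stress": 0.2, "mirroring": 0.3})
    print(ctx)
```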
A NEW LEVEL OF COMMUNICATION
Enabling machines to know social context will enhance many forms of socially aware communication. A simple use of social context is to provide people with feedback on their own interactions. Did you sound forceful during a negotiation? Did you sound interested when you were talking to your spouse? Did you sound like a good team member during the teleconference? Such feedback can potentially head off many unnecessary problems, as the "Face of Socially Aware Communication" sidebar describes.
The same sort of analysis can also be useful for robots and voice interfaces. Although word selection and dialog strategy are important to a successful human-machine interaction, our experiments and those of others show that social signaling could be even more important.
The Face of Socially Aware Communication

Four applications are being considered for commercial application:

• Mood Ring (aka "jerk-o-meter"). Women often complain that men don't pay attention to them when they talk on the phone. The Mood Ring is a cell phone application that monitors conversations between a husband and wife and alerts the husband with a special ring tone if he is sounding inattentive or uninterested.
• Comfort Connection. What most of us miss when dealing with a financial institution is a friendly, trustworthy human to talk to. Comfort Connection classifies your preferred style of interaction during an initial interview and then hooks you up with a service representative with whom you will feel comfortable working.
• Personal Trainer. One of the problems with the subconscious nature of social signals is that we are often unaware of how we sound to others. Consequently, we often fail to put our best foot forward, most commonly when we are confused or stressed—exactly when it matters most. The Personal Trainer is a mobile telephone application that provides feedback at the end of each telephone call about how you sounded: aggressive, friendly, interested, firm, or cooperative. This feedback is valuable in helping you learn to present yourself in the manner you intend.
• Winning Combination. Businesses depend on buying low and selling high, and in most businesses this means having purchase agents and sales agents who come out ahead of the competition. The difficulty is that the style of a particular agent is not optimal for all clients—for certain pairings the company agent will tend to come out ahead; for other pairings that person will tend to lose. Winning Combination classifies the speaking style of each purchase or sales agent and makes sure that the agent is paired with the right client.
What was that name?
An obvious use of social context is to help build social networks. At some time, nearly everyone has met an interesting person and then has lost that person's business card or forgotten that person's name. On the basis of an audio analysis and observations of body motion, our Uberbadge-based system9 can keep track of all interactions during which you seem interested in the other person and e-mail you the names and particulars of those individuals at the end of the day.
Building social capital
Social capital is the ability to leverage your social network by knowing who knows what and knowing to whom you should speak to get things done. It is perhaps the central social skill for any entrepreneurial effort, yet many people find it difficult. We are therefore building systems that can help a person build social capital.
One example is the Serendipity11 system, which is implemented on Bluetooth-enabled mobile phones and built on BlueAware, an application that scans for other Bluetooth devices in the user's proximity. When Serendipity discovers a new device nearby, it automatically sends a message to a social gateway server with the discovered device's ID. If it
finds a match, it sends a customized picture message to each user, introducing them to one another.
The real power of this system is that it can be used to create, verify, and better characterize relationships in online social network systems, such as Friendster or Orkut. If two people hang out after work, they are probably social friends. If they meet only at work or not at all, they are likely to have a very different relationship. The system can refine the relationship characterization by analyzing the social signaling that occurs during phone calls between the two people. The phone extracts the social signaling features as a background process so that it can provide feedback to the user about how that person sounded and to build a profile of the interactions the user had with the other person.
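A toy sketch of the gateway-side matchmaking step appears below; the profile representation, similarity measure, and threshold are all invented for illustration and are not Serendipity's actual matching logic.

```python
# Sketch of Serendipity-style matchmaking: the gateway compares interest
# profiles of two newly proximate devices and, above a threshold, sends
# both users an introduction.
def similarity(profile_a: set, profile_b: set) -> float:
    """Jaccard overlap between two sets of interest keywords."""
    if not profile_a or not profile_b:
        return 0.0
    return len(profile_a & profile_b) / len(profile_a | profile_b)

def on_device_discovered(device_id: str, nearby_id: str,
                         profiles: dict, threshold: float = 0.3):
    score = similarity(profiles[device_id], profiles[nearby_id])
    if score >= threshold:
        # Stand-in for sending the customized picture message.
        print(f"introducing {device_id} and {nearby_id} (match {score:.2f})")

if __name__ == "__main__":
    profiles = {"phone-17": {"jazz", "hiking", "ubicomp"},
                "phone-42": {"ubicomp", "jazz", "chess"}}
    on_device_discovered("phone-17", "phone-42", profiles)
```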
Staying in the loop
A major problem with distributed workgroups is keeping yourself in the loop. Socially mediated communications, such as GroupMedia,10 can help with this problem by patching people into important conversations. When it detects a potentially interesting conversation, the system notifies a distant group member. Whether or not a certain member receives notification depends on measured interest levels, direction of information flow, and group membership.
A distant group member who receives a notification has several options. These options include subscribing to the information and receiving the raw audio signal plus annotations of the social context, receiving a notification from the system only in case of especially interesting comments, or storing the audio signal with social annotations for later review.
Suppose, for example, most of your workgroup has gathered, the information flow is from the boss, and the interest level is high. You might be wise to patch into the audio and track the measured level of group interest for each participant's comments. The group context information and the linking-in notification that the system gateway provides can increase both the group cohesion and your understanding of the raw audio.
The same framework could also enhance the social life of close friends. Suppose two or three of your closest friends have discovered an amazing band at a bar and are having a great time. The system could detect the situation and, given appropriate prior authorization, automatically send you an invitation to join your friends. Although such a
system wouldn’t be to everyone’s taste, this idea generally gets a thumbs-up from college undergraduates.
Group dynamics
Social scientists have carefully studied how groups of people make decisions and the role of social context in that process. Unfortunately, they have found that socially mediated decision making has some serious problems, including group polarization, groupthink, and several other types of irrational behaviors that consistently undermine group decision making.2,4
Improving group function requires the ability to monitor the social communication and provide real-time intervention. Human experts—facilitators or moderators—can do that effectively, but to date machines have been blind to the social signals that are such an important part of a human group's function.
The challenge, then, is how to make a computer recognize social signaling patterns. In salary negotiations, for example, we found that lower-status individuals do better when showing more mirroring, which communicates that they are team players. In a potential dating situation, the key variable was the female's activity level, which indicated interest. By knowing that certain signaling patterns reliably lead to these desired states, the computer can begin to gently guide the conversation to a happy ending by providing timely feedback.
Similarly, the ability to measure social variables like interest and trust ought to enable more productive discussions, while the ability to measure social competition offers the possibility of reducing problems like groupthink and polarization. If the computer can measure the early signs of problems, it can intervene before the situation becomes unsalvageable.
To explore these ideas, every student in my Digital Anthropology seminar used a GroupMedia system so that our team could analyze the group interaction.12 Real-time displays of participant interaction could be generated and publicly displayed to reflect the roles and dyadic relationships within a class. In Figure 4, the advisees (s2, s7, s8) have a high probability of conceding the floor to their professor (s9).
This type of analysis can help develop a deeper discussion. Comments that give rise to wide variations in individual reaction can cause the discussion to focus on the reason for the disparity, and those interested can retrieve these controversial topics for further analysis and debate later. The analysis also permits the clustering of opinions
and comments using collaborative filtering. In this way, people can readily see opinion groupings, which sets the stage for inter- and intragroup debates.
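The floor-concession statistic that drives displays like Figure 4 can be estimated from a sequence of speaker turns. The sketch below shows one simple way, assuming turn labels have already been extracted from the audio.

```python
# Sketch of the floor-concession statistic visualized in Figure 4: from a
# sequence of speaker labels, estimate P(next speaker = j | current = i).
from collections import Counter

def concession_probs(speaker_sequence: list) -> dict:
    """Map (i, j) pairs to the probability that i concedes the floor to j."""
    transitions = Counter(zip(speaker_sequence, speaker_sequence[1:]))
    totals = Counter(speaker_sequence[:-1])
    return {(i, j): n / totals[i]
            for (i, j), n in transitions.items() if i != j}

if __name__ == "__main__":
    turns = ["s9", "s2", "s9", "s7", "s9", "s8", "s9", "s2", "s9"]
    for (i, j), p in sorted(concession_probs(turns).items()):
        print(f"P({i} -> {j}) = {p:.2f}")
```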
Figure 4. Display of group dynamics between professor (s9) and students during an experiment to study how a group functions. Each student in the seminar received a GroupMedia system, which analyzed the class member's interactions on the basis of activity level, group interest, and turn-taking patterns. Circle size reflects speaking time; the width of the link lines reflects the probability that the person will concede the floor. The shading within a circle reflects that person's interest level; the darker the shading, the higher the interest. Thicker circle borders denote groups.

Personal relationships
Social awareness may also be able to help reinforce family ties, an important capability in this age of constant mobility. Sensing when family members have had an unusually good, or unusually bad, experience can promote supportive communication between them. In one version, the system would randomly leave phone messages reminding family members to call each other. However, when it senses that there has been an unusual experience—a serious argument, an especially fun conversation, or an unusually intense meeting—the system would leave reminders for others to call. The system would not tell people exactly why they should call, because doing so could violate people's privacy. Instead, the reminders would strengthen the family network by encouraging conversations precisely when family members are most likely to appreciate them.

Social signaling seems to provide an independent channel of communication, one that is quantifiable and can provide an important new dimension of communication support. The implications of a system that can measure social context are staggering for a mobile, geographically dispersed society. Propagating social context could transform distance learning, for example, letting users become better integrated into ongoing projects and discussions, and thus improving social interaction, teamwork, and social networking. Teleconferencing might become more reflective of actual human contact, since participants could quantify the communication's value. Automatic help desks might be able to abandon their robotic, information-only delivery or their inappropriately cheerful replies.
Our current systems are just a first step toward generally useful communications tools. We must increase the reliability of our social context measurements and learn how to better use them to modulate communication. Much of our ongoing research is focusing on building meaningful mathematical models for estimating social variables and experimentally validating their use in a distance collaboration framework.
Considering the personal and societal effects of socially aware communications systems brings to
mind Marshall McLuhan’s “the medium is the message.” By designing systems that are aware of human social signaling, and that adapt themselves to human social context, we may be able to remove the medium’s message and replace it with the traditional messaging of face-to-face communication. Just as computers are disappearing into clothing and walls, the otherness of communications technology might disappear as well, leaving us with organizations that are not only more efficient, but that also better balance our formal, informal, and personal lives. Assimilation into the Borg Collective might be inevitable, but we can still make it a more human place to live. ■
Acknowledgments
I thank my collaborators—Joost Bonsen, Jared Curhan, David Lazar, Carl Marci, M.C. Martin, and Joe Paradiso—and my current and former students—Sumit Basu, Ron Caneel, Tanzeem Choudhury, Wen Dong, Nathan Eagle, Jon Gips, Anmol Madan, and Mike Sung—for all the hard work and creativity they have added to this project. Thanks also to Deb Roy, Judith Donath, Roz Picard, and Tracy Heibeck for insightful comments and feedback. Parts of this article have appeared on Edge.org and in Proc. IEEE Int'l Conf. Developmental Learning.
References 1. A. Pentland, “Social Dynamics: Signals and Behavior,” Proc. Int’l Conf. Developmental Learning, IEEE Press, 2004; http://hd.media.mit.edu. 2. C. Nass and S. Brave, Voice Activated: How People Are Wired for Speech and How Computers Will Speak with Us, MIT Press, 2004. 3. N. Ambady and R. Rosenthal, “Thin Slices of Expressive Behavior as Predictors of Interpersonal Consequences: A Meta-Analysis,” Psychological Bull., vol. 111, no. 2, 1992, pp. 256-274. 4. R. Brown, Group Polarization in Social Psychology, 2nd ed., Free Press, 1986. 5. M. Gladwell, The Tipping Point: How Little Things Can Make a Big Difference, Little Brown, 2000. 6. J. Jaffe et al., “Rhythms of Dialogue in Early Infancy,” Monographs of the Soc. for Research in Child Development, vol. 66, no. 2, 2001. 7. T. Chartrand and J. Bargh, “The Chameleon Effect: The Perception-Behavior Link and Social Interaction,” J. Personality and Social Psychology, vol. 76, no. 6, 1999, pp. 893-910. 8. T. Choudhury, “Sensing and Modeling Human Networks,” PhD dissertation, Dept. Media Arts and Sciences, MIT, 2003; http://hd.media.mit.edu.
9. M. Laibowitz and J. Paradiso, "The UberBadge Project," 2004; www.media.mit.edu/resenv/projects.html.
10. A. Madan, R. Caneel, and A. Pentland, "GroupMedia: Distributed Multimodal Interfaces," 2004; http://hd.media.mit.edu.
11. N. Eagle and A. Pentland, "Social Serendipity: Proximity Sensing and Cueing," 2004; http://hd.media.mit.edu.
12. N. Eagle and A. Pentland, "Social Network Computing," LNCS 2864, Springer-Verlag, 2003, pp. 289-296; http://hd.media.mit.edu.
Alex (Sandy) Pentland is the Toshiba Professor of Media Arts and Sciences at MIT and the former academic head of the MIT Media Lab. His work encompasses wearable computing, communications technology for developing countries, human-machine interfaces, artificial intelligence, and machine perception. A cofounder of the IEEE Computer Society's Wearable Information Systems Technical Committee and the IEEE Computational Intelligence Society's Autonomous Mental Development Technical Committee, Pentland has received numerous awards in the arts, engineering, and sciences. Contact him at [email protected].
COVER FEATURE
Designing Smart Artifacts for Smart Environments

Smart artifacts promise to enhance the relationships among participants in distributed working groups, maintaining personal mobility while offering opportunities for the collaboration, informal communication, and social awareness that contribute to the synergy and cohesiveness inherent in collocated teams.
Norbert A. Streitz, Carsten Röcker, Thorsten Prante, Daniel van Alphen, Richard Stenzel, and Carsten Magerkurth, Fraunhofer IPSI, Darmstadt, Germany
An integral part of our environment, computers contribute to the social context that determines our day-to-day activities while at the office, on the road, at home, or on vacation. The widespread availability of devices such as desktop and laptop computers has fueled our increasing dependency on a wide range of computing services. The technological advances that underlie the laptop, PDA, or cell phone also provide the foundation for nontraditional computer-based devices such as interactive walls, tables, and chairs—examples of roomware components that provide new functionality when combined with innovative software.1
Two complementary trends have resulted in the creation of smart environments that integrate information, communication, and sensing technologies into everyday objects.2 First, continual miniaturization has resulted in computers and related technological devices that are small enough to be nearly invisible. Although they are not visible, these devices still permeate many artifacts in our environment. Second, researchers have augmented the standard functionality of everyday objects to create smart artifacts constituting an environment that supports a new quality of interaction and behavior.
In our work, we distinguish between two types of smart artifacts: system-oriented, importunate smartness and people-oriented, empowering smartness.
System-oriented, importunate smartness creates an environment in which individual smart artifacts
or the environment as a whole can take certain selfdirected actions based on previously collected information. For example, a space can be smart by having and exploiting knowledge about the persons and artifacts currently situated within its borders, for example, how long they have occupied the space and what actions they have performed while in it. In this version of smartness, the space would be active, in many cases even proactive. It would make decisions about what to do next and actually execute those actions without a human in the loop. In a smart home, for example, the control system automatically performs functions such as adjusting the heating system and opening or closing the windows and blinds. In some cases, however, these actions could be unwelcome or ill-timed. Consider a smart refrigerator that analyzes the occupants’ consumption patterns and autonomously orders replacements for depleted menu items. Although we might appreciate suggestions for recipes we can make with the food that is currently available, we would probably resent a smart refrigerator that ordered food automatically that we could not consume because of circumstances beyond the refrigerator’s knowledge such as an unanticipated absence or illness. In contrast, people-oriented, empowering smartness places the empowering function in the foreground so that “smart spaces make people smarter.” This approach empowers users to make decisions and take mature and responsible actions.
In this case, the system also collects and aggregates data about what goes on in the space, but it provides and communicates this information intuitively so that ordinary people can comprehend and determine the system's subsequent actions. This type of smart space might make suggestions based on the information collected, but users remain in the loop and can always decide what to do next.
This type of system supports its occupants' smart, intelligent behavior. In an office scenario, for example, the smart space could recommend that current occupants consult with others who worked on the same content while occupying the same space earlier or it could direct them to look at related documents created earlier in the same space.
The system-oriented and people-oriented approaches represent the end points of a line along which we can position weighted combinations of both types of smartness depending on the application domain. Although in some cases it might be more efficient if the system does not ask for a user's feedback and confirmation at every step in an action chain, the overall design rationale should aim to keep the user in the loop and in control whenever possible.
FROM INFORMATION TO EXPERIENCE
Much work on smart things and environments focuses on intelligently processing the data and information that supports factory and home control and maintenance tasks or productivity-oriented office tasks. We considered another promising dimension, however: designing experiences via smart spaces. We sought to design smart artifacts that users can interact with simply and intuitively in the overall environment. This includes extending awareness about the physical and social environment by providing observation data and parameters that—in many cases—are invisible to unaugmented human senses. Revealing this information thus enables new experiences.
This process of capturing and communicating invisible parameters is applicable both to known existing action contexts and to newly created situations and settings. Known examples include pollution or computer network traffic data that usually escapes detection by the human senses.3 Presenting this data can provide a new experience that gives people a deeper sense of what occurs around them. Depending on the particular application, this capability could raise public awareness and potentially trigger changes in people's behavior.
Our work in creating augmented social architectural spaces in office settings culminated in the Ambient Agoras environment (www.ambientagoras.org).4 We are now applying this knowledge to other domains, including interactive hybrid games, home entertainment, and extended home environments. We focus here on computer-based support for activities beyond direct productivity, in particular informal communication and social interaction between local and remote teams in an organization that are working at different but connected sites.5,6 Because these activities are important to an organization's overall progress and success, they merit more technology-based support.
Ambient Agoras
Within this overall context, we used the Ambient Agoras environment as a test bed for developing future applications of ubiquitous and ambient computing in smart workspaces. This required a wide range of expertise and a highly interdisciplinary approach involving not only computer scientists and electrical engineers but also psychologists, architects, and designers.
Fraunhofer IPSI in Darmstadt, Germany, provided the scientific, technical, and administrative project coordination. Fraunhofer's Ambiente Research Division also employed product designers and architects to develop some of the artifacts. Electricité de France, the French electrical power utility, served as the consortium's user organization. As part of its R&D division, the Laboratory of Design for Cognition in Paris provided the test bed for the evaluation studies and contributed to the observation and participatory design methods. Wilkhahn, a German office furniture manufacturer, contributed to the design and development of some artifacts, leveraging its experience in designing the second generation of Roomware components developed in cooperation with Fraunhofer IPSI in the Future Office Dynamics consortium.7
Social marketplace of ideas and information
We chose as the guiding metaphor for our work the Greek agora, a marketplace. In line with this, we investigated how to turn everyday places into social marketplaces of ideas and information where people could meet and interact. In our particular context, we addressed the office environment as an integrated organization located
in a physical environment and having particular information needs, both at the organization's collective level and at the worker's personal level. Overall, we sought to augment the architectural envelope to create a social architectural space that supports collaboration, informal communication, and social awareness. We achieved this by providing situated services and place-relevant information that communicate the feeling of a place to users. Further, augmented physical artifacts help promote individual and team interactions in the physical environment.
Specifically, we used a scenario-based approach, starting with a large number of so-called bits of life—short descriptions of functionalities, situations, events, and so on—that we aggregated into scenarios and presented to focus groups using visual aids such as video mock-ups. This, in combination with extensive conceptual work based on different architectural theories,8 served as the basis for developing a wide range of smart artifacts and their corresponding software that, together, provided users with smart services. Design, development, and evaluation followed an iterative and rapid-prototyping approach.
For the Ambient Agoras environment, we coupled several interaction design objectives, including the disappearance and ubiquity of computing devices; sensing technologies such as active and passive RFID; smart artifacts such as walls, tables, and mobile devices; and ambient displays. We then investigated the functionality of two or more artifacts working together. In particular, we addressed the

• support of informal communication in organizations, both locally and between remote sites;
• role and potential of ambient displays in future work environments; and
• combination of more or less static artifacts integrated in the architectural environment with mobile devices carried by people.
Mobility and informal communication
Several trends are changing how large organizations work. For example, organizations increasingly organize work around teams that change dynamically in response to the temporary nature of projects. People working in these organizations also experience a large degree of personal mobility in two dimensions:
• local mobility within the office building, a result of new office concepts such as the loss of personal office space due to shared-desk policies and wide-open office landscapes with movable walls and furniture that can be adapted on the fly to changing requirements and new project-team constellations; and
• global mobility, achieved by using mobile technologies while traveling or working at different sites at the municipal, regional, national, or international level.
Although increased mobility offers several benefits, it also has implications that demand new responses, especially at the global mobility level. At the local level, the usually recognized channels of communication between people working together include face-to-face conversations, formal meetings, phone conversations, e-mail messages, and document sharing. In addition, informal communication includes interactions such as chance encounters at the copying machine, hallway chats, and conversations while relaxing in the lounge. These interactions help participants stay on top of things, anticipate future developments in the organization, and exchange gossip and rumors. Like explicit verbal communication, implicit communication occurs in terms of a mutual awareness through which people can determine who's who and assess their coworkers' overall mood and morale. Design recommendations for the workplace frequently conclude that both informal awareness about ongoing activities in the local work environment and a sense of community play vital roles in the workplace.9 Teams that share the same physical environment generally benefit from increased informal awareness because the team members have higher mobility within the shared workspace.
When looking at global mobility, the situation changes fundamentally. The increased mobility of team members usually leads to poor communication and lack of group cohesion, which negatively affects the teams' performance. This holds true for individual global mobility caused by intensive traveling and for group global mobility in the case of distributed teams that have subgroups working at different sites. One empirical study that addressed this topic confirmed the trend toward the formation of virtual teams, but noted that such teams reduced interpersonal relations to a minimum.10 Further, this study showed that it is exactly these relationships between team members that have the strongest effect on performance and work satisfaction.
The Disappearing Computer
The European Commission funded The Disappearing Computer, a proactive research initiative launched by the Future and Emerging Technology section of the Information Society Technology program. The initiative seeks "to explore how everyday life can be supported and enhanced through the use of collections of interacting smart artifacts. Together, these artifacts will form new people-friendly environments in which the 'computer-as-we-know-it' has no role."
The initiative has three main objectives:
• developing new tools and methods for embedding computation in everyday objects to create smart artifacts;
• investigating how new functionality and new uses can emerge from collections of interacting artifacts; and
• ensuring that people's experience of these environments is both coherent and engaging in space and time.
These objectives require research in ambient intelligence, pervasive and ubiquitous computing, and new forms of human-computer interaction. Researchers have undertaken a cluster of 17 related projects under the umbrella theme of The Disappearing Computer initiative to pursue these three objectives. The Ambient Agoras project is one of them.
For more information about The Disappearing Computer initiative, visit www.disappearing-computer.net or contact Norbert Streitz, chair of the DC-Net Steering Group.
The poor communication and lack of group cohesion often experienced in virtual and distributed teams have considerable negative effects on team performance.11 Building on this work, we developed the constituents for a smart environment in a corporate setting that augments existing local and distributed architectural spaces, transforming them into spaces that make people "smarter" by supporting social awareness and informal communication.
POPULATING AMBIENT AGORAS
Each of the artifacts and software components we developed to populate the Ambient Agoras smart spaces, including InfoRiver, InforMall, and the SIAM-system, meets different aspects of our overall design goals.12 We focus here on the Hello.Wall, ViewPort, and Personal Aura.
Calm technology
While working on the ideas embodied in Ambient Agoras, we set a complementary goal for implementing the technology. We felt that the implementation should correspond to and be compatible with the nature of informal communication, social awareness, and team cohesion. Our conceptual analysis, combined with information gathered from focus groups, showed that traditional approaches to communicating using desktop technology did not achieve this goal and would not meet expectations.
Therefore, we took a different route based on the notion of ambient displays and lightweight support with mobile devices. An observation by Mark Weiser helped inspire our decision to move the computer to the background and develop a calm technology: "The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it."13 The "The Disappearing Computer" sidebar describes Weiser's influence on proactive research exploring how smart artifacts can support and enhance everyday life in a people-friendly environment in which there is no role for the "computer as we know it."
Ambient displays
We decided that a calm, ambient technology best supports the informal social encounters and communication processes within a corporate building. The ambient displays that exemplify this approach go beyond the traditional notions of the typical displays found on PCs, notebooks, PDAs, and even many interactive walls or tables. Some ambient displays employ nature-like metaphors to present information without constantly demanding the user's full attention. They usually achieve this implicitly by making the displays available in the periphery of attention. Designers envision that ambient displays will spring up all around us, moving information off conventional screens and into the physical environment. They will present information via changes in light, sound, object movement, smell, and so on. Hiroshi Ishii and his colleagues at the MIT Media Lab developed several early examples of this technology.3,14
Given that awareness of people's activities can strengthen social affiliations, ambient displays can be used to trigger the attention of team members subtly and peripherally by communicating a location's atmosphere, thus providing a sense of place. Ambient displays provide only one aspect of the implementation, however. Another aspect is sensing people and collecting the parameters relevant to achieving the goal of providing location- and situation-based services.
Hello.Wall
We developed the Hello.Wall, our version of an ambient display, for the Ambient Agoras environment. This 1.8-meter-wide by 2-meter-high compound artifact has integrated light cells and sensing technology. As Figure 1 shows, this display facilitates communication via dynamically changing light patterns.
The current version uses 124 light-emitting cells organized in an eight-row array structure. A standard computer hidden in the background uses a special driver interface to control the Hello.Wall artifact. To adjust the LED clusters' brightness, we developed a new control unit that uses pulse-width modulation. The system's general design captures a range of parameters as input and maps them to a wide range of output patterns.
The Hello.Wall provides awareness and notifications to people passing by or watching it. Different light patterns correspond to different types of information. Using abstract patterns makes it possible to distinguish between public and private or personal information. Although everyone knows the meaning of public patterns and can therefore interpret them easily, only the initiated can access the meaning of personal patterns. This makes it possible to communicate personal messages and information in a public space without worry that others will catch their meaning.
In the Ambient Agoras environment, the Hello.Wall functions as an ambient display that transmits organization-oriented information publicly and information addressed to individuals privately. We can think of it as an organism that radiates the breath of an organization's social body, making it perceivable to the organization's members on the inside as well as others on the outside. The Hello.Wall does more than communicate information and atmosphere, however—its appearance also affects the atmosphere of a place and thus influences the mood of the social body around it. While the artifact serves a dedicated informative role for initiated members of the organization, visitors might consider it simply an atmospheric decorative element and enjoy its aesthetic quality.
As an integral part of the physical environment, the Hello.Wall constitutes a seeding element of a social architectural space that provides awareness to the members of an organization. In this way, the Hello.Wall is a piece of unobtrusive, calm technology that exploits people's ability to perceive information via codes. It can stay in the background, at the periphery of attention, while those around it concern themselves with another activity, such as a face-to-face conversation. The Hello.Wall's unique blend of unobtrusive, calm technology and its continual display of high-quality aesthetic patterns make it informative art.15
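The article does not publish the Hello.Wall's driver interface, but the control flow it describes (sensed parameters in, light patterns and pulse-width-modulated brightness out) can be sketched roughly as follows. All class and method names here are hypothetical, and the thresholds are illustrative only.

```java
// Hypothetical sketch of the Hello.Wall control loop: sensed parameters are
// mapped to an abstract pattern class, and each LED cluster's brightness is
// set as a pulse-width-modulation duty cycle.
public class HelloWallSketch {

    enum Pattern { PRESENCE, MOOD, PERSONAL_NOTIFICATION, INTERACTION }

    static final int ROWS = 8;
    static final int CELLS = 124;   // light-emitting cells, per the article

    /** Map sensed input parameters to the pattern class to display. */
    static Pattern choosePattern(int peopleNearby, boolean identifiedPerson,
                                 boolean directInteraction) {
        if (directInteraction)  return Pattern.INTERACTION;
        if (identifiedPerson)   return Pattern.PERSONAL_NOTIFICATION;
        if (peopleNearby > 0)   return Pattern.PRESENCE;
        return Pattern.MOOD;    // ambient default
    }

    /** Set one cell's brightness as a PWM duty cycle in [0, 100]. */
    static void setCellBrightness(int cell, int dutyCyclePercent) {
        // The real system drives the LED clusters through a control unit;
        // here we only print what would be sent to it.
        System.out.printf("cell %d -> duty cycle %d%%%n", cell, dutyCyclePercent);
    }

    public static void main(String[] args) {
        Pattern p = choosePattern(3, false, false);
        for (int cell = 0; cell < CELLS; cell++) {
            int row = cell % ROWS;
            // Simple presence pattern: brightness graded by row.
            setCellBrightness(cell, p == Pattern.PRESENCE ? (row + 1) * 12 : 30);
        }
    }
}
```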
Sensing and different zones of interaction
Beyond developing a new ambient display, we also sought to make the type of information and how it is communicated context-dependent.
Figure 1. Hello.Wall. The ambient display combines unobtrusive, calm technology and a continual display of high-quality aesthetic patterns to convey the idea of turning everyday spaces into agoras—social marketplaces where people can meet and interact.
Figure 2. Communication zones. Depending on the distance from the display, the Hello.Wall has three communication zones: ambient, notification, and interaction.
The artifact should provide services that are location- or situation-based, depending on the proximity of people passing by. As Figure 2 shows, depending on the distance from the display, the Hello.Wall has three different communication zones: ambient, notification, and interaction. To cover different ranges, we used integrated sensors that can be adapted according to the surrounding spatial conditions. Using these sensors introduces a distance-dependent semantic, which implies that the distance of an individual from the smart artifact defines the kind of information shown and the interaction offered.
People passing through the ambient zone contribute to and experience the ambient patterns continuously displayed on the Hello.Wall. These patterns concern, for example, general presence information. People in the notification zone are identified as individuals and agree to have the Hello.Wall-enriched environment react to their personal presence. This can result in personal notification patterns being displayed on the Hello.Wall. People in the interaction zone can get directly involved with the Hello.Wall environment. The artifact reflects this by showing special interaction patterns.
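A minimal sketch of this distance-dependent semantic follows, assuming the sensing layer can report a person's distance from the wall. The zone thresholds are invented for illustration and do not come from the article.

```java
// Classify a person's distance from the Hello.Wall into one of the three
// communication zones described above. Thresholds are illustrative.
public class ZoneModel {

    enum Zone { AMBIENT, NOTIFICATION, INTERACTION }

    static Zone classify(double distanceMeters) {
        if (distanceMeters < 1.5) return Zone.INTERACTION;
        if (distanceMeters < 4.0) return Zone.NOTIFICATION;
        return Zone.AMBIENT;
    }

    public static void main(String[] args) {
        System.out.println(classify(0.8));  // INTERACTION
        System.out.println(classify(2.5));  // NOTIFICATION
        System.out.println(classify(6.0));  // AMBIENT
    }
}
```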
ViewPort
We designed a complementary mechanism for the Hello.Wall that can "borrow" the displays of other artifacts to communicate additional information that complements the Hello.Wall's display. As Figure 3 shows, each of these mobile ViewPorts consists of a WLAN-equipped, PDA-like handheld device based on commercially available components that are mapped to a new form factor. Furthermore, we integrated RFID readers and transponders so that a ViewPort can sense other artifacts and be sensed itself. The Hello.Wall can borrow the ViewPort's display to privately show more explicit and personal information that can be viewed only on a personal or temporarily personalized device. Depending on access rights and current context, people can use ViewPorts to learn more about the Hello.Wall, to decode visual codes on the wall, or to access a message announced by a code.
Figure 3. ViewPort. (a) The implemented prototype and (b) a design prototype showing the next version’s new form factor.
Personal Aura
People adopt different social roles in daily life, such as mother, client, or customer. In some situations, communication reveals a lot about a person, while in others it reveals very little. In a corporate organization, employees have different professional roles that might change even during the course of a single workday. An employee can be the project manager on one team and later participate in another meeting as a regular task-force member. Based on these considerations, we wanted to provide a similar mechanism for sensor-based environments. We sought to design an easy and intuitive interface that would let users control their appearance in a smart environment. They could decide whether to be visible to a tracking system and, if so, they could control the social role in which they appeared. This mechanism contributes to the growing discussion of privacy issues that the implementation of smart environments has generated.16
Figure 4 shows the Personal Aura, our first instantiation of this concept. The artifact consists of two matching parts: the reader module and the ID stick, which contains a unique identity and optional personal information.
Figure 4. Personal Aura. (a) The reader module and two ID sticks, (b) connecting reader module and ID stick, and (c) an active Personal Aura.
Each person has multiple ID sticks, with each stick symbolizing a different role. If people want to signal their availability in the connecting-remote-teams application, for example, they can do so by connecting a specific ID stick to the reader module.
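The Personal Aura's behavior can be summarized in a few lines: at most one ID stick is connected at a time, so at most one role is visible to the tracking system, and disconnecting makes the wearer invisible. The class and method names below are hypothetical.

```java
// Hedged sketch of the Personal Aura concept: connecting an ID stick to the
// reader module publishes exactly one chosen role to the tracking system.
import java.util.Optional;

public class PersonalAura {

    record IdStick(String personId, String role) {}  // unique identity + role

    private Optional<IdStick> connected = Optional.empty();

    /** Connecting a stick makes exactly one role visible to the environment. */
    void connect(IdStick stick) { connected = Optional.of(stick); }

    /** Disconnecting makes the person invisible to RFID-based tracking. */
    void disconnect() { connected = Optional.empty(); }

    Optional<IdStick> visibleIdentity() { return connected; }

    public static void main(String[] args) {
        PersonalAura aura = new PersonalAura();
        aura.connect(new IdStick("p-17", "project-manager"));
        System.out.println(aura.visibleIdentity());  // visible as project manager
        aura.disconnect();
        System.out.println(aura.visibleIdentity());  // Optional.empty: untracked
    }
}
```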
CONNECTING REMOTE TEAMS
Aside from opportunistic chance encounters in the hallway, gathering in a lounge area offers people the highest accessibility to informal communication. Although a person's mood and availability for participating in a chat can be detected easily in a face-to-face situation, identifying similar information in a remote setting is difficult. People must be called or e-mailed to determine their receptiveness to an encounter. When they use standard videoconferencing systems, people usually must plan the encounter and prepare the setup in advance.
To evaluate how the Hello.Wall and its supporting artifacts could facilitate communication between two remote teams, we ran a living-lab evaluation in the fall of 2003. This scenario addressed the issue of extending awareness information and facilitating informal communication from within a corporate building to distributed teams working at remote sites. We built and installed Hello.Wall ambient displays and the corresponding sensing infrastructures in two lounge spaces, one at EDF-LDC in Paris and the other at Fraunhofer IPSI in Darmstadt. Figure 5 shows the Hello.Wall in one of the lounge areas. We used different media to allow the continuous exchange of information about the availability of people for chance encounters and to provide a starting point for initiating spontaneous video-based communication between the two remote sites.
We mapped the zone model to the floor plans of the lounges at each site. While people in the ambient zone only contributed to the ambient presence patterns, people entering the notification zone were identified via their Personal Aura, and their personal sign was displayed on the Hello.Wall at the opposite remote lounge space. Thus, the Hello.Wall continuously presented a combination of patterns communicating what was going on at the remote site. People could perceive this information in an ambient way without having to explicitly focus their attention on it. When they became aware of the presence of particular people and had the feeling that it was a good time to engage in a spontaneous encounter, people needed a way to communicate their interest in an
intuitive way. The request was triggered by pushing a button, which resulted in a specific pattern that overrode all other patterns on the Hello.Wall at the remote site. The remote site could reject the request or accept the invitation, after which the informal video-based communication could proceed. We used dynamic light patterns to communicate different types of information: the presence and number of people at the opposite site, their general mood, the presence and availability of specific team members, and their interest in communicating with the remote team.
Figure 5. Hello.Wall in lounge area. The experimental setting consisted of two lounge areas, each enhanced with a Hello.Wall: one at EDF-LDC in Paris and the other at Fraunhofer IPSI in Darmstadt, Germany.
We designed a specific pattern language that distinguishes among
• ambient patterns, representing general information like mood and presence;
• notification patterns, handling individual or personalized messages; and
• interaction patterns, handling direct communication requests, such as a request for engaging in spontaneous video communication with a remote team member.
To give them an aesthetically pleasing and nonmonotonous appearance, we purposely designed the patterns to appear abstract. The Hello.Wall continuously displays these dynamic patterns as they interweave with each other. To reduce complexity and facilitate peripheral perception, as Figure 6 shows, the wall displays the presence and mood patterns at only three levels—low, medium, and high. In addition, the Hello.Wall can apply overlays to these patterns. Static personal signs display when a specific team member appears in the lounge area. Figure 7 shows that each person has a specific sign, controlled by the Personal Aura. As a dedicated example of privacy-enhancing technology, the Personal Aura provides users with control over RFID-based identification in smart environments.6
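Because an interaction request "overrode all other patterns," the pattern language implies a simple precedence order among the three classes. The sketch below encodes that reading; the priority values and level handling are assumptions, not published design details.

```java
// Sketch of the pattern language's precedence: interaction overrides
// notification, which overrides ambient, matching the behavior described
// in the living-lab study.
public class PatternScheduler {

    enum Level { LOW, MEDIUM, HIGH }

    enum PatternClass {
        AMBIENT(0), NOTIFICATION(1), INTERACTION(2);
        final int priority;
        PatternClass(int p) { priority = p; }
    }

    record ActivePattern(PatternClass cls, Level level) {}

    /** The wall shows the highest-priority active pattern. */
    static ActivePattern select(java.util.List<ActivePattern> active) {
        return active.stream()
                .max(java.util.Comparator.comparingInt(a -> a.cls().priority))
                .orElseThrow();
    }

    public static void main(String[] args) {
        var active = java.util.List.of(
                new ActivePattern(PatternClass.AMBIENT, Level.MEDIUM),
                new ActivePattern(PatternClass.INTERACTION, Level.HIGH));
        System.out.println(select(active));  // the interaction request wins
    }
}
```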
Figure 6. Hello.Wall patterns. The patterns express (a) three different levels of mood and (b) three different levels of presence—low, medium, and high.
Figure 7. Static personal signs. The Personal Aura privacy-enhancing technology controls different personal signs, each indicating a specific person's presence and role.

Using questionnaires to provide feedback revealed that our approach was effective in facilitating workplace awareness and group communication. Our evaluation demonstrated that participants could learn to identify and interpret the Hello.Wall patterns correctly in a short period of time. The participants indicated that they perceived the Hello.Wall as an appropriate means of establishing awareness of people who were working at a remote site, thus overcoming the isolation of not being physically present without causing privacy problems. The study participants described the Hello.Wall as providing a playful experience while interacting with the remote team. They commented that, due to the Hello.Wall, interactions with the remote site took place more often, spontaneous videoconference interactions were less formal, and videoconferencing became a daily routine. The Hello.Wall patterns and their smooth movements when flowing over the display were considered aesthetically pleasing. People mentioned that the Hello.Wall caused positive feelings and induced a good mood.
Our future work will exploit the results gained in this study as we focus on building awareness support for distributed remote home environments in a new EU-funded project, Amigo–Ambient Intelligence for the Networked Home Environment (www.ipsi.fraunhofer.de/ambiente/amigo). ■
Acknowledgments
We thank the European Commission for its extensive support of The Disappearing Computer initiative (contract IST-2000-25134). Thanks also to our EDF partners: Saadi Lahlou and his team, DALT, Wilkhahn, wiege, Daniela Plewe, Sebastian Lex, and the members and students in the Ambiente Research Division (www.ipsi.fraunhofer.de/ambiente) at Fraunhofer IPSI. For more information about the Disappearing Computer initiative, visit www.disappearing-computer.net.
References
1. T. Prante, N. Streitz, and P. Tandler, "Roomware: Computers Disappear and Interaction Evolves," Computer, Dec. 2004, pp. 47-54.
2. N. Streitz and P. Nixon, "The Disappearing Computer," Comm. ACM, Mar. 2005, pp. 33-35.
3. C. Wisneski et al., "Ambient Displays: Turning Architectural Space into an Interface between People and Digital Information," Cooperative Buildings: Integrating Information, Organization, and Architecture, N. Streitz, S. Konomi, and H. Burkhardt, eds., LNCS 1370, Springer-Verlag, 1998, pp. 22-32.
4. N. Streitz et al., "Ambient Displays and Mobile Devices for the Creation of Social Architectural Spaces: Supporting Informal Communication and Social Awareness in Organizations," Public and Situated Displays: Social and Interactional Aspects of Shared Display Technologies, K. O'Hara et al., eds., Kluwer, 2003, pp. 387-409.
5. T. Prante et al., "Connecting Remote Teams: Cross-Media Integration to Support Remote Informal Encounters," Adjunct Proc. 6th Int'l Conf. Ubiquitous Computing (UbiComp 04), Univ. Nottingham, UK, 2004.
6. C. Röcker et al., "Using Ambient Displays and Smart Artifacts to Support Community Interaction in Distributed Teams," Proc. OZCHI 2004, Univ. Wollongong, Australia, 2004.
7. N. Streitz et al., "Roomware: Toward the Next Generation of Human-Computer Interaction Based on an Integrated Design of Real and Virtual Worlds," Human-Computer Interaction in the New Millennium, J. Carroll, ed., Addison-Wesley, 2001, pp. 553-578.
8. C. Alexander, S. Ishikawa, and M. Silverstein, A Pattern Language: Towns, Buildings, Construction, Oxford University Press, 1977.
9. J. Tanis and F. Duffy, "A Vision of the New Workplace Revisited," Site Selection, Sept. 1999, pp. 805-814.
10. J. Lurey and M. Raisinghani, "An Empirical Study of Best Practices in Virtual Teams," Information & Management, vol. 38, 1999, pp. 523-544.
11. R. Blake, J. Mouton, and A. McCanse, Change by Design, Addison-Wesley, 1989.
12. T. Prante et al., "Ambient Agoras: InfoRiver, SIAM, Hello.Wall," Proc. Human Factors in Computing Systems (CHI 2004), ACM Press, 2004, pp. 763-764.
13. M. Weiser, "The Computer for the 21st Century," Scientific American, Sept. 1991, pp. 94-104.
14. H. Ishii et al., "AmbientROOM: Integrating Ambient Media with Architectural Space," Proc. Human Factors in Computing Systems (CHI 98), ACM Press, 1998, pp. 173-174.
15. J. Redström, T. Skog, and L. Hallnäs, "Informative Art: Using Amplified Artworks as Information Displays," Proc. Designing Augmented Reality Environments (DARE 2000), ACM Press, 2000, pp. 103-114.
16. S. Lahlou, M. Langheinrich, and C. Röcker, "Privacy and Trust Issues with Invisible Computers," Comm. ACM, Mar. 2005, pp. 59-60.
Norbert A. Streitz is head of the Ambiente–Smart Environments of the Future research division at Fraunhofer IPSI, Darmstadt, Germany, where he also teaches in the Department of Computer Science at the Technical University Darmstadt. His research interests include cognitive science, computer-supported cooperative work, and interaction design for ubiquitous computing. Streitz received a PhD in physics from the University of Kiel and a
PhD in psychology from the Technical University RWTH Aachen. He chairs the steering group of the EU-funded The Disappearing Computer initiative. Contact him at
[email protected].
Carsten Röcker is a PhD candidate in computer science at the Technical University Darmstadt and a scientific staff member in the Ambiente–Smart Environments of the Future research division at Fraunhofer IPSI, Darmstadt. His research interests include ubiquitous and ambient computing and awareness and privacy in sensor-based environments. Contact him at
[email protected].
Thorsten Prante is the deputy head of the Ambiente–Smart Environments of the Future research division at Fraunhofer IPSI, Darmstadt, where he also teaches in the Department of Computer Science at the Technical University Darmstadt. His research interests include context-aware information management and computer-supported cooperative work. Prante is a PhD candidate in computer science. Contact him at [email protected].
Daniel van Alphen is a staff member in the Design Department at the Corporate Development Center of Steelcase North America, Grand Rapids, Michigan. He contributed to this work as a consultant to Fraunhofer IPSI after receiving a diploma in industrial design from the University of Arts (UdK), Berlin. Contact him at
[email protected].
Richard Stenzel, a scientific staff member at Fraunhofer IPSI, Darmstadt, and a PhD candidate in computer science at the Technical University Darmstadt, contributed to this work while he was a staff member in the Ambiente division. His research interests include information filtering and distributed systems. Contact him at [email protected].
Carsten Magerkurth is a PhD candidate in computer science at the Technical University Darmstadt and a scientific staff member in the Ambiente–Smart Environments of the Future research division at Fraunhofer IPSI, Darmstadt. His research interests include ubiquitous computing, user-interface design, and pervasive gaming. Contact him at [email protected].
COVER FEATURE
The Gator Tech Smart House: A Programmable Pervasive Space Many first-generation pervasive computing systems lack the ability to evolve as new technologies emerge or as an application domain matures. Programmable pervasive spaces, such as the Gator Tech Smart House, offer a scalable, cost-effective way to develop and deploy extensible smart technologies.
Sumi Helal, William Mann, Hicham El-Zabadani, Jeffrey King, Youssef Kaddoura, and Erwin Jansen
University of Florida
Research groups in both academia and industry have developed prototype systems to demonstrate the benefits of pervasive computing in various application domains. These projects have typically focused on basic system integration—interconnecting sensors, actuators, computers, and other devices in the environment. Unfortunately, many first-generation pervasive computing systems lack the ability to evolve as new technologies emerge or as an application domain matures. Integrating numerous heterogeneous elements is mostly a manual, ad hoc process. Inserting a new element requires researching its characteristics and operation, determining how to configure and integrate it, and tedious and repeated testing to avoid causing conflicts or indeterminate behavior in the overall system. The environments are also closed, limiting development or extension to the original implementers.
To address this limitation, the University of Florida's Mobile and Pervasive Computing Laboratory is developing programmable pervasive spaces in which a smart space exists as both a runtime environment and a software library.1 Service discovery and gateway protocols automatically integrate system components using generic middleware that maintains a service definition for each sensor and
actuator in the space. Programmers assemble services into composite applications, which third parties can easily implement or extend. The use of service-oriented programmable spaces is broadening the traditional programmer model. Our approach enables domain experts—for example, health professionals such as psychiatrists or gastroenterologists—to develop and deploy powerful new applications for users. In collaboration with the university’s College of Public Health and Health Professions, and with federal funding from the National Institute on Disability and Rehabilitation Research (NIDRR), we are creating a programmable space specifically designed for the elderly and disabled. The Gator Tech Smart House in Gainesville, Florida, is the culmination of more than five years of research in pervasive and mobile computing. The project’s goal is to create assistive environments such as homes that can sense themselves and their residents and enact mappings between the physical world and remote monitoring and intervention services.
SMART HOUSE TECHNOLOGIES
Figure 1 shows most of the "hot spots" that are currently active or under development in the Gator Tech Smart House. An interactive 3D model available at www.icta.ufl.edu/gt.htm provides a virtual
tour of the house with up-to-date descriptions of the technologies arranged by name and location.
Smart mailbox. The mailbox senses mail arrival and notifies the occupant.
Smart front door. The front door includes a radio-frequency identification (RFID) tag for keyless entry by residents and authorized personnel. It also features a microphone, camera, text LCD, automatic door opener, electric latch, and speakers that occupants can use to communicate with and admit visitors.
Driving simulator. The garage has a driving simulator to evaluate elderly driving abilities and gather data for research purposes.
Smart blinds. All windows have automated blinds that can be preset or adjusted via a remote device to control ambient light and provide privacy.
Smart bed. The bed in the master bedroom has special equipment to monitor occupants' sleep patterns and keep track of sleepless nights.
Smart closet. The master bedroom closet will, in the future, make clothing suggestions based on outdoor weather conditions.
Smart laundry. In combination with the smart closet, future RFID-based technology will notify residents when to do laundry as well as help sort it.
Smart mirror. The master bathroom mirror displays important messages or reminders—for example, to take a prescribed medication—when needed. This technology could be expanded to other rooms.
Smart bathroom. The master bathroom includes a toilet paper sensor, a flush detector, a shower that regulates water temperature and prevents scalding, and a soap dispenser that monitors occupant cleanliness and notifies the service center when a refill is required. Other technologies under development measure occupant biometrics such as body weight and temperature.
Figure 1. Gator Tech Smart House. The project features numerous existing (E), ongoing (O), or future (F) “hot spots” located throughout the premises.
Smart displays. With the display devices located throughout the house, entertainment media and information can follow occupants from room to room.
SmartWave. The kitchen's microwave oven automatically adjusts the time and power settings for any frozen food package and shows users how to properly prepare the food for cooking.
Smart refrigerator/pantry. A future refrigerator will monitor food availability and consumption, detect expired food items, create shopping lists, and provide advice on meal preparation based on items stored in the refrigerator and pantry.
Social-distant dining. Occupants will be able to use immersive video and audio technologies installed in the breakfast nook to share a meal with a distant relative or friend.
Smart cameras. Image sensors monitor the front porch and patio for privacy and security.
Ultrasonic location tracking. Sensors, currently installed only in the living room, detect occupants' movement, location, and orientation.
Smart floor. Sensors in the floor, currently only in the kitchen and entertainment center area, identify and track the location of all house occupants. We are also developing technologies to detect when an occupant falls and to report it to emergency services.
Smart phone. This "magic wand for the home" integrates traditional telephone functions with remote control of all appliances and media players in the living room. It also can convey reminders and important information to home owners while they are away.
Smart plugs. Sensors behind selected power outlets in the living room, kitchen, and master bedroom detect the presence of an electrical appliance or lamp and link it to a remote monitoring and intervention application.
Smart thermostats. In the future, occupants will be able to personalize air conditioning and heat settings throughout the house according to daily tasks or context—for example, they could slightly increase the temperature when taking a shower on a cold winter night.
Smart leak detector. Sensors in the garage and kitchen can detect a water leak from the washing machine, dishwasher, or water heater.
Smart stove. This future device will monitor stove usage and alert the occupant, via the smart bed, if the stove has been left on.
Smart projector. We are developing a projector that uses orientation information provided by ultrasonic location tracking and displays cues, reminders, and event notifications on the living room wall that the occupant is currently facing.
Home security monitor. A security system under development continually monitors all windows and doors and, upon request, informs the resident whether any are open or unlocked.
Emergency call for help. A future system will track potential emergencies, query the resident if it suspects a problem, and issue a call for outside help when necessary.
Cognitive assistant. Another system under development guides residents through various tasks and uses auditory and visual cues to provide reminders about medications, appointments, and so on.
MIDDLEWARE ARCHITECTURE
To create the Gator Tech Smart House, we developed a generic reference architecture applicable to any pervasive computing space. As Figure 2 shows, the middleware contains separate physical, sensor platform, service, knowledge, context management, and application layers. We have implemented most of the reference architecture, though much work remains to be done at the knowledge layer.
Physical layer
This layer consists of the various devices and appliances the occupants use. Many of these are found in a typical single-family home, such as lamps, a TV, a set-top box, a clock radio, and a doorbell. Others are novel technologies, such as the SmartWave and the keyless entry system, adapted to the Smart House's target population. Sensors and actuators such as smoke detectors, air conditioning and heating thermostats, and security-system motion detectors are part of the physical layer as well. In addition, this layer can include any object that fulfills an important role in a space, such as a chair or end table.
Sensor platform layer
Not all objects in a given space can or should be accounted for. For example, it may be desirable to capture a toaster, which could cause a fire if inadvertently left on, but not a blender. Each sensor platform defines the boundary of a pervasive space within the Smart House, "capturing" those objects attached to it. A sensor platform can communicate with a wide variety of devices, appliances, sensors, and actuators and represent them to the rest of the middleware in a uniform way. A sensor platform effectively converts any sensor or actuator in the physical layer to a software
service that can be programmed or composed into other services. Developers can thus define services without having to understand the physical world. Decoupling sensors and actuators from sensor platforms ensures openness and makes it possible to introduce new technology as it becomes available.
Service layer
This layer contains the Open Services Gateway Initiative (OSGi) framework, which maintains leases of activated services. Basic services represent the physical world through sensor platforms, which store service bundle definitions for any sensor or actuator represented in the OSGi framework. Once powered on, a sensor platform registers itself with the service layer by sending its OSGi service bundle definition. Application developers create composite services by using a service discovery protocol to browse existing services and using other bundle services to compose new OSGi bundles. Composite services are essentially the applications available in the pervasive space.
A set of de facto standard services may also be available in this layer to increase application developers' productivity. Such services could include voice recognition, text-to-speech conversion, scheduling, and media streaming, among many others.
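To make the registration step concrete, the sketch below shows roughly what a basic service's bundle definition might look like. The BundleActivator interface and registerService call are standard OSGi; the TemperatureSensor interface and its property are hypothetical stand-ins for the service bundles the sensor platforms generate.

```java
// A minimal sketch of a basic-service bundle registering itself with the
// OSGi framework, as the service layer describes.
import java.util.Hashtable;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

interface TemperatureSensor {          // hypothetical basic-service interface
    double readCelsius();
}

public class TemperatureBundle implements BundleActivator {

    private ServiceRegistration registration;

    public void start(BundleContext context) {
        Hashtable<String, String> props = new Hashtable<>();
        props.put("location", "master-bedroom");   // illustrative property
        TemperatureSensor sensor = () -> 21.5;     // stub for the real firmware
        registration = context.registerService(
                TemperatureSensor.class.getName(), sensor, props);
    }

    public void stop(BundleContext context) {
        registration.unregister();     // framework drops the service lease
    }
}
```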
Knowledge layer
This layer contains an ontology of the various services offered and the appliances and devices connected to the system. This makes it possible to reason about services—for example, that the system must convert output from a Celsius temperature sensor to Fahrenheit before feeding it to another service. Service advertisement and discovery protocols use both service definitions and semantics to register or discover a service. The reasoning engine determines whether certain composite services are available.
Figure 2. Smartspace middleware. This generic reference architecture is applicable to any pervasive computing environment.
Context management layer
This layer lets application developers create and register contexts of interest. Each context is a graph, implemented as an OSGi service wire API, that links various sensors together. A context can define or restrict service activation for various applications; it can also specify states that a pervasive space cannot enter. The context engine is responsible for detecting, and possibly recovering from, such states. Our reference architecture has no fixed context-aware programming model.
Figure 3. Sensor and actuator interaction. Actuators influence sensors, which observe the state of the world and can in turn cause the system or a user to activate the actuator.
Application layer
This layer consists of an application manager to activate and deactivate services and a graphical integrated development environment with various tools to help create smart spaces. With the context builder, a developer can visually construct a graph that associates behavior with context; a programmer also can use it to define impermissible contexts and recovery services. In addition, developers can use the service composer to browse and discover services as well as compose and register new ones. Other tools include a debugger and simulator.
CONTEXT AWARENESS
Programming an intelligent space such as the Gator Tech Smart House involves three distinct activities:
• Context engineering—interpreting sensory data and identifying high-level states of interest such as "hot" and "sunny."
• Software engineering—describing the various software components' behavior—for example, turning on the heater or generating a possible menu from a set of ingredients.
• Associating behavior with context—defining which pieces of software can execute in a particular context and which pieces the system should invoke upon a contextual change.
Critical to this process is the observe-control interaction between sensors and actuators, as shown in Figure 3.
Abstracting sensory data
The Smart House obtains information about the world through various sensors and can use this data to undertake certain actions. The typical home likewise relies on sensors to effect changes—for example, if it gets too cold, the thermostat will activate the heater. However, what distinguishes a truly robust context-aware system such as the Smart House is the ability to abstract state information and carry out actions that correspond to these high-level descriptions.2,3
Most sensors are designed to detect a particular value in one domain. For example, a temperature sensor might determine that it is 95 degrees Fahrenheit in the house, or a light sensor might record 10,000 lux of light coming through the window. However, hard-coding behavior for each possible combination of direct sensor values is difficult to implement, debug, and extend. It is far easier to associate actions with abstractions such as "hot" and "sunny," which encompass a range of temperature and luminescence values. When it is hot, the system turns on the air conditioning; if it is sunny outside and the television is on, the system closes the blinds to reduce glare. This approach can easily be extended to various contexts—for example, if the resident is on a diet, the system could prevent the SmartWave from cooking a greasy pizza.
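A minimal sketch of this abstraction step follows: raw readings collapse into a small set of named states, and behavior keys off the states rather than the raw values. The thresholds are illustrative, not values from the Gator Tech system.

```java
// Raw sensor readings are abstracted into named states ("hot", "sunny"),
// and actions are associated with those states.
public class ContextAbstraction {

    static String temperatureState(double fahrenheit) {
        if (fahrenheit >= 85) return "hot";
        if (fahrenheit <= 60) return "cold";
        return "comfortable";
    }

    static String lightState(double lux) {
        return lux >= 8000 ? "sunny" : "dim";
    }

    public static void main(String[] args) {
        String temp = temperatureState(95);    // "hot"
        String light = lightState(10_000);     // "sunny"
        boolean tvOn = true;

        if (temp.equals("hot"))                System.out.println("turn on A/C");
        if (light.equals("sunny") && tvOn)     System.out.println("close blinds");
    }
}
```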
Context management
In addition to sensors, the Smart House consists of actuators—physical devices with which people can interact. An actuator can change the state of the world. Sensors can, in turn, observe an actuator's effect. For example, a light sensor might determine that the house or resident turned on a lamp. Based upon the observed state of the world, the house or resident might activate an actuator.
Every actuator in the Smart House has a certain intentional effect on a domain, which a sensor that senses that particular domain can observe. For example, the intentional effect of turning on the heater is to increase the temperature. Given a clear description of an actuator's intentional effect, it is possible to determine acceptable behaviors for a given context by examining all possible behaviors in the current state and identifying which intentional effects are mutually exclusive. This guarantees, for example, that the system will
never invoke the air conditioning and heater simultaneously.
Context changes can occur due to
• an actuator's intentional effect—for example, after turning on the heater, the house temperature goes from "cold" to "warm"; or
• a natural or otherwise uncontrollable force or event—for example, the setting sun causes a change from "daytime" to "nighttime."
Figure 4. Sensor platform architecture. The modular design provides for alternative and flexible configurations.
Ideally, a smart space that enters an impermissible context should try to get out of it without human monitoring. Toward this end, we are exploring ways that will enable the Smart House to learn how to invoke a set of actuators based upon state information to automatically self-correct problems. Given a standardized description of an actuator’s intentional behavior in a certain domain and how a sensor value relates to a particular context, it should be possible to determine which actuator to invoke to escape from an impermissible context. If escape is impossible, the system can inform an external party that assistance is required. For example, if the pantry does not contain any food and no grocery-delivery service is available, the system could inform an outside caregiver that it is time to restock.
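The escape planning described here can be approximated as a search over declared intentional effects: find an actuator whose effect pushes the relevant domain in the needed direction, or report that outside assistance is required. The tiny domain model below is an assumption made for illustration; the house's actual knowledge representation is richer.

```java
// Pick an actuator whose declared intentional effect moves the space out of
// an impermissible context; if none exists, escalate to an external party.
import java.util.List;
import java.util.Optional;

public class ContextEscape {

    record Actuator(String name, String domain, int effect) {}  // +1 raises, -1 lowers

    /** Find an actuator that pushes `domain` in the needed direction. */
    static Optional<Actuator> planEscape(List<Actuator> actuators,
                                         String domain, int neededDirection) {
        return actuators.stream()
                .filter(a -> a.domain().equals(domain))
                .filter(a -> a.effect() == neededDirection)
                .findFirst();
    }

    public static void main(String[] args) {
        var actuators = List.of(
                new Actuator("heater", "temperature", +1),
                new Actuator("air-conditioner", "temperature", -1));

        // Impermissible context: "too cold", so temperature must rise.
        planEscape(actuators, "temperature", +1)
                .ifPresentOrElse(
                        a -> System.out.println("invoke " + a.name()),
                        () -> System.out.println("escape impossible; notify caregiver"));
    }
}
```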
SENSOR PLATFORM
Integration can become unwieldy and complex due to the various types of sensors, software, and hardware interfaces involved. Consider, for example, climate control in a house. Normally, you would have to hard-wire the sensors to each room, connect these sensors to a computer, and program which port on the computer correlates to which sensor. Further, you must specify which port contains which type of sensor—for example, humidity or temperature.
To systematically integrate the various devices, appliances, sensors, and actuators and to enable the observe-control loop in Figure 3, we created a sensor platform that represents any attached object in a pervasive space simply as a Java program—more specifically, as an OSGi service bundle. To control climate in a home, for example, you would install a wireless sensor platform node in each room, connect both a humidity sensor and a temperature sensor to each node, and program the firmware for each node. In addition to the firmware, the sensor platform nodes would contain the sensor driver that decodes temperature and humidity data. Simply powering up a sensor node causes it to transmit the driver wirelessly to a surrogate node,
such as a home PC, where the sensors are immediately accessible via other applications. The PC would require no configuration or hardware interfacing. The sensor driver is surrogate software—Java bytecode that contains static information about the sensor and the services it provides—stored in an electrically erasable programmable read-only memory (EEPROM) on the sensor platform node. The platform itself does not understand or process this code; rather, it processes the firmware and other low-level C programs that send data between the sensor and platform.
The individual node architecture shown in Figure 4 is modular and provides for alternative and flexible configurations. We use a stackable design to connect alternative memory, processor, power, and communication modules. The memory module provides a mechanism for easily modifying an EEPROM store used for read and write capabilities on the node. This storage contains bootstrap data that specifies general sensor and actuator information. The processing module currently uses an 8-bit Atmel ATmega 128 processor. The processor is housed on a board that is optimized for low power consumption and has two RS232 ports, a Joint Test Action Group (IEEE 1149) and ISP port, and more than 50 programmable I/O pins. We are developing alternative modules with more powerful processing capability, including an onboard Java virtual machine. The communication module currently uses RF wireless communication with a simple transmission protocol. We are also testing and debugging a 10BaseT Ethernet module utilizing a simplified IPv4 stack. Future modules will support low-power Wi-Fi and power-line communication. The latter will also connect to an alternative power module.
Figure 5. Smart plugs. Each power outlet is equipped with a low-cost RFID reader connected to the main computer, while each electrical device has an RFID tag attached to the plug’s end with information about the device.
When a sensor platform is powered up, its EEPROM data acts as a bootstrap mechanism that provides the larger system—for example, a network server or home PC—with the information and behavioral components required to interact with a specific device, appliance, sensor, or actuator. The data can be specified as either human-readable (XML, text with a URL, and so on) or machine-readable (for example, Java bytecode), depending on the specific application. In addition to bytecode, stored data includes device-specific information such as the manufacturer's name, product serial number, and sensor type.
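As a rough picture of what such bootstrap data might carry, the record below mirrors the fields the article lists (manufacturer, serial number, sensor type, plus the service bytecode); the encoding and field names are assumptions, not the project's actual format.

```java
// Hypothetical shape of the bootstrap data a node announces on power-up.
public record BootstrapDescriptor(
        String manufacturer,
        String serialNumber,
        String sensorType,        // e.g., "temperature", "humidity"
        byte[] serviceBytecode) { // Java bytecode for the OSGi service bundle

    public static void main(String[] args) {
        var d = new BootstrapDescriptor("Acme", "SN-0042", "humidity", new byte[0]);
        System.out.println(d.sensorType() + " sensor from " + d.manufacturer());
    }
}
```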
SMART PLUGS
Creating a scalable self-sensing space is impractical using existing pervasive computing technologies.4 Most smart appliances available in the market today do not contain a controllable interface. In addition, numerous available protocols are incompatible. For example, the X10 protocol offers an easy, affordable way to turn a house into a smart one, but many smart devices are not X10 enabled. Regardless of the technology used, a smart space should be able to communicate with any new smart device.5,6
To address this problem, we have developed smart plugs, which provide an intelligent way to sense electrical devices installed in an intelligent space. As Figure 5 shows, each power outlet in the Gator Tech Smart House is equipped with a low-cost RFID reader connected to the main computer. Electrical devices with power cords, such as lamps and clocks, each have an RFID tag attached to the plug's end with information about the device. When a user plugs the device into an outlet, the reader reads the tag and forwards the data to the main computer.
OSGi bundles represent new devices to be installed in the smart space. A bundle is simply a Java archive file containing interfaces, implementations for those interfaces, and a special Activator class.7 The jar file contains a manifest file that includes special OSGi-specific headers that control the bundle's use within the framework.
Each RFID tag has user-data-allocated memory that varies from 8 to 10,000 bytes. Depending on the size of its memory, the tag itself could contain the entire OSGi bundle representing the new device. If the bundle is too large, the tag could instead contain a referral URL for downloading the gateway software from a remote repository. The referral URL can use any protocol that the gateway server has access to, such as http and ftp. Using a Web server also makes upgrading the bundle as easy as replacing the software. The gateway bundles installed in the framework perform all the required downloading and installation of the gateway software for the individual bundles.
When a user installs a new device, the system downloads each bundle and registers it in the OSGi framework. Upon request, the framework can report a list of installed devices, all of which can be controlled via methods available in the bundle. In this way, the framework enacts a mapping between the smart space and the outside world. Figure 6 shows a user—for example, a service technician at a monitoring center—controlling a lamp in the Smart House via a remote application; a click on the lamp will download all available methods associated with this device. When the user clicks on a method, the remote application sends a request to the gateway to execute the action.
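This plug-in flow lends itself to a short sketch: read the tag, then install the device's bundle either from bytes stored on the tag or from the tag's referral URL. The installBundle calls are standard OSGi; the TagPayload shape and the rfid: location scheme are hypothetical.

```java
// Install a newly plugged-in device's OSGi bundle from its RFID tag data.
import java.io.ByteArrayInputStream;
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.BundleException;

public class SmartPlugHandler {

    record TagPayload(String deviceId, byte[] bundleJar, String referralUrl) {}

    static Bundle onDevicePluggedIn(BundleContext ctx, TagPayload tag)
            throws BundleException {
        if (tag.bundleJar() != null && tag.bundleJar().length > 0) {
            // Tag memory was large enough to hold the whole bundle.
            return ctx.installBundle("rfid:" + tag.deviceId(),
                    new ByteArrayInputStream(tag.bundleJar()));
        }
        // Otherwise fetch the gateway software from the referral URL
        // (any protocol the gateway server can access, e.g., http or ftp).
        return ctx.installBundle(tag.referralUrl());
    }
}
```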
Figure 6. Remote monitoring of electrical appliances. Clicking on a method causes the remote application to send a request to the Smart House gateway to execute the action.

SMART FLOOR
In designing the Gator Tech Smart House floor, we wanted to deploy a low-cost, accurate, unencumbered, position-only location system that could later serve as the foundation for a more powerful hybrid system. Drawing on extensive location-tracking and positioning research, we initially experimented
with an acoustic-based location system. Using a set of ultrasonic transceiver pilots in the ceiling, the master device would regularly send chirps into the environment. Users wore vests in which transceiver tags attached to the shoulders would listen for the chirp and respond with their own. While this technology provides precise user position and orientation measurements, it was inappropriate for the Smart House. Each room would require a full set of expensive pilots, and residents would have to don special equipment, which is extremely intrusive and defeats the desired transparency of a pervasive computing environment.8,9
Instead, we opted to embed sensors in the floor to determine user location.10-12 The benefit of not encumbering users outweighed the loss of orientation information, and the availability of an inexpensive sensor platform made this solution extremely cost-effective. We had been using Phidgets (www.phidgetsusa.com) for various automation tasks around the Smart House. The Phidgets Interface Kit 8/8/8 connects up to eight components and provides an API to control the devices over a Universal Serial Bus. Each platform also integrates a two-port USB hub,
making it easy to deploy a large network of devices. We created a grid of 1.5-inch pressure sensors under the floor, as shown in Figure 7, and connected this to the existing Phidgets network.
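A reading loop over that grid might look like the sketch below, written against the legacy Phidgets 2.1 Java API (com.phidgets.*) that the project's hardware used. The exact API surface should be verified against the installed library version, and the tile-mapping table and pressure threshold are illustrative.

```java
// Listen for pressure-sensor changes and translate sensor indexes into tile
// coordinates via a manually entered mapping table, as the article describes.
import com.phidgets.InterfaceKitPhidget;
import com.phidgets.event.SensorChangeEvent;
import com.phidgets.event.SensorChangeListener;

public class SmartFloor {

    // Manually entered mapping: sensor index -> (x, y) tile coordinate.
    static final int[][] TILE_OF_SENSOR = { {0, 0}, {1, 0}, {2, 0} /* ... */ };

    static final int PRESSURE_THRESHOLD = 400;  // illustrative raw-value cutoff

    public static void main(String[] args) throws Exception {
        InterfaceKitPhidget kit = new InterfaceKitPhidget();
        kit.addSensorChangeListener(new SensorChangeListener() {
            public void sensorChanged(SensorChangeEvent ev) {
                if (ev.getValue() > PRESSURE_THRESHOLD) {
                    int[] tile = TILE_OF_SENSOR[ev.getIndex()];
                    System.out.printf("footstep at tile (%d,%d)%n", tile[0], tile[1]);
                }
            }
        });
        kit.openAny();
        kit.waitForAttachment();
        Thread.sleep(Long.MAX_VALUE);   // keep listening
    }
}
```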
Figure 7. Smart-floor tile block. The Smart House floor consists of a grid of 1.5-inch pressure sensors connected to a network of Phidgets.
Table 1. Smart-floor deployment costs in the kitchen, nook, and family room.
Number of blocks: 64
Sensors per block: 1
Sensor platforms per block: 1/8
Sensor unit price: $10
Sensor platform unit price: $95
Total cost: $1,400
Cost per square foot: $4
The Smart House has a 2-inch residential-grade raised floor composed of a set of blocks, each approximately one square foot. This raised surface simplified the process of running cables, wires, and devices throughout the house. In addition, the floor's slight springiness puts less strain on the knees and lower back, an ergonomic advantage of particular interest to seniors. We discovered another, unexpected benefit of the raised surface: It allows us to greatly extend the pressure sensors' range. When a person steps on a tile block, the force of that step is distributed throughout the block. A single sensor at the bottom center can detect a footstep anywhere on that block. In fact, we had to add resistors to the sensor cables to reduce sensitivity and eliminate fluctuations in the readings.

Figure 8. Smart-floor mapping system. Tiles with solid lines represent blocks with sensors underneath, while those with dotted lines indicate gaps in coverage due to appliances or room features.
Table 1 details the costs of deploying the smart floor in the kitchen, nook, and family room, a total area of approximately 350 square feet. We do not have to factor the price of the raised floor, which is comparable to that of other types of residential flooring, into our cost analysis because it is a fundamental part of the Smart House and is used for various purposes.
The hardest part of deploying the smart floor involved mapping the sensors to physical locations. Installing the sensors, labeling the coordinates, and manually entering this data into our software took approximately 72 person-hours. Figure 8 shows the mapping system we used for the kitchen, nook, and family room. Tiles with solid lines represent blocks with sensors underneath, while those with dotted lines indicate gaps in coverage
due to appliances or room features such as cabinets or the center island.
In the future, we intend to redeploy the smart floor using our own sensor platform technology, which will include spatial awareness. This will greatly simplify the installation process and aid in determining the location of one tile relative to another. We will only need to manually specify the position of one tile, and then the system can automatically generate the mapping between sensors and physical locations.
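If each node can report its neighbors and their relative offsets (the spatial awareness mentioned above), the automatic mapping reduces to a flood fill from the one manually placed tile. The neighbor-report format below is an assumption; the algorithm itself is standard.

```java
// Assign (x, y) coordinates to every sensor by flood-filling outward from a
// single manually placed anchor tile, using reported neighbor offsets.
import java.util.*;

public class AutoMapper {

    record Offset(int dx, int dy) {}

    /** neighbors.get(a) maps sensor a to (neighbor sensor -> relative offset). */
    static Map<Integer, int[]> assign(int anchor,
                                      Map<Integer, Map<Integer, Offset>> neighbors) {
        Map<Integer, int[]> pos = new HashMap<>();
        pos.put(anchor, new int[]{0, 0});            // the one manually placed tile
        Deque<Integer> queue = new ArrayDeque<>(List.of(anchor));
        while (!queue.isEmpty()) {
            int cur = queue.poll();
            int[] p = pos.get(cur);
            for (var e : neighbors.getOrDefault(cur, Map.of()).entrySet()) {
                if (!pos.containsKey(e.getKey())) {
                    Offset o = e.getValue();
                    pos.put(e.getKey(), new int[]{p[0] + o.dx(), p[1] + o.dy()});
                    queue.add(e.getKey());
                }
            }
        }
        return pos;
    }

    public static void main(String[] args) {
        var n = Map.of(
                0, Map.of(1, new Offset(1, 0)),
                1, Map.of(0, new Offset(-1, 0), 2, new Offset(0, 1)));
        assign(0, n).forEach((s, p) ->
                System.out.printf("sensor %d -> (%d,%d)%n", s, p[0], p[1]));
    }
}
```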
Pervasive computing is rapidly evolving from a proven concept to a practical reality. After creating the Matilda Smart House, a 900-square-foot laboratory prototype designed to prove the feasibility and usefulness of assistive environments, we realized that hacking hardware and software together resulted in some impressive demonstrations but not something people could actually live in. We designed the second-generation Gator Tech Smart House to outlive existing technologies and be open to new applications that researchers might develop in the future.
With nearly 80 million baby boomers in the US just reaching their sixties, the demand for senior-oriented devices and services will explode in the coming years. Ultimately, our goal is to create a "smart house in a box": off-the-shelf assistive technology for the home that the average user can buy, install, and monitor without the aid of engineers. ■
P
Acknowledgments
We thank NIDRR for the generous funding that made this research possible. Many students and research assistants contributed to the Gator Tech Smart House project, including James Russo, Steven Van Der Ploeg, Andi Sukojo, Daniel Nieten, Steven Pickels, Brent Jutras, and Ed Kouch. Choonhwa Lee of Hanyang University significantly helped us to conceptualize the reference architecture.
Sumi Helal is a professor in the Department of Computer and Information Science and Engineering at the University of Florida and is director and principal investigator of the Mobile and Pervasive Computing Laboratory. His research interests include pervasive and mobile computing, collaborative computing, and Internet applications. Helal received a PhD in computer science from Purdue University. He is a senior member of the IEEE and a member of the ACM and the Usenix Association. Contact him at [email protected].

William Mann is a professor and chairman of the Department of Occupational Therapy at the University of Florida and is director of the Rehabilitation Engineering Research Center. His research focuses on aging and disability, with an emphasis on compensatory strategies to maintain and promote independence. Mann received a PhD in higher education from the University of Buffalo. He is a member of the American Society on Aging, the American Geriatric Society, and the Gerontological Society of America. Contact him at [email protected].

Hicham El-Zabadani is a PhD student in the Department of Computer and Information Science and Engineering at the University of Florida and is a member of the Mobile and Pervasive Computing Laboratory. His research interests include self-sensing spaces, computer vision, and remote monitoring and intervention. El-Zabadani received an MS in computer science from the Lebanese American University. Contact him at [email protected].

Jeffrey King is a PhD student in the Department of Computer and Information Science and Engineering at the University of Florida and is a member of the Mobile and Pervasive Computing Laboratory. His research interests include security in pervasive computing systems, context-aware computing, thermodynamically reversible computing, and real-time graphics rendering. King received an MS in computer engineering from the University of Florida. He is a member of the ACM. Contact him at [email protected].

Youssef Kaddoura is a PhD student in the Department of Computer and Information Science and Engineering at the University of Florida and is a member of the Mobile and Pervasive Computing Laboratory. His research interests include indoor location tracking and location- and orientation-aware pervasive services. Kaddoura received an MS in computer science from the Lebanese American University. Contact him at [email protected].

Erwin Jansen is a PhD candidate in the Department of Computer and Information Science and Engineering at the University of Florida and is a member of the Mobile and Pervasive Computing Laboratory. His research interests include programming models for pervasive computing, context awareness, artificial intelligence, and peer-to-peer systems. Jansen received an MS in computer science from Utrecht University. Contact him at [email protected].
COVER FEATURE
Web-Log-Driven Business Activity Monitoring

Using business process transformation to digitize shipments from IBM’s Mexico facility to the US resulted in an improved process that reduced transit time, cut labor costs and paperwork, and provided instant and perpetual access to electronically archived shipping records.
Savitha Srinivasan, Vikas Krishna, and Scott Holmes, IBM Almaden Research Center
Business process transformation defines a new level of business optimization that manifests as a range of industry-specific initiatives that bring processes, people, and information together to optimize efficiency. For example, BPT encompasses lights-out manufacturing, targeted treatment solutions, real-time risk management, and dynamic supply chains integrated with variable pricing. This new optimization level is possible because the Web has assumed the role of a common infrastructure.

Although the notion of business process management evolved over several decades, BPM gained real momentum during the 1990s through several trends, including business reengineering and process mapping.1 Despite these advances, organizations have become increasingly aware that they must transform further to realize the full potential of managing their operations as a series of interconnected processes—an awareness that has culminated in BPT’s evolution into a defined area.

BPT initiatives can be complex, distributed, and expensive. Organizations understand that measuring the performance of such initiatives through metrics-driven management is important. Managers must be able to prove that the initiatives are justified by continuously benchmarking process execution performance.2

To examine how BPT can optimize an organization’s processes, we describe a corporate initiative that was developed within IBM’s supply chain organization to transform the import compliance process that supports the company’s global logistics. The initiative sought to give IBM greater awareness of regulatory compliance exceptions—information critical to the corporation and its importing partners. The project team defined a BPT initiative to horizontally integrate the process with the people, information, and IT infrastructure. The technology brought to bear on the problem included a content management infrastructure powered by a smart-document gateway at every participating location.
BPT AT IBM

In a large corporation such as IBM, which has multiple divisions and business processes, introducing a new Web-based system to optimize a business process and replace an existing paper-based system requires a complex cost-benefit model. The department responsible for the process can increase efficiency by automating the process and moving to a paperless system. Such a transformation can reduce labor costs and decrease the complexity and completion time for executing the process. This local benefit alone may not, however, justify the investment the department must make to implement the transformation. Therefore, developers must be able to benchmark the system’s effect on the organization as a whole, generate metrics to track the process’s operations, and quantify its efficiencies.

Given the Web’s pervasiveness, many BPT solution components introduce Web interfaces to support both intranet and Internet applications. Web usage mining—the application of data mining techniques to discover usage patterns from Web data—has been an active area of research and commercialization. Often, such mining provides insight that helps optimize the site for increased customer loyalty and e-business effectiveness. Applications of Web usage mining include usage characterization, Web site performance improvement, personalization, adaptive site modification, and market intelligence.3 However, such Web usage tools frequently do not provide holistic visibility into the global end-to-end business process—and they certainly fall short of encompassing the horizontal integration of people, process, information, and IT infrastructures. A single Web application provides only one component in the overall business process that it supports.

In this case, a business activity monitoring solution is more appropriate. BAM encompasses the real-time reporting, analysis, and alerting of significant business events, accomplished by gathering data, key performance indicators, and business events from multiple applications.4 BAM enhances the transformation of a business by letting its managers access execution statistics aggregated in process context so that they can get a holistic view of a business activity. This view can then be used for operational and strategic decision support.5 Benchmarks can be used to account for the system’s return on investment, thus driving iterative optimization of systems and processes.

As the “Building a Foundation for Transformation” sidebar describes, IBM’s supply chain organization has leveraged work in the BPT and BAM areas to transform its import compliance operations process for supporting global logistics within the company. The process uses a unified Web infrastructure to coordinate the movement of goods, data, and documents in a global supply chain that connects suppliers, importers, customs brokers, and freight forwarders from 80 different countries. Because local Web log analysis of each application, without the context of an overall business process, does not yield meaningful business metrics, IBM also implemented a global content management repository managed by a middle tier to support several Web-based applications that all participants use.

The IBM BAM implementation focuses on system efficiency analysis by benchmarking global and cross-organizational Web-based process transformations. The implementation achieves this by

• defining a conceptual process model for the Web-based transformation,
• using the process model to identify business-activity-monitoring metrics,
• automatically computing BAM metrics by correlating distributed Web logs, and
• applying the metrics to benchmark the transformed process.
IMPORT COMPLIANCE

Importing goods into the US involves a complex logistics process with many compliance requirements that have received heightened focus since 9/11. Accurately declaring all the information associated with a shipment to various government agencies has become mandatory. IBM has been using a fairly manual and paper-intensive process that can lead to several errors that must be corrected after the shipment of goods crosses the US border. With the transformed solution for import compliance, we sought to streamline IBM’s global logistics operations by providing visibility into the global import process.

Government agencies in each country have established requirements in the form of laws, regulations, and procedures that govern the importation of goods such as products, parts, and supplies. Failure to comply with these requirements, prohibitions, or restrictions can result in civil or criminal penalties or the loss of a company’s right to import. These requirements apply to goods from both IBM and non-IBM supply sources into the US. Complying with the export policies of 80 different countries with different regulations presents a complex challenge.

Although the transactional data flows electronically via electronic data interchange integration, the document handling remains entirely manual. IBM sends the documents and information about the shipments to various participants via telephone, e-mail, or faxes. Paperwork is handled during the entire import compliance process, then digitized at the end of a postentry process primarily for archiving and records retention.

Coordinating transactional data with the supporting documents is essential in global logistics operations. Documents such as supplier invoices, shipping instructions, and packing lists related to a global supply-chain operation must be managed at the ports to provide visibility into the shipment. Frequently, hard copies of these documents must accompany the goods during transportation.
Building a Foundation for Transformation

Our work for the IBM transformation initiative leverages and builds upon previous work in the areas of business process transformation, business activity management, document workflow, automatic metrics computation, and business process benchmarking. The fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical, contemporary measures of performance—such as cost, quality, service, and speed—began in the early 1990s.1 A recent IDC study explains the motivation for BPT as opposed to conventional outsourcing.2

The notion of relating document management to workflow has been prevalent for several decades, and many document management systems incorporate this feature. Other researchers3 used a case study involving order processing at a machine tool company to provide tools and methods for addressing problems in integrated document and workflow management. Their contribution is a process definition language designed to make a document-oriented tool with a workflow engine more efficient. The idea of applying active document properties to document management applications4 avoids traditional hierarchical storage mechanisms, reflects document categorizations meaningful to user tasks, and provides a means to integrate the perspectives of multiple individuals within a uniform interaction framework. FileNet’s document management requirements, together with their impact on a relational database’s system usage, have also been examined.5

The automatic computation of metrics draws upon Web log analysis techniques. Here, path analysis usually provides the basis for many Web analytics tools. Path analysis seeks to help understand how a visitor navigates a given Web site. Ultimately, it serves to classify visits as a success or failure against certain business objectives, such as making a sale, and can serve to guide Web site redesign. Pattern discovery from Web logs draws upon methods and algorithms developed from several fields, including statistics, data mining, machine learning, and pattern recognition.3,6-9 Many Web traffic analysis tools produce a periodic report containing statistical information such as the most frequently accessed pages, each page’s average view time, or average path length through a site. Despite its lack of analytical depth, this type of knowledge can be useful for improving system performance, enhancing system security, facilitating site modification, and providing support for marketing decisions. Many commercial products are based on this type of analysis.10 Several projects9,11-13 have focused on Web-usage mining in general, without specifically focusing on Web mining techniques.

The value of monitoring a business’s key performance indicators has been recognized at least since the formalization of metrics-based management in the early 1990s. Implementation of a system for doing so was hindered by the lack of integration in disparate information systems. With the introduction of enterprise application integration and BPM, corporate data can cross boundaries between departments and organizations, making this data available to be stored, mined, and used for the application of business rules as part of a BPT initiative. These initiatives commonly call for the development of BAM implementations that handle exception management and alert systems,7 rather than focusing on generating and accessing metrics to benchmark the system itself. Work on using BAM metrics to drive the allocation of information technology resources in the context of autonomic computing focuses on how systems could react to generated metrics, not on how to generate them.

References
1. M. Hammer and J. Champy, Business Reengineering: The Radical Cure for the Enterprise, Campus, 1994.
2. CapGemini, “Transformational Outsourcing: From BPO to BPT”; www.capgemini.com/outsourcing/media/FromBPOtoBTP.pdf.
3. M.S. Chen, J. Hart, and P.S. Yu, “Data Mining: An Overview from a Database Perspective,” IEEE Trans. Knowledge and Data Engineering, vol. 8, no. 6, 1996, pp. 866-883.
4. P. Dourish et al., “Extending Document Management Systems with User-Specific Active Properties,” ACM Trans. Information Systems, vol. 18, no. 2, 2000, pp. 140-170.
5. D. Whelan, “FileNet Integrated Document Management Database Usage and Numbers,” Proc. 1998 ACM SIGMOD Int’l Conf. Management of Data, ACM Press, vol. 27, no. 2, 1998, p. 533.
6. R. Agrawal and R. Srikant, “Fast Algorithms for Mining Association Rules,” Proc. 20th VLDB Conf., 1994, pp. 487-499.
7. R. Agrawal and R. Srikant, “Mining Sequential Patterns,” Proc. 11th Int’l Conf. Data Eng. (ICDE 95), IEEE CS Press, 1995, pp. 3-14.
8. M. Berry and G. Linoff, Data Mining Techniques for Marketing, Sales, and Customer Support, Wiley, 1997.
9. T.W. Yan et al., “From User Access Patterns to Dynamic Hypertext Linking,” Computer Networks and ISDN Systems, vol. 28, nos. 7-11, 1996, pp. 1007-1014.
10. Truste, “Trust Matters”; www.truste.org.
11. M. Spiliopoulou and L.C. Faulstich, “WUM: A Tool for Web Utilization Analysis,” Proc. EDBT Workshop (WebDB 98), Springer-Verlag, 1999, pp. 184-203.
12. M. Spiliopoulou, “Web Usage Mining for Web Site Evaluation,” Comm. ACM, Aug. 2000, pp. 127-134.
13. K.-L. Wu, P.S. Yu, and A. Ballman, “SpeedTracer: A Web Usage Mining and Analysis Tool,” IBM Systems J., vol. 37, no. 1, 1998, pp. 89-105.

For worldwide logistics operations across several nodes dealing with different brokers, this process entails data duplication, relaying information from documents into disconnected document repositories, manual reconciliation of logistics operations with accounting systems, and poor visibility into the global supply-chain operation.

Current import process
As an example, we describe the current practice for a specific manufacturing lane.
In this lane, goods originate in IBM Mexico and are trailer-driven into the US at Laredo, Texas. Here, the US customs broker clears the shipment through customs, then ships the goods via a freight broker to the customer and mails hard-copy documents associated with the shipment to IBM Boulder for archiving. Figure 1 details the current process and supporting systems.

Figure 1. Current import process for IBM’s Mexico Lane. Goods originate at IBM Mexico and are driven into the US, where the US customs broker clears the shipment through customs, ships the goods to the customer, and mails hard-copy documents associated with the shipment to IBM Boulder for archiving. (The figure maps each process step—order, delivery due list, warehouse/packaging, invoicing/billing, pre-alerts, shipping—to its supporting systems, such as SAP, IDDE/IDM, and Lotus Notes, and notes the paper documents generated along the way.)

The process flow involves several steps. The orders originate in the enterprise resource planning system at the supplier, IBM Mexico, and are sent to the warehouse for packaging. The staff generates the relevant invoice documents in the ERP system, sends pre-alerts to the customs broker via phone or e-mail, then ships the goods into the US. At this point, the US customs broker creates the entry in the proprietary IBM import system, generates the documents relevant to the clearance process, and clears the shipment through customs. Note the parallel flow of data and documents that accompanies the process, as Figure 1 shows.

Once IBM Mexico has generated the ERP system invoices, all parties generate all subsequent documents manually, on paper, and make all corrections and further data entries by hand. All transactional data resides in the ERP systems and is managed electronically. The typical user scenario thus plays out as follows:

• The supplier, IBM Mexico, prints hard-copy document invoices from its ERP system and ships them with the goods to the freight carrier.
• The freight carrier picks up the goods, which are untraceable until the carrier arrives at the US border for customs clearance.
• The US customs broker handles the paper documents, clears customs, adds customs clearance forms, retains copies for its records, and sends the documents to the goods’ destination at IBM Boulder.
• The goods’ recipient receives the final set of paper documents along with the goods. Personnel then batch scan the documents and, for legal reasons, retain these records for seven years.

Because it lacks electronic documentation, the current process involves data entry duplication, nth-generation document copies and faxes, extensive handling of paper documents, and poor visibility into the incoming shipments. Thus, the process suffers from several operational inefficiencies and problems:

• The US customs broker lacks detailed electronic shipment information when it clears customs for a specific trailer. Obtaining this information requires extensive manual communication between IBM Mexico, the US customs broker, and IBM Boulder.
• The US customs broker manually keys in several data fields—as many as 50 in some cases—to create the customs clearance entry.
• IBM Boulder lacks access to the shipment invoices or customs paperwork to assist with the clearance process. Questions can be answered only by phone or through manual updates.

Figure 2. Transformed import process for IBM’s Mexico Lane. A smart-document gateway coordinates three parallel flows—transactional data, goods, and coordinated documents—to provide access to transactional data and the corresponding documents throughout the shipping process. (Key steps shown: SAP invoices are sent to the paperless document workbasket; Boulder can begin work on the invoice; goods move via UPS/FedEx; the US customs broker pulls the invoices and clears customs; IDM notifies Boulder of clearance; Boulder archives the completed entry documents.)
Transformed import process
Figure 2 shows the transformed process, which has three parallel coordinated flows. The transactional data continues to flow as shown in Figure 1, from the ERP system into the proprietary IDDE and IDM systems to interact with US Customs. The flow of goods also appears as a parallel flow, with X-Series product flowing from Mexico to the US importing location. The third parallel flow addresses the coordinated document management associated with the transactional data flow, supported by a content management infrastructure6 that can store, manage, and retrieve document collections corresponding to the business process. At each location—IBM Mexico, the US customs broker, and IBM Boulder—a smart-document gateway7 serves as an onramp to the content management infrastructure by capturing the document image—whether from paper, a desktop computer, or application-generated documents—at the source. The content management infrastructure serves as the paperless document workbasket shown in Figure 2.

The transformed solution attempts to coordinate the movement of transactional data with the corresponding documents so that all participants in the process have access to the information as well as the documents needed to efficiently handle the import compliance process. The transformed process works as follows: IBM Mexico creates and adds invoices associated with each trailer into the shared document workbasket in real time, then sends e-mail alerts related to the shipment with a URL to the US customs broker and IBM Boulder. The invoices are sent to the shared document workbasket using a smart-document gateway that prompts the user for the relevant metadata required to support the entry process, such as invoice and trailer numbers. Even before the shipment reaches the US border, the US customs broker can use a Web interface to query the document workbasket and then prepare the entry form by obtaining the appropriate codes for clearing customs.
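As an illustration of the gateway’s role, here is a hedged Python sketch of how a capture client might attach the prompted metadata to a document before adding it to the shared workbasket. The endpoint URL, field names, and JSON payload are assumptions for illustration only; the article does not describe the gateway’s actual interface.

    # Minimal sketch of a smart-document gateway capture step, under
    # assumed names; the real gateway's protocol is not specified here.
    import json
    import urllib.request

    def add_to_workbasket(image_path: str, invoice_no: str, trailer_no: str):
        # The gateway prompts the user for the metadata the entry process
        # needs, such as invoice and trailer numbers, then uploads both
        # the document reference and its metadata in one request.
        payload = {
            "document": image_path,
            "invoice_number": invoice_no,
            "trailer_number": trailer_no,
            "source": "IBM-Mexico",  # hypothetical location tag
        }
        req = urllib.request.Request(
            "http://workbasket.example.com/documents",  # placeholder URL
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status

    # Example call (commented out; the placeholder URL is not live):
    # add_to_workbasket("invoice-4711.tif", "INV-4711", "TR-208")

Capturing the metadata at the source is what later lets the Web logs at each location be correlated by shipment, as the benchmarking section describes.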
Secure Trade Lane for Worldwide Goods Exchange

Supply chain performance challenges and ongoing worldwide legal initiatives and regulations show an imminent need for action to make the worldwide trade lane smart and secure. The Secure Trade Lane solution concept, developed by the IBM Research lab in Zurich, addresses these challenges and helps facilitate a more efficient and secure exchange of container-based goods shipped worldwide.

Secure Trade Lane offers a complete solution that closes the gap between stakeholders, embedding efficiency and security across the value chain and providing critical data at every point of interaction with container-based goods. It addresses all the process, logical, and physical challenges inherent in an end-to-end, secure, trade logistics chain. It also provides trade stakeholders with a practical plan and the accompanying technology to support more secure goods tracking worldwide. If adopted across the supply chain, Secure Trade Lane can deliver a trusted string of information to help ensure secure status updates and real-time monitoring of a container, as well as tools for authorization, maintenance, and settling payments.

This concept provides a reliable and verifiable means of collecting a trail of evidence about events concerning a container, from its origin to its final destination. Various stakeholders can use this evidence to optimize and simplify business processes and to perform risk analysis and assess container security and integrity. The evidence collected can include when and where the container was loaded, what goods the container held, the various entities—such as shippers, carriers, or port operators—that transported or handled the container, the container’s route, the presence of certain chemicals in the container, attempts to break into the container, and many additional criteria. However, for process optimization and risk analysis to be meaningful, the evidence collected must be reliable and verifiable.

IBM’s proposed solution concept collects evidence in a tamper-resistant embedded controller (TREC) that forms an integral part of the container. The TREC acts as a central control point that can authenticate the source of evidence and implement access control to it. The platform also integrates a variety of commercial electronic sensors to detect container events such as a door opening, shock, vibration, environmental changes, and location. The TREC offers a unique level of security and functionality that goes well beyond that achievable with standard radio-frequency ID technology.
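As a speculative illustration of the sidebar’s “reliable and verifiable trail of evidence,” the sketch below chains each container event to the previous one with a hash, so that altering any record invalidates the rest of the chain. The record format and the use of SHA-256 are assumptions, not a description of TREC’s actual design.

    # Toy hash-chained evidence trail; field names and hashing scheme
    # are illustrative assumptions only.
    import hashlib
    import json

    def append_evidence(chain, event):
        """Append an event dict, linking it to the previous record's hash."""
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        record = {"event": event, "prev": prev_hash}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        chain.append(record)
        return chain

    def verify(chain):
        """Recompute every link; returns False if any record was altered."""
        prev = "0" * 64
        for rec in chain:
            if rec["prev"] != prev:
                return False
            expected = hashlib.sha256(json.dumps(
                {"event": rec["event"], "prev": rec["prev"]},
                sort_keys=True).encode()).hexdigest()
            if rec["hash"] != expected:
                return False
            prev = rec["hash"]
        return True

    chain = []
    append_evidence(chain, {"type": "door_open", "location": "Laredo"})
    append_evidence(chain, {"type": "shock", "location": "in transit"})
    print(verify(chain))  # True; editing any record makes this False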
IBM Boulder can also query the document repository and work with the US customs broker to obtain all relevant information. Once the US customs broker prepares the entry clearance form, it is scanned into the shared workbasket and linked with the shipment created by IBM Mexico. Now IBM Boulder has access to all entry information prior to the goods being shipped to the customer, which significantly reduces the entry rework.

Equipping each of the three participants in the process with a document gateway server that can add documents to the shared document workbasket in real time provides global visibility into all steps in the import and customs clearance process. The document workbasket Web interface supports querying and retrieval of documents based on a powerful data model that captures the full process semantics. As the “Secure Trade Lane for Worldwide Goods Exchange” sidebar describes, process transformation projects such as ours are now playing a vital role in leveraging the power of digital technology to make commerce move faster and more securely while reducing labor costs.

BENCHMARKING IMPORT COMPLIANCE

The first step in benchmarking involves defining the transformed process and identifying metrics relevant to BAM. Once we identify the metrics, we can compute them automatically by correlating multiple distributed Web logs.

Figure 3. Class diagram for process elements at each location. These elements display the data used to classify, store, and display the document appropriately. (The diagram shows the Process, Notification, XMLDocument, Document, Entry, ImageDocument, ManualEntry, and OCREntry classes and their attributes.)
Identifying BAM process elements
Figure 3 shows the conceptual process elements incorporated into the smart-document gateway that serves as the onramp to the shared document workbasket at IBM Mexico, the US customs broker, and IBM Boulder. All documents flow through the gateway into the content management system and reside in the defined data model. Participants at each node can use the Web interface to query and retrieve the documents that the gateway adds.

At each location, the process consists of a set of named document collections in which a document can be in either a desktop or XML format or a scanned image. Each document has one or more entries associated with it that correspond to the data model attributes defined in the content management system. The system maps each document, together with its entries, to a target schema in the content management system. At each location, this conceptual process model captures documents and adds them to the content management back end.

The captured documents’ time stamps at each location help to identify and compute the BAM metrics for the global process. For example, the elapsed time between when the gateway at IBM Mexico adds a document and when the gateway at the customs broker adds the customs entry document provides a key metric, the entry-creation time. This includes the time invested by the personnel at both ends to add and query documents from the shared workbasket. The system uses Web logs to compute this elapsed time. At this point, developers arrive at the BAM metrics definition manually by analyzing the coordinated process at each of the various locations that implement the described process elements. In the future, we anticipate being able to identify some key BAM metrics from the process model automatically.
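A minimal sketch of this time-stamp correlation follows: it pairs the “document added” events from two gateway logs by trailer number and computes the elapsed time. The log format (ISO time stamp, event name, trailer number) is assumed for illustration; the production system’s log formats would differ.

    # Sketch: compute entry-creation time per shipment by correlating the
    # origin gateway's log with the customs broker's log. Log layout is
    # an assumption made for this example.
    from datetime import datetime

    def parse_log(lines):
        """Return {trailer_no: timestamp} for 'document_added' events."""
        events = {}
        for line in lines:
            ts, event, trailer = line.strip().split(",")
            if event == "document_added":
                events[trailer] = datetime.fromisoformat(ts)
        return events

    def entry_creation_times(origin_log, broker_log):
        origin = parse_log(origin_log)
        broker = parse_log(broker_log)
        # An elapsed time exists only for trailers seen in both logs.
        return {t: broker[t] - origin[t] for t in origin if t in broker}

    mexico = ["2005-01-10T08:00:00,document_added,TR-208"]
    laredo = ["2005-01-11T09:30:00,document_added,TR-208"]
    print(entry_creation_times(mexico, laredo))  # {'TR-208': 1 day, 1:30:00}

In the article’s terms, this elapsed time would then be normalized by the number of nodes in the process, since multiple carriers and brokers can participate.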
Identifying BAM metrics
We have identified a set of quantitative and qualitative metrics relevant to the process. The quantitative metrics consist of the following:

• Entry creation time—the difference between the time the goods originate at the origin and the time they clear customs at the customs broker. Because multiple carriers and brokers can participate in the global operation, we normalize this time by the number of nodes in the process.
• Entry completion time—the elapsed time between when documents are ready at the origin and when documents can be reviewed at the import process’s completion.
• Number of missing documents—the number of entries identified in an audit check as having a missing document.
• Number of data entry errors—instances when the entered data does not match the document’s value.
• Number of errors—the sum of various error types, such as classification and census errors and postentry adjustments.
• Reduced rate with broker—the fee reduction per entry that the broker charges because IBM has implemented a simpler process.

The qualitative BAM metrics include

• document quality as a measure of fidelity,
• relationship with partner and customs agencies, and
• customer satisfaction.

Qualitative BAM metrics are collected using a survey with a scale of 1 to 5, in which 5 is the best score and 1 is the worst. Based on the identification of metrics relevant to the process, we identified the vector shown in Figure 4 as a standard way to benchmark the import compliance process within IBM.

Figure 4. Vector for benchmarking BAM compliance. The metrics vector comprises attributes important to the business process; these attributes are computed at runtime and stored in Web logs to support activity monitoring. Metrics vector = [Entry creation time, Entry completion time, Missing documents, Data entry errors, Classification errors, Assist errors, Post entry adjustments, Census error, Missing documents, Average number of queries (daily), Average number of logins (daily)].

Computing BAM metrics
We calculated the quantitative BAM metrics by automatically correlating the set of process logs, which consisted of shared document workbasket Web logs and smart-document gateway logs from the three participating locations. To model and detect BAM metrics,8 we used a grammar-based framework that has declarative and processing stages. We used the Backus-Naur9 notation to represent a regular grammar corresponding to the BAM metrics of interest, then we used a deterministic finite-state machine to parse logs against the grammar that encodes the BAM metrics of interest. This methodology has the following advantages over traditional URL and path analysis:

• a grammar-based framework, flexible enough to define many different types of BAM metrics and tasks relatively simply;
• deeper analysis of user actions using meaningful units of user interaction, tasks, and business-cost and value metrics;
• finer-grained per-task metrics, rather than per-URL or per-database-update metrics;
• better process effectiveness measures, based on parameters such as time to perform a task and task frequency; and
• Web and gateway logs for use in performing different kinds of descriptive statistical analyses—such as frequency, mean, and median—on variables such as age, document views, and entry-clearance time.

BAM metrics analysis requires the following steps:

• preprocess daily logs to extract statements and URLs of interest to the BAM metrics detection grammar;
• detect patterns using the metrics grammar, then use the time stamps in the URL and gateway logs to compute the time taken to perform the task;
• aggregate the number of queries and document views, and plot daily graphs; and
• plot trends over time across relevant BAM metrics.
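The following toy example illustrates the grammar-based approach under stated assumptions: a BAM task is encoded as a regular expression over single-letter event tokens (the regex engine supplies the deterministic finite-state matching), and the matched events’ time stamps yield the task’s duration. The token alphabet and the sample “entry creation” grammar are invented for illustration and are not the article’s actual grammar.

    # Toy grammar-based task detection over a tokenized event log.
    import re

    # One letter per log event type, e.g. A = invoice added at origin,
    # Q = broker queries workbasket, E = entry document added by broker.
    TOKEN_OF_EVENT = {"invoice_added": "A", "broker_query": "Q",
                      "entry_added": "E"}

    # Regular grammar for a hypothetical "entry creation" task:
    # an invoice added, zero or more broker queries, then the entry document.
    ENTRY_TASK = re.compile(r"AQ*E")

    def detect_tasks(events):
        """events: list of (timestamp, event_name). Returns task durations."""
        tokens = "".join(TOKEN_OF_EVENT[name] for _, name in events)
        durations = []
        for m in ENTRY_TASK.finditer(tokens):
            start_ts = events[m.start()][0]       # first event in the match
            end_ts = events[m.end() - 1][0]       # last event in the match
            durations.append(end_ts - start_ts)
        return durations

    sample = [(0, "invoice_added"), (4, "broker_query"), (9, "entry_added")]
    print(detect_tasks(sample))  # [9]: the task took 9 time units

The appeal of this style is that adding a new BAM metric means writing a new pattern over the same token stream, rather than writing new per-URL parsing code.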
PILOT RESULTS

Table 1 summarizes the results of a six-week production pilot that compared the existing process with the transformed process. The results show details of specific errors associated with the logistics operations and their cycle time for resolution.

Table 1. Results of a six-week process-improvement production pilot.

Measurement                                        Old process    New process
Entry creation time                                15 days        25 hours
Census error resolution time                       2 days         2 hours
Classification error resolution time               1 day          2 hours
Exception processing time (% reduction for
  census & classification errors)                  N/A            96%
Postentry adjustments (weekly average)             16             1
Missing documents                                  25% missing    0% missing
Avg. number of queries (daily)                     N/A            30
Avg. number of logins (daily)                      N/A            14

The improvements in cycle times made a significant impact on the business process, while the identified metrics provided a basis for comparing different logistics operations. Specific trends in the pilot’s metrics can be studied by querying an import compliance dashboard to monitor, identify, and preempt the metrics’ impact on the user’s business. We used distributed Web logs to define and automatically compute a set of BAM metrics, then used the metrics to benchmark the new solution.
The production pilot implementation generated compelling results that demonstrated significant efficiencies gained in the overall import compliance process. Global rollout plans are under way to transform the import compliance process throughout the enterprise. The success of the internal implementations has led to interest from other third-party logistics companies and the distribution sector. These new opportunities are driving future research directions in terms of a business rules engine to manage the dynamic, complex rules associated with the movement of goods in a global supply chain. Easy-to-use tools to manage and define changing business rules, automatic metadata extraction, dashboards for monitoring document flows, exception management, and fault tolerance are some additional technologies necessary for production implementations.

A global infrastructure that supports worldwide logistics operations with the technologies we’ve described will serve to benefit multiple stakeholders in the process—importers, freight forwarders, customs brokers, and customs organizations. The ultimate success of this initiative may depend not on the technology but on the ability of various stakeholders to partner in achieving the most streamlined operations. ■
References
1. M. Golfarelli, S. Rizzi, and L. Cella, “Beyond Data Warehousing: What’s Next in Business Intelligence?” Proc. 7th ACM Int’l Workshop Data Warehousing and OLAP, ACM Press, 2004, pp. 1-60.
2. T.H. Davenport, Process Innovation, Harvard Business School Press, 1993.
3. R. Cooley, P.-N. Tan, and J. Srivastava, Discovery of Interesting Usage Patterns from Web Data, tech. report TR 99-022, Univ. of Minnesota, 1999.
4. H. Dresner, “Business Activity Monitoring: BAM Architecture,” Gartner Group; www.pikos.net/documents/german/gartner.pdf.
5. L. Verner, “BPM: The Promise and the Challenge,” ACM Queue, Mar. 2004, pp. 82-91.
6. DB2 Content Management; www-306.ibm.com/software/data/cm/.
7. V. Krishna and S. Srinivasan, “Towards Smarter Documents,” Proc. Conf. Information and Knowledge Management (CIKM 2004), ACM Press, 2004, pp. 634-641.
8. S. Srinivasan et al., “Grammar-Based Task Analysis of Web Logs,” Proc. Conf. Information and Knowledge Management (CIKM 2004), ACM Press, 2004, pp. 244-245.
9. N. Chomsky, “On Certain Formal Properties of Grammars,” Information and Control, vol. 2, no. 2, 1959, pp. 137-167.
Savitha Srinivasan is the manager of content management solutions at IBM Almaden Research Center. Her research interests include document handling, video analysis, and speech recognition. Srinivasan received an MS in computer science from Pace University. Contact her at [email protected].

Vikas Krishna is a research software engineer at IBM Almaden Research Center. His research interests include automation of document-centric business processes, Eclipse tools, and Web-based service delivery. Krishna received an MS in computer engineering from Syracuse University. Contact him at [email protected].

Scott Holmes is a solutions architect at IBM Almaden Research Center. His research interests include business process management and text analytics. Holmes received an MBA from Santa Clara University. Contact him at [email protected].
ITCC 2005
CALL FOR PARTICIPATION International Conference on Information Technology April 11-13, 2005, Las Vegas, Nevada, USA
www.itcc.info General Chair Henry Selvaraj ECE Department University of Nevada, Las Vegas Las Vegas, NV 89154 Phone: +1 (702) 895-4184 Fax: +1 (702) 895-1115
[email protected]
Program Committee Chair Pradip Srimani, Clemson University
Finance Chair Venkatesan Muthukumar, UNLV
Registration Chair Emma Regentova, UNLV
Publicity Chair Ajit Abraham
Local Organizing Chair Yingtao Jiang, UNLV
Conf. Secretary Vasu Jolly, UNLV
Program Committee Ajith Abraham, Sheikh I. Ahamed, Y. Alp Aslandogan, Mario Cannataro, Josep Domingo-Ferrer, Paul Douglas, Sumeet Dua, Mohammad Eyadat, Moses Garuba, Nazli Goharian, Ray Hashemi, Elaine Lawrence, Maria Mirto, Luiza Mourelle, Nadia Nedjah, Emma Regentova, Amanda Spink, Johnson Thomas, Shantaram Vasikarla, Charles Willow
Theme
The rapid growth in information science and technology in general, and the complexity and volume of multimedia data in particular, have introduced new challenges for the research community. Of particular interest is the need for a concise representation, efficient manipulation, and fast transmission of multimedia data. Applications such as space science, telemedicine, military, and robotics deal with large volumes of data that need to be stored and processed in real time. The ITCC is an international forum that brings together researchers and practitioners working on different aspects of Information Technology. It is a gathering where the latest theoretical and technological advances in Information Technology are presented and discussed. All papers submitted to this conference were refereed by at least two independent referees. The ITCC proceedings will be available at the time of the conference. ITCC 2005 is sponsored by the IEEE Computer Society. ITCC 2005 is being held in conjunction with a related conference, the International Conference on Information Systems: New Generations (ISNG 2005), whose emphasis is primarily on information systems hardware, prototypes, and architectures.
Tracks
The conference call for papers attracted 524 papers from researchers, scientists, and practitioners from all over the world. After a stringent review process, nearly 250 papers were selected for presentation at the conference. The conference will also feature 30 poster presentations. The papers have been arranged into the following tracks: 1) Web Search Technologies, A. Spink, University of Pittsburgh, USA. 2) Data Coding and Compression, E. Regentova, UNLV, Las Vegas, USA. 3) Information Assurance and Security, A. Abraham & J. Thomas, Oklahoma State Univ. (Tulsa), USA. 4) New Trends in Image Processing, S. Vasikarla, American InterContinental Univ., USA. 5) Data Mining, R. Hashemi, Armstrong Atlantic State University, USA. 6) Embedded Cryptographic Systems, N. Nedjah & L. Mourelle, State University, Brazil. 7) Distributed and Grid Systems, M. Mirto, University of Lecce, Italy. 8) Next-Generation Web and Grid Systems, M. Cannataro, University Magna Graecia of Catanzaro, Italy. 9) E-Gaming, J. Domingo-Ferrer, Rovira i Virgili University, Spain. 10) Bioinformatics, Y. A. Aslandogan & S. Dua, UT Arlington, Louisiana Tech Univ., USA. 11) Database Technology, M. Garuba, Howard University, USA. 12) Mobile Enterprise, E. Lawrence, University of Technology, Australia. 13) Information Retrieval, N. Goharian, Illinois Institute of Technology, USA. 14) Software Engineering, M. Eyadat, California State University, USA. 15) Pervasive Computing, S. I. Ahamed, Marquette University, USA. 16) Education: Curriculum, Applications and Research, P. Douglas, Univ. of Westminster, UK. 17) E-Commerce, C. Willow, Monmouth University, USA. 18) Wireless Ad Hoc/Sensor Networks and Network Security, Y. Kim & M. Yang, UNLV, USA.
Keynote Speakers
The conference features keynote speeches from two distinguished scientists: Vittal Rao (NSF, USA) and Mo Jamshidi (University of New Mexico, Albuquerque, NM, USA).
CAREER OPPORTUNITIES
NewEnergy Associates is recruiting for the following positions at its facilities located in Atlanta, GA: SYSTEMS ENGINEER. Work with clients to develop advanced production costing and market simulation models for electric power company. Research and develop methodology for Power System Transmission Security Constraint Dispatch, Optimal AC/DC Power Flow, Locational Marginal Price Calculation, Company Interchange Accounting, Unit Commitment, Combined Cycle, Limited Fuel and Pumped Storage Unit Optimization Dispatch using FORTRAN 95, Visual Basic 6.0+, C++ and Java 2.0+ programming languages. Experience with MatLab, Dash’s LP and MIP package is a plus. Master’s or Ph.D. degree in EE and 2+ years related experience are required. CONSULTANT, Gas Strategy and Planning – Work with clients to develop advanced simulation models of natural gas decisions including the optimization and analysis of volumetric and price uncertainty. Candidate will develop sophisticated mathematical models of demand as well as hub and basis price uncertainty using Monte Carlo and econometrics. Work includes valuation of natural gas portfolios using economic and energy risk measurement techniques. Master’s degree in Economics, Mathematics, or Engineering, with experience using energy industry software tools required. For consideration, please send resume to [email protected]. No agents, please. EOE.

DEAN, COLLEGE OF ENGINEERING

THE POSITION: The Dean provides support for the faculty of the College by creating a positive environment for teaching, scholarship, research and professional engagement, and service to the University and community. The Dean is responsible for the quality of academic programs and for managing the fiscal, human resources, and physical facilities of the College. Because of the learn-by-doing philosophy at Cal Poly, the Dean is responsible to support the current laboratory-based curricula and to support the development of new laboratories. The Dean is expected to build partnerships with alumni and the business community, and to seek supplemental financial support for both new and existing programs. The successful applicant should be prepared to demonstrate the leadership ability to distinguish the College of Engineering as a nationally prominent learning center that is reflective of the polytechnic character of the University. The Dean participates in the development of University-wide policy as a member of the Academic Deans’ Council and the President’s Strategic Management Group. The Dean is appointed by the President and reports directly to the Provost and Vice President for Academic Affairs.

QUALIFICATIONS: An earned doctorate in one of the instructional areas within the College. Credentials appropriate for a tenured appointment at the rank of professor to include a distinguished record of teaching and scholarship. Successful record of academic and administrative experience encompassing human resources and fiscal management; a strong commitment to academic excellence; a demonstrated capacity for academic leadership and team building; commitment to fostering a technology-enhanced collaborative learning environment; capability to expand alliances with the private sector; experience in the design and continuous implementation of the strategic planning process; strong experience and a commitment to engage the College in a comprehensive program of advancement activities; ability to enhance and to work effectively with an ethnically and culturally diverse campus community and to address student needs in a multicultural educational environment.

COMPENSATION: Salary is commensurate with the background and experience of the individual selected. All rights associated with the appointment are governed by the Management Personnel Plan adopted by the CSU Board of Trustees.

THE COLLEGE: The College of Engineering is organized into the following departments: Aerospace Engineering, Civil and Environmental Engineering, Computer Science, Electrical Engineering, Industrial and Manufacturing Engineering, Materials Engineering, and Mechanical Engineering. The mission of the College of Engineering is to educate students for careers of service, leadership and distinction in engineering or other fields by using a participatory, learn-by-doing, “hands-on” laboratory, project- and design-centered approach. Nearly 130 full-time faculty members teach over 4,800 students enrolled in twelve baccalaureate and nine Master’s degree programs. The College is the largest undergraduate engineering college west of the Rockies and one of the nation’s premier institutions for undergraduate engineering education. Over half of all engineering courses have associated laboratories that provide the hands-on experience necessary to link theory with practice. In addition, students have the opportunity to participate in “real world” engineering problem solving through co-ops and internships with industry and government and through the senior project capstone design experience. Graduates are accustomed to working in diverse, goal-oriented teams.

THE UNIVERSITY: Cal Poly is a state university with nearly 18,000 students. The University has a distinctive mission and is best known for its polytechnic programs. It also offers comprehensive curricula in the arts and sciences. One of the 23 campuses of The California State University, Cal Poly has built an exemplary reputation on its learn-by-doing approach to the preparation of undergraduate and graduate students. The University is organized into seven colleges: Agriculture, Architecture and Environmental Design, Business, Education, Engineering, Liberal Arts, and Science and Mathematics. Nearly two-thirds of the University’s students major in agriculture, architecture and environmental design, business, or engineering. Student quality is high, with applications significantly exceeding admissions. University families live in San Luis Obispo and nearby communities both on the coast and inland. San Luis Obispo, a city of 44,000, is located twelve miles from the Pacific Ocean and midway between San Francisco and Los Angeles on California’s scenic central coast. Excellent recreational facilities are available, and the area has an outstanding climate, with an average daily maximum temperature of 62.2 in January, 77.0 in August, and an annual average of 70.2.

APPLICATIONS AND NOMINATIONS: The search committee will begin to review nominations and applications on February 25, 2005, and will continue to review them until the position is filled. The preferred start date for the position is September 1, 2005. Using the Internet (go to http://www.calpolyjobs.org), candidates must complete electronically the on-line Cal Poly Management Employment Application and apply to Requisition Number 100477. In addition, each applicant must provide (either as attachments to the on-line application or sent by surface mail) the following documents: (1) cover letter; (2) detailed curriculum vitae or resume; (3) personal statement (two page maximum) of the applicant’s view on academic administration and the role and responsibilities of the faculty in a college of engineering; (4) salary history for the last five years; and (5) the names, addresses, and phone numbers of at least five references, including two from faculty. Please reference Requisition Number 100477 on all correspondence. Nominations and other correspondence should be addressed to: Dr. Warren J. Baker, President, c/o Academic Personnel Office, One Grand Avenue, California Polytechnic State University, San Luis Obispo, CA 93407.

INQUIRIES AND ADDITIONAL INFORMATION: Contact Academic Personnel via E-mail: [email protected]; FAX: (805) 756-5185; Phone: (805) 756-2844. Cal Poly is strongly committed to achieving excellence through cultural diversity. The University actively encourages applications and nominations of all qualified individuals. Equal Opportunity Employer.
ENGINEERING (TEST) MANAGER of SW Engineers. Oversee creation of test plans & test cases for ASIC verification using Verilog, Fibre Channel, Gigabit Ethernet, & PCI/PCI-X/PCI-Express. Experience with L2/L3/L4 protocol testing & methodologies also required. Requires BSEE or equivalent & relevant experience. Send resume to Astute Networks, Inc., 16516 Via Esprillo Ste #200, San Diego, CA 92127 or email to [email protected].
COMPUTER ADMINISTRATOR. Seeking a computer administrator to administer company’s Windows NT and SCO UNIX systems. Responsibilities include: ensuring data integrity and security as it relates to system backup and data management, modifying users, rights and data structure, and hardware maintenance, repairs and upgrades; administering LumberTrack, inventory management software, by maintaining and setting up data fields for customers, vendors, mill accounts, terms codes, locations, product codes, ports, and canned notes, maintaining accounting interface rules tables, and identifying and correcting system operating and setup problems; preparing and entering data into the inventory management system, such as (i) lumber receipts, production runs and shipments for several Brazilian and Honduran inventory locations, reconciling foreign-generated reports to data previously entered, reconciling various inventory, accounts receivable and expense accrual accounts; preparing reports by downloading data from the inventory management system via Cyberquery and Excel interfaces, such as (i) accounts receivable monthly reconciliation, (ii) quarterly sales data by destination country, (iii) quarterly sales data by market, (iv) quarterly sales data by sales department, (v) monthly schedule of shipping for insurance companies, (vi) vendor and customer address labels, (vii) audit accounts receivable confirmation; preparing operating data and special reports, as required, and provide personnel in all locations with information required by them to carry out their
assigned responsibilities; and, identifying and preparing special research projects. The successful candidate must possess a bachelor’s degree in computer systems engineering and 2 years of experience as a computer administrator. Hours: 8:00 a.m. - 5:00 p.m. 35+ hours per week. Salary: $36,000.00 per year. Contact LA Office of Employment Security, Job Order 135535, 1530 Thalia Street, New Orleans, LA 70130-4426.
SENIOR SECURITY CONSULTANT. Requires a B.A. in Business plus 2 yrs exp in job offered, or 2 yrs exp as Software Engineer or Consultant. Exp must include extensive background providing support, advice, and guidance on the correct application of security solutions or techniques, including exp developing recommendations for complete business solutions or technical security applications. Must have demonstrated ability to research technologies available in the security solutions area and evidence an overall understanding of various IT environments to assess appropriate security technologies. Design, develop, and implement solutions using advanced techniques and tools applicable to areas of resource provisioning management (RPM), permissions management infrastructure (PMI), and security infrastructure assessments across multiple lines of business. Provide leadership to junior consultants in the security practice, including assisting them with resolving routine problems. Function as technical and/or project lead. Assist junior consultants with designs, quality control, and test procedures for security solutions. Apply technical security expertise to support development of technology architecture and total systems solutions. Perform competitive analyses. Lead technical portion of security demonstrations internally/externally. Position requires extensive travel to customer sites. Salary $69,056/yr. Send resume to: Dept. of Workforce Svcs., Attn: Erlinda Anderson, J.O.# 8173219, 140 E. 300 So., SLC, UT 84111.
UNIVERSITY OF WATERLOO, Associate Director of Software Engineering. The Software Engineering Board invites applications for a five-year, definite-term position as Associate Director at the rank of Lecturer, with the possibility of the appointment being converted to a permanent position. A candidate must possess a graduate degree, preferably a Ph.D., in Software Engineering, Computer Science, or Computer Engineering, and must be willing to seek Ontario registration as a Professional Engineer. The candidate must also demonstrate an aptitude for outstanding teaching in software engineering or a related area. Industrial experience is desirable. The appointment could start as early as May 2005. Applications will be considered until the position is filled. At Waterloo, Software Engineering (http://www.softeng.uwaterloo.ca/) is a professional undergraduate program jointly offered by the Department of Electrical and Computer Engineering (http://www.ece.uwaterloo.ca/) and the School of Computer Science (http://www.cs.uwaterloo.ca/). The faculty are international leaders in software-engineering research, and the program attracts many of the best students in the country, admitting more than one hundred students each year. Excellent offices, laboratories, and computing facilities, and supportive staff provide for a productive work environment. The role of the Associate Director is to help administer the Software Engineering program. Primary duties include teaching, academic advising, promoting the program, and coordinating administrative tasks with counterparts in Computer Science and in Electrical and Computer Engineering. Scholarly activities, such as professional development and/or participation in research, are also expected. The University of Waterloo encourages applications from all qualified individuals, including women, members of visible minorities, native peoples, and persons with disabilities. All qualified candidates are encouraged to apply; however, Canadian citizens and permanent residents will be given priority. Applications should be sent by electronic mail to se-director@uwaterloo.ca, or by post to Dr. Joanne Atlee, Director of Software Engineering, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1. An application should include a curriculum vitae, a statement of career objectives, and the names and email addresses of at least three referees. Applicants should ask their referees to forward letters of reference to the address above. Applications will be considered as soon as they are complete, and as long as the position is available.
UNIVERSITY OF WEST FLORIDA. Position #11829. Opportunity to shape the future of the department and the careers of 17 bright, primarily junior-level faculty. Position available as early as August 2005, but open until filled. The successful candidate must hold the Ph.D. in Computer Science or a closely related discipline, be eligible for tenure, possess strong leadership skills, and have a record of achievement in teaching, academic research, and service. Field of expertise is open, and salary is competitive and commensurate with experience. The University of West Florida has a student population of 10,000 and is situated on a picturesque thousand-acre nature preserve at the northern edge of Pensacola, a semi-urban area with a population of 300,000. The department consists of 600 undergraduate majors enrolled in BS degrees in Computer Science, Computer Information Systems, and Interdisciplinary Information Technology, and 50 MS majors with specializations in either Computer Science or Software Engineering. The primary focus of the department is teaching; however, all faculty members are expected to conduct scholarly research. Faculty research interests include artificial intelligence, computers in education, database systems, image processing, networks, operating systems, pattern recognition, software engineering, simulation, and theory of computation. The department has a close association with the Institute for Human and Machine Cognition (IHMC), a research institute in downtown Pensacola. A police background screening is required. For more information about UWF, visit our website at www.uwf.edu. Application Procedures: Applicants must apply online at https://jobs.uwf.edu. Be prepared to attach a curriculum vitae and letter of application/interest to the online application. Send three (3) sealed letters of recommendation and official transcripts to Dr. Leo terHaar, Selection Committee Chair, Department of Computer Science, 79/102, University of West Florida, 11000 University Parkway, Pensacola, FL 32514. Questions may be addressed to [email protected]. Application Deadline: Position is open until filled. Preference will be given to those who apply by May 31, 2005. UWF is an Equal Opportunity/Access/Affirmative Action Employer.
ENGINEER, ASIC VERIFICATION. To work with RTL engineers to validate complex ASIC designs. Requires BSEE, relevant experience verifying systems-on-a-chip, Vera, Verilog, C, Linux, PERL. Send resume to Astute Networks, Inc. 16516 Via Esprillo Ste #200 San Diego, CA 92127 or email to
[email protected].
UNIVERSITY OF ALBERTA. The Department of Computing Science at the University of Alberta is seeking qualified individuals to fill a position at the level of assistant professor in the areas of image/signal processing and algorithm design. This is a soft-tenure track position. The initial appointment will be for four years, and continuation is subject to availability of funding. The first probationary period is normally 4 years (unless credit for previous service is granted), and a second probationary period must be 2 years. This position is in support of a major research initiative, funded by the federal and provincial governments and industrial partners, on developing intelligent sensing technologies for monitoring oil sand mining operations (see www.cs.ualberta.ca/~cims). Candidates should have a Ph.D. in CS or EE, with specialization in image and signal processing or computer vision. Preference will be given to applicants with knowledge and experience in adaptive image/signal processing, stochastic and multi-scale techniques for image modeling and analysis, and sensor fusion (intensity/range) algorithms. Working with an NSERC industrial research
chair, the candidate is expected to establish a research program, develop novel solutions to practical industrial problems, and supervise students at both the graduate and undergraduate levels. The position will also require teaching at a reduced load. Strong communication, project management, interpersonal, and team leadership skills are important qualities. The competition will remain open until a suitable candidate is found. Find further details about us at www.cs.ualberta.ca. To apply, send your curriculum vitae and the names and addresses of three referees to: Iris Everitt, Administrative Assistant, Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada T6G 2E8, or everitt@cs.ualberta.ca. All qualified candidates are encouraged to apply; however, Canadian citizens and permanent residents will be given priority. The University of Alberta hires on the basis of merit. We are committed to the principle of equity of employment. We welcome diversity and encourage applications from all qualified women and men, including persons with disabilities, members of visible minorities, and Aboriginal persons.
SUBMISSION DETAILS: Rates are $290.00 per column inch ($300 minimum). Eight lines per column inch and an average of five typeset words per line. Send copy at least one month prior to publication date to: Marian Anderson, Classified Advertising, Computer Magazine, 10662 Los Vaqueros Circle, PO Box 3014, Los Alamitos, CA 90720-1314; (714) 821-8380; fax (714) 821-4010. Email:
[email protected].
In order to conform to the Age Discrimination in Employment Act and to discourage age discrimination, Computer may reject any advertisement containing any of these phrases or similar ones: “…recent college grads…,” “…1-4 years maximum experience…,” “…up to 5 years experience,” or “…10 years maximum experience.” Computer reserves the right to append to any advertisement without specific notice to the advertiser. Experience ranges are suggested minimum requirements, not maximums. Computer assumes that since advertisers have been notified of this policy in advance, they agree that any experience requirements, whether stated as ranges or otherwise, will be construed by the reader as minimum requirements only. Computer encourages employers to offer salaries that are competitive, but occasionally a salary may be offered that is significantly below currently acceptable levels. In such cases the reader may wish to inquire of the employer whether extenuating circumstances apply.
ADVERTISER / PRODUCT INDEX, MARCH 2005

Advertiser / Page Number
Air Force Research Laboratory / 71
Cal Polytechnic State University / 70
D.E. Shaw & Company / 71
HiPC 2005 / 25
ICSE 2005 / 74
IEEE Computer Society Membership / 90-92
IPDPS 2005 / Cover 2
ITCC 2005 / 69
John Wiley & Sons, Inc. / 5
LCN 2005 / 11
Seapine Software / Cover 4
SPECTS 2005 / 32
WCRE 2005 & WICSA 2005 / Cover 3
Classified Advertising / 70-73
Advertising Personnel

Marion Delaney, IEEE Media, Advertising Director. Phone: +1 212 419 7766; Fax: +1 212 419 7589; Email: [email protected]
Sandy Brown, IEEE Computer Society, Business Development Manager. Phone: +1 714 821 8380; Fax: +1 714 821 4010; Email: [email protected]
Marian Anderson, Advertising Coordinator. Phone: +1 714 821 8380; Fax: +1 714 821 4010; Email: [email protected]
Advertising Sales Representatives

Mid Atlantic (product/recruitment): Dawn Becker. Phone: +1 732 772 0160; Fax: +1 732 772 0161; Email: [email protected]
New England (product): Jody Estabrook. Phone: +1 978 244 0192; Fax: +1 978 244 0103; Email: [email protected]
New England (recruitment): Robert Zwick. Phone: +1 212 419 7765; Fax: +1 212 419 7570; Email: [email protected]
Connecticut (product): Stan Greenfield. Phone: +1 203 938 2418; Fax: +1 203 938 3211; Email: [email protected]
Midwest (product): Dave Jones. Phone: +1 708 442 5633; Fax: +1 708 442 7620; Email: [email protected]
Midwest (product): Will Hamilton. Phone: +1 269 381 2156; Fax: +1 269 381 2556; Email: [email protected]
Midwest (product): Joe DiNardo. Phone: +1 440 248 2456; Fax: +1 440 248 2594; Email: [email protected]
Southeast (recruitment): Thomas M. Flynn. Phone: +1 770 645 2944; Fax: +1 770 993 4423; Email: [email protected]
Midwest/Southwest (recruitment): Darcy Giovingo. Phone: +1 847 498 4520; Fax: +1 847 498 5911; Email: [email protected]
Northwest/Southern CA (recruitment): Tim Matteson. Phone: +1 310 836 4064; Fax: +1 310 836 4067; Email: [email protected]
Southwest (product): Josh Mayer. Phone: +1 972 423 5507; Fax: +1 972 423 6858; Email: [email protected]
Southeast (product): Bob Doran. Phone: +1 770 587 9421; Fax: +1 770 587 9501; Email: [email protected]
Northwest (product): Peter D. Scott. Phone: +1 415 421 7950; Fax: +1 415 398 4156; Email: [email protected]
Japan: Tim Matteson. Phone: +1 310 836 4064; Fax: +1 310 836 4067; Email: [email protected]
Southern CA (product): Marshall Rubin. Phone: +1 818 888 2407; Fax: +1 818 888 4907; Email: [email protected]
Europe (product/recruitment): Hilary Turnbull. Phone: +44 1875 825700; Fax: +44 1875 825701; Email: [email protected]
27th International Conference on Software Engineering
St. Louis, Missouri, USA, 15-21 May 2005
Sponsored by ACM SIGSOFT and IEEE Computer Society-TCSE
http://www.icse-conferences.org/2005/

Call for Professional Engagement

Research Track Chairs William Griswold UC San Diego, USA Bashar Nuseibeh The Open University, UK

Experience Reports Track Chairs Constance Heitmeyer Naval Research Lab, USA Klaus Pohl U. of Duisburg-Essen, Germany

Educational Reports Track Chairs Paola Inverardi U. of L'Aquila, Italy Mehdi Jazayeri TU of Vienna, Austria and U. of Lugano, Switzerland
State of the Art David Garlan Carnegie Mellon U., USA Jeff Kramer Imperial College, UK
State of the Practice Wolfgang Emmerich University College London, UK David Rosenblum University College London, UK
Extending the Discipline John Mylopoulos U. of Toronto, Canada Ian Sommerville Lancaster U., UK
Panels Jeffrey Magee Imperial College, UK Kevin Sullivan U. of Virginia, USA
Workshops & Co-located Events Andre van der Hoek UC Irvine, USA Gian Pietro Picco Politecnico di Milano, Italy
Tutorials Martin Glinz U. of Zurich, Switzerland Jens Jahnke U. of Victoria, Canada

Software Everywhere is the theme of ICSE 2005. It acknowledges the increasingly important role software plays in the life of our society through the technology that sustains it. The theme also highlights the growing level of responsibility our profession is expected to assume and the need to reach out to other disciplines that have an impact upon or benefit from software engineering know-how. You are invited to be part of this extraordinary event and to take advantage of a program designed to stimulate, educate, promote intellectual exchanges, plant seeds of innovation, and encourage early adoption of new software engineering technologies. Both the technical and the social program have been designed to create an environment leading to exciting technical interactions and new collaborative initiatives among the conference participants. ICSE is recognized as the premier forum for researchers, practitioners, and educators to present and discuss the most recent ideas, innovations, trends, experiences, and concerns in the field of software engineering. This year's program builds upon a tradition of excellence that goes back almost three decades. It is discriminating in its choices and rich in its offerings. The program includes:
Research Demos Prem Devanbu UC Davis, USA Cecilia Mascolo University College London, UK
Exhibits Rose Gamble U. of Tulsa, USA Rick Kazman SEI & U. of Hawaii, USA
Doctoral Symposium Gail Murphy U. of British Columbia, Canada Kumiyo Nakakoji U. of Tokyo, Japan
Most Influential Paper David Notkin U. of Washington, USA
New Faculty Symposium Leon Osterweil U. of Massachusetts, USA
Midwest Consortium Matthew Dwyer U. of Nebraska, USA
High Schools Liaison Kenneth J. Goldman Washington U., St. Louis, USA Christine Roman St. Louis Science Center, USA
Student Volunteers Bruce McMillin U. of Missouri - Rolla, USA
Student Research Competition Grigore Rosu U. of Illinois, Urbana-Champaign, USA

Sponsorship Sol Shatz U. of Illinois, Chicago, USA

Treasurer Judith Stafford Tufts U., USA

• 44 research papers selected from among 313 submissions
• 14 experience reports selected from among 72 submissions
• 22 position talks on education and training
• eight formal research demonstrations and eight informal ones
• 12 invited talks
• a talk by the recipients of the ICSE-17 Most Influential Paper award
• 19 workshops covering a wide range of topics in a highly interactive setting
• 16 tutorials offering exceptional opportunities for training and the acquisition of new skills
• a panel to examine the notion of an emerging science of design for software technology
• symposia addressing the needs of new faculty and doctoral students
• a retrospective of developments in empirical software engineering
• the first meeting of the Midwest Software Engineering Consortium
• a Regional Information Technology Summit
• four co-located conferences (SoftVis, ProSIM, IWPC, CBSE)

The three conference days are associated with three major themes (state of the art, extending the discipline, and state of the practice) that are reflected in the keynote and the invited talks being presented that day. The three keynote speakers are internationally recognized individuals who will contribute very diverse points of view to the conference discourse:

• Luca Cardelli (disruptive forces in programming technology)—state of the art
• Richard Florida (rise and flight of the creative class)—extending the discipline
• Erich Gamma (agile, open source, distributed, and on-time software)—state of the practice
Publicity, Design & Advertising Daniela Damian U. of Victoria, Canada Will Tracz Lockheed Martin, USA
Proceedings Frank Maurer U. of Calgary, Canada
Local Arrangements Chris Gill Washington U., St. Louis, USA
St. Louis welcomes the conference in the elegant setting of the Adam's Mark Hotel on the Mississippi riverfront, in the shadow of a monumental feat of engineering, the St. Louis Gateway Arch—the venue for the conference reception. The starting point for the historic Lewis and Clark expedition and a cradle of jazz, the region offers visitors a wide range of tourist and entertainment opportunities for both individuals and families with children. An unprecedented level of corporate support from the St. Louis business community and others ensures that the meeting will be remembered for its impact on the profession and the region, as well as for a wonderful social program.
Webmaster Amy Murphy U. of Lugano, Switzerland
Press Liaison Alexander Wolf U. of Colorado, USA
The Conference Web Site offers up-to-date news on conference events, hotel reservations, registration, tourist information, travel, and more.

General Chair Gruia-Catalin Roman, Washington U. in St. Louis, USA
Program Chairs William Griswold, UC San Diego, USA Bashar Nuseibeh, The Open University, UK
COMPUTER SOCIETY CONNECTION
New CSDP Testing Sites Open in 2005

The IEEE Computer Society recently announced the addition of 31 new Certified Software Development Professional exam administration centers in Western Europe, Central Asia, and the Balkans. The CSDP certification program is unique in the software engineering field, offering exposure to recent advances in engineering theory, gains in employment distinction, and career opportunities. Experienced software developers who desire external validation of their skills are invited to take the exam. The IEEE Computer Society CSDP credential offers developers the opportunity to demonstrate their understanding of software engineering practice. The 180-question, 3.5-hour CSDP examination is intended for mid-level professionals and carries the brand, reputation, and standards of the IEEE Computer Society.
CSDP TEST DETAILS
CSDP candidates must hold a baccalaureate degree and must have at least two years of software engineering experience within the four-year period prior to the application. Candidates must also have a total of at least 9,000 hours of relevant experience. CSDP certificate holders are required to renew their certification every three years by completing 30 units of professional development work and submitting a $150 recertification fee.
The CSDP examination consists of 180 multiple-choice questions gleaned from 11 topic areas, including software construction, maintenance, and quality. Exam questions are based on concepts that should be familiar to engineers with six or more years of experience. CSDP examinations are administered by Prometric, which performs live, computer-based testing at hundreds of locations throughout the world. In addition to the 31 new test sites, the CSDP exam is offered at locations in Asia, Europe, India, North America, and South America.
BROAD-BASED CERTIFICATION
Product-specific requirements form the foundation of many recent technical certification programs. For example, an expert in Novell, Microsoft, or Linux systems can seek a certificate that reflects expertise in those particular environments. Other technical certification programs are often driven by project- or occupation-specific requirements. The IEEE Computer Society has recognized the need for one broad, objective certification program that acknowledges a level of advanced skill in all facets of software development. The skills tested during the CSDP exam process are not vendor-specific and should prove relevant far into the future. CSDP certification not only serves to further the careers of those who take the test; it also provides a real measure of return on investment for a project manager or employer.

Rutgers' James L. Flanagan Receives 2005 IEEE Medal of Honor
The IEEE has bestowed its prestigious 2005 Medal of Honor upon James L. Flanagan, who recently retired from his position as vice president of research and director of the Center for Advanced Information Processing (CAIP) at New Jersey's Rutgers University. Flanagan's award cites his "…sustained leadership and outstanding contributions in speech technology." Flanagan joined Rutgers in 1990 after a 30-year career at Bell Labs, where he directed research in speech recognition, speech synthesis, digital coding, electroacoustics, robotics, and artificial intelligence. During his 14 years as director of CAIP, Flanagan worked to promote cooperation among academia, industry, and government in computer applications research. Flanagan's other honors include the 1986 IEEE Edison Medal, the National Medal of Science, and election to the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences. In 2004, the IEEE established the James L. Flanagan Speech and Audio Processing Award, which honors an outstanding contribution to the advancement of speech or audio signal processing by an individual or small team. Nominations for the 2006 IEEE Medal of Honor are due by 1 July. Nomination forms are available at www.ieee.org/about/awards/sums/mohsum.htm.

SPECIAL PREP COURSE AND TESTING OPPORTUNITIES
The IEEE Computer Society provides several opportunities to prepare for the CSDP exam. In addition to recommended books and online coursework, CSDP organizers have arranged for instructor-led tutorials in the coming months. Candidates can participate in a CSDP training class at the 2005 Systems and Software Technology Conference (SSTC) in Salt Lake City. Conducting the course, set for 16-18 April, will be author Richard Thayer, the original developer of training materials for the CSDP. Following Thayer's onsite course, IEEE Computer Society officials will administer a CSDP examination at SSTC on 22 April. The prep course and exam are priced at $300 each for conference registrants. To take the CSDP exam at SSTC, first register for the conference at www.stc-online.org, then complete and submit the CSDP application form by 7 April. Thayer will also lead a three-day version of the course on three separate occasions: 21-23 March in Los Angeles; 30 March–1 April in San Francisco; and
5-7 April in Portland, Oregon. Fees for the West Coast courses are $995, with a guaranteed pass-or-don't-pay refund policy. See www.wyyzzk.com for details. For software engineers in other parts of the world, the Computer Society also offers a CSDP training course called Software Engineering Overview in the Distance Learning Campus. The course, available to members for $395 at www.computer.org/certification/DistanceLearning, provides a comprehensive review of essential software engineering principles.

Applications for the Spring 2005 testing window, which is open from 1 April to 30 June, are due by 15 April. For the Fall 2005 testing window, which is open from 1 September to 30 November, applications are due by 1 September. CSDP application and examination fees are $450 for IEEE or Computer Society members and $550 for nonmembers. Recent federal action has classified CSDP testing fees as reimbursable to veterans under the GI Bill. Two to three weeks after an application is accepted, approved candidates will be mailed an authorization to test. Candidates must receive the authorization before scheduling an appointment to take the exam. Further application information is available at www.computer.org/certification/apply.htm. ■
Editor: Bob Ward, Computer;
[email protected]
CSIDC Participating Schools for 2005

The goal of the Sixth Annual Computer Society International Design Competition (CSIDC) is to advance excellence in education by having student teams design and implement computer-based solutions to real-world problems. CSIDC teams work over the bulk of an academic year to build their systems, following a process that mimics the evolution of a commercial product. By mid-January, each team supplies a working title and, by mid-February, teams submit interim reports to a panel of CSIDC judges. Volunteers from academia and industry judge the reports for adherence to contest rules and for competitive viability. Based on these reports, the judges select the projects that seem most likely to have commercial potential and notify teams of their standing by 11 March. Teams that remain in contention must prepare a final report by 23 April. Participants in CSIDC 2005 come from every part of the globe and are listed below by IEEE region.
Region 1
DeVry Institute of Technology, Long Island City
State University of New York, Potsdam
Union College, Schenectady
Worcester Polytechnic Institute

Region 2
Denison University

Region 3
Florida Atlantic University
Florida Gulf Coast University
Georgia Southern University
Morehouse College
North Carolina State University

Region 4
Anoka Ramsey Community College
Bradley University
DePaul University
Iowa State University
Lake Superior State University
Purdue University, Calumet
University of Nebraska, Lincoln
Wayne State University

Region 5
Front Range Community College
Louisiana State University, Baton Rouge
University of Denver

Region 6
California State Polytechnic University, Pomona
California State University, Long Beach

Region 7
Carleton University
Polytechnic School of Montreal
University of British Columbia

Region 8
American University of Beirut (2 teams)
American University of Sharjah
Aptech Computer Education, Abuja
Aristotle University of Thessaloniki
Athens University of Economics and Business
Budapest Polytechnic
Cairo University
Cape Verde University
College for Teachers of Technology, Tel Aviv
Cosmos High School, Windhoek
Eastern Mediterranean University
Fatih University
Hantoub High School
Ibadan Polytechnic
Institute of Computer Studies and Services
Iran University of Science & Technology
Kharkiv National University of Radioelectronics
Middle East Technical University
Modern College of Business & Science
Moscow Institute of Physics and Technologies
Polytechnical University of Bucharest
Poznan University of Technology
Shalom IT Center College
Slovak University of Technology
Technical University of Braunschweig
Technical University of Iasi
Technical University of Plovdiv
University of Coimbra
University of Jordan
University of Kocaeli
University of Lagos
University of Pretoria
Yaba College of Technology
Yildiz Technical University

Region 9
Autonomous University of Aguascalientes
Bangladesh University of Engineering and Technology
Catholic University of Andres Bello
Greater National University of San Marcos
Industrial University of Santander
Military Institute of Engineering
Military School of Engineering, La Paz
National Technological University, Cordoba
Pontifical University of Bolivariana, Medellin
Technological Institute of Merida
University of Cauca
University of Francisco Jose de Caldas

Region 10
Banaras Hindu University
Beijing University of Posts and Telecommunications
Beijing University of Technology
Dwarkadas J. Sanghvi College of Engineering
Fr. Conceicao Rodrigues College of Engineering
ICFAI Institute of Science and Technology, Hyderabad
Indian Institute of Technology, Guwahati
Indian Institute of Technology, Kanpur
Indian Institute of Technology, Kharagpur
Institute of Technology, Varanasi
International Institute of Information Technology
Jawaharlal Nehru Engineering College
Jaypee Institute of Information Technology, Noida
Kathmandu Engineering College
Kongu Engineering College
Lahore University of Management Sciences
Malaysia University of Technology
Meiji University
Motilal Nehru National Institute of Technology
Nanyang Technological University
National Chiao Tung University
National Taipei University of Technology
National Taiwan University
National University of Computer and Emerging Sciences, Karachi
National University of Computer and Emerging Sciences, Lahore
Nepal College of Information Technology
Nepal Engineering College
Northern Taiwan Institute of Science and Technology
Shri Guru Gobind Singhji Institute of Engineering and Technology
Sikkim Manipal University
Sir Syed University of Engineering and Technology
Sri Siva Subramaniya Nadar College of Engineering
Sri Venkateswara College of Engineering
Thadomal Shahani Engineering College
Thigarajar College of Engineering
Tribhuvan University, Pulchowk
University of Engineering and Technology, Lahore
University of Visvesvaraya
Usha Mittal Institute of Technology
Vasavi College of Engineering
Vision International College of Computer & Management Sciences
Vivekanand Education Society Institute of Technology
RENEW your IEEE Computer Society membership for...
✔ 12 issues of Computer
✔ Access to 350 distance learning course modules
✔ Access to the IEEE Computer Society online bookshelf
✔ Membership in your local Society chapter
http://www.ieee.org/renewal
CSIDC 2005 Selects Contestants

Student teams from undergraduate institutions around the world have begun the battle for a top slot at the June 2005 Computer Society International Design Competition Finals in Washington, D.C. Since 2000, CSIDC's first year, the number of participating teams has continued to expand due to increasing international interest and to a simplified contest structure. This year's competition attracted 300 entries from more than 100 institutions. University of Teesside professor Alan Clements has chaired CSIDC since 2001. Currently, teams operate under a $400 spending limit that serves to discourage the use of sophisticated system peripherals.
COMPETITION
Teams competing in CSIDC use a
PC, laptop, handheld computer, or other off-the-shelf device combined with additional low-cost hardware and software to create a computer-based solution to a real-world problem. A primary goal of the competition is to encourage student teams to create projects that perform a socially useful function. CSIDC 2004 winner, Poland's Poznan University of Technology, met last year's challenge of "Making the World a Safer Place" by creating "Lifetch," a GPS-based wilderness tracking and rescue system. Teams from Poznan University have finished in the top three at all but one of the past four CSIDC events, including a first-place finish in 2001. The theme of CSIDC 2005 is "Going Beyond the Boundaries." Contest organizers encourage innovative entries that have real-world applications extending well beyond the confines of digital technology. Teams taking part in this year's competition are listed in the "CSIDC Participating Schools for 2005" sidebar. A change in the rules for this year's event has opened the competition to allow teams to recruit one member from a discipline outside computing. Early in the year, CSIDC teams are required to submit an interim report on their projects. A team of judges evaluates the interim reports and invites the most promising teams to submit a final report that details the project in its completed form. Because only one team from any institution can submit a final report, colleges or universities with more than one team in play must devise internal methods of determining which team will represent the school. After reviewing the final reports, CSIDC officials will announce the top 10 projects on 24 May, inviting four members from each of the 10 teams, along with their faculty mentors, to participate in the 27-29 June CSIDC 2005 World Finals in Washington, D.C. At the CSIDC World Finals, teams demonstrate their projects in formal multimedia presentations and interactive Q&A sessions. Judges review the finalists' entries for originality, technical excellence, social usefulness, evidence of teamwork, feasibility and practicality, system integrity, and quality, including the caliber of presentation materials and delivery.

IEEE Computer Society Seeks Merwin Scholarship Applications by 31 May
The IEEE Computer Society encourages active members of its student branches to apply for the 2005-2006 Richard E. Merwin Student Scholarship. The scholarship honors a past president of the Computer Society and recognizes leaders in Society student branch chapters who show promise in their academic and professional efforts. Up to 10 scholarships of $4,000 each are available, paid in four quarterly installments that begin in September. Winners of the Merwin Scholarship serve as IEEE Computer Society Student Ambassadors for their IEEE regions. Student Ambassadors collect and disseminate information to Computer Society student chapters in their region and serve as liaisons to the Chapters Activities Board. Active members of Computer Society student branch chapters who are juniors, seniors, or graduate students in electrical or computer engineering, computer science, or a computer-related field of engineering are eligible to apply. Applicants must be full-time students and are required to have a minimum 2.5 GPA. Merwin Scholarship applications are due by 31 May. Other awards and scholarships that the Computer Society offers to students include the Lance Stafford Larson best paper contest and the Upsilon Pi Epsilon/Computer Society Award for Academic Excellence, which is administered jointly by the IEEE Computer Society and the Upsilon Pi Epsilon international honor society. For more information about Computer Society student scholarships and awards, visit www.computer.org/students/schlrshp.htm.
PRIZES
Changes to the 2005 competition have allocated more prize money to the top 10 finalists. Members of the first-place team at the CSIDC World Finals will split a $20,000 cash prize. Members of the second- and third-place teams will divide $15,000 and $10,000 prizes, respectively. Each of the remaining seven teams receives an honorable
mention and a $2,500 shared prize. All finalists also receive a complimentary one-year individual subscription to any IEEE Computer Society magazine. In addition to the main awards, teams who place in the top 10 at CSIDC 2005 will be eligible for two
special prizes. The $3,000 Microsoft Multimedia Award goes to the team whose presentation makes the most interesting, innovative, exciting, and appropriate use of multimedia. The $3,000 Microsoft Award for Software Engineering recognizes the project that
best exemplifies the application of good software engineering principles to the design and testing of a device prototype. CSIDC is sponsored by Microsoft. For more information on CSIDC, visit www.computer.org/csidc/.■
Three Award Nominations Due by 31 July

The nearly two dozen honors granted each year by the IEEE Computer Society include three awards that recognize individuals and faculty groups for their outstanding contributions to computer science, engineering, and education. In 1992, the Computer Society established the Sidney Fernbach Memorial Award to recognize individuals who have made notable strides in developing applications for high-performance computing. Sidney Fernbach was a pioneer in the use of high-performance computers for solving large computational problems. Nominations for the honor are evaluated by awards committees associated with the SC 2005 high-performance computing, networking, and storage conference. The Fernbach award winner receives a certificate of recognition and a $2,000 honorarium. The Seymour Cray Computer Science and Engineering Award recognizes individuals whose innovative contributions to high-performance computing systems best reflect the creative spirit of supercomputing pioneer Seymour Cray. Recipients of the Cray Award receive a crystal memento, an illuminated certificate, and a $10,000 honorarium. Recent Cray honorees include John Hennessy, Monty Denneau, and Burton J. Smith. Winners of both the Cray and Fernbach awards accept their honors during a special ceremony at SC. The Computer Society also awards the annual Undergraduate Teaching Award in Computer Science & Engineering to a professor or faculty group who demonstrate an enduring and significant commitment to undergraduate education through teaching and professional service. The award can also acknowledge efforts to increase the Society's visibility. Honorees receive a plaque and a $2,000 honorarium. The IEEE Computer Society awards program recognizes technical achievements, contributions to engineering
education, and service to the Society or the profession. Nominations for the Fernbach, Cray, and Undergraduate Teaching Awards are due by 31 July. Nominations for most other Society awards have a 31 October deadline. To obtain nomination materials for any IEEE Computer Society award, visit www.computer.org/awards/. ■
Computer Society and IEEE Foundation Sponsor More Than $15,000 in Intel ISEF Prizes Each year, both the IEEE Computer Society and the IEEE Foundation sponsor special awards for outstanding high school students at the Intel International Science and Engineering Fair, which takes place this year from 8-14 May in Phoenix, Arizona. At ISEF, students from grades nine through 12 compete for more than $3 million in scholarships, scientific trips, tuition grants, and scientific equipment. The annual event draws competitors from more than 40 countries, making it the world’s largest international high school science and engineering competition. At ISEF, the Computer Society typically sponsors six to eight individual and team awards that range from $300 to $700. Computer Society winners at ISEF receive a framed certificate and a one-year free subscription to an IEEE Computer Society magazine of their choice. A group photo of the winners will be published in an upcoming issue of Computer. For the sixth year, the IEEE Foundation will sponsor an IEEE Presidents’ Scholarship at ISEF. IEEE 2005 President-elect Michael Lightner will present this year’s scholarship in recognition of an outstanding achievement in the research and presentation of engineering knowledge in electrical engineering, information technology, or other IEEE field of interest. The winner will receive $2,500 during each of four years of undergraduate study, as well as an IEEE student membership and student society membership. A framed certificate and an engraved plaque accompany the award. For further information about the IEEE Presidents’ Scholarship, visit www.ieee.org/education/precollege/scholarship/index.html. To learn more about Intel ISEF, see www.sciserv.org/.
CALL AND CALENDAR
CALLS FOR IEEE CS PUBLICATIONS
IEEE Internet Computing invites contributions for a November/December 2005 special issue on security for P2P systems and ad hoc networks. Topics include key management, access control, secure MAC protocols, performance and security trade-offs, and denial of service. Manuscripts are due by 1 April. See the complete call at www.computer.org/internet/call4ppr.htm#v9n6. For an October/November 2005 special issue on artificial intelligence and homeland security, IEEE Intelligent Systems is encouraging submissions of practical and novel AI technologies, techniques, methods, and systems. Submissions on all research areas relating to both AI and national security are welcome. Topics include bioterrorism tracking, alerting, and analysis; criminal data mining; deception detection systems; and crime and intelligence visualization. Manuscripts are due by 1 April. See the complete call at www.computer.org/intelligent/cfp16.htm.
OTHER CALLS
ISESE 2005, ACM-IEEE 4th Int'l Symp. on Empirical Software Eng., 17-18 Nov., Noosa Heads, Australia. Papers due 4 Apr. http://attend.it.uts.edu.au/isese2005/cfp.htm
Submission Instructions
The Call and Calendar section lists conferences, symposia, and workshops that the IEEE Computer Society sponsors or cooperates in presenting. Complete instructions for submitting conference or call listings are available at www.computer.org/conferences/submission.htm. A more complete listing of upcoming computer-related conferences is available at www.computer.org/conferences/.
ICDM 2005, 5th IEEE Int'l Conf. on Data Mining, 26-30 Nov., New Orleans. Papers due 1 Jun. www.cacs.louisiana.edu/~icdm05/cfp.html

HiPC 2005, 12th IEEE Int'l Conf. on High-Performance Computing, 18-21 Dec., Goa, India. Papers due 2 May. www.hipc.org/hipc2005/papers.html

CALENDAR

APRIL 2005
2-3 Apr: SLIP 2005, Int'l Workshop on System-Level Interconnect Prediction, San Francisco. www.sliponline.org/
3-8 Apr: SEW-29, 29th IEEE/NASA Software Eng. Workshop, Greenbelt, Md. http://sel.gsfc.nasa.gov/
4-5 Apr: ECBS 2005, 12th Ann. IEEE Int'l Conf. and Workshop on Eng. of Computer-Based Systems (with SEW-29), Greenbelt, Md. http://abe.eng.uts.edu.au/ECBS2005/
4-8 Apr: ISADS 2005, 7th Int'l Symp. on Autonomous Decentralized Systems, Chengdu, China. http://isads05.swjtu.edu.cn/
4-8 Apr: IPDPS 2005, Int'l Parallel & Distributed Processing Symp., Denver, Colo. www.ipdps.org/
5-8 Apr: ICDE 2005, 21st Int'l Conf. on Data Eng., Tokyo. http://icde2005.is.tsukuba.ac.jp/
6-7 Apr: EDPS 2005, Electronic Design Process Symp., Monterey, Calif. www.eda.org/edps/
7-9 Apr: IPCCC 2005, 24th IEEE Int'l Performance Computing & Communications Conf., Phoenix, Ariz. www.ipccc.org/
10-13 Apr: ITSW 2005, 12th Int'l Test Synthesis Workshop, Santa Barbara, Calif. www.tttc-itsw.org/
11-13 Apr: ITCC 2005, Int'l Conf. on IT Coding and Computing, Las Vegas. www.itcc.info/
11-14 Apr: MSST 2005, 22nd IEEE Conf. on Mass Storage Systems and Technologies, Monterey, Calif. www.storageconference.org/
13-16 Apr: DDECS 2005, 8th IEEE Workshop on Design & Diagnostics of Electronic Circuits & Systems, Sopron, Hungary. http://sauron.inf.mit.bme.hu/DDECS05.nsf
17-20 Apr: FCCM 2005, IEEE Symp. on Field-Programmable Custom Computing Machines, Napa, Calif. www.fccm.org/
18-20 Apr: CSEE&T 2005, 18th Conf. on Software Eng. Education & Training, Ottawa, Canada. www.site.uottawa.ca/cseet2005/
20-22 Apr: Cool Chips VIII, Int'l Symp. on Low-Power & High-Speed Chips, Yokohama, Japan. www.coolchips.org/

MAY 2005
1 May: DBT 2005, IEEE Int'l Workshop on Current & Defect-Based Testing (with VTS-05), Rancho Mirage, Calif. www.cs.colostate.edu/~malaiya/dbt.html
1 May: NANOARCH 2005, IEEE Int'l Workshop on Design & Test of Defect-Tolerant Nanoscale Architectures (with VTS-05), Rancho Mirage, Calif. www.nanoarch.org/
1-5 May: VTS 2005, 23rd IEEE VLSI Test Symposium, Rancho Mirage, Calif. www.tttc-vts.org/
9-12 May: CCGrid 2005, 5th IEEE Int'l Symp. on Cluster Computing & the Grid, Cardiff, UK. www.cs.cf.ac.uk/ccgrid2005/
10-13 May: SPI 2005, IEEE 9th Workshop on Signal Propagation on Interconnects, Garmisch-Partenkirchen, Germany. www.spi.uni-hannover.de/
11-13 May: NATW 2005, IEEE 14th North Atlantic Test Workshop, Essex Junction, Vt. www.ee.duke.edu/NATW/
15-16 May: IWPC 2005, 13th Int'l Workshop on Program Comprehension (with ICSE), St. Louis, Mo. www.ieee-iwpc.org/iwpc2005/
15-21 May: ICSE 2005, 27th Int'l Conf. on Software Eng., St. Louis, Mo. www.cs.wustl.edu/icse05/Home/index.shtml
16-19 May: ISEE 2005, IEEE Int'l Symp. on Electronics & the Environment, New Orleans. www.regconnect.com/content/isee/
18-20 May: ISORC 2005, 8th IEEE Int'l Symp. on Object-Oriented Real-Time Distributed Computing, Seattle. http://shay.ecn.purdue.edu/~isorc05/
18-21 May: ISMVL 2005, 35th Int'l Symp. on Multiple-Valued Logic, Calgary, Canada. www.enel.ucalgary.ca/ISMVL2005/
22-25 May: ETS 2005, 10th European Test Symp., Tallinn, Estonia. http://deepthought.ttu.ee/ati/ETS/
25-26 May: EBTW 2005, European Board Test Workshop (with ETS 2005), Tallinn, Estonia. www.molesystems.com/EBTW05/
30-31 May: EMNETS-II 2005, 2nd IEEE Workshop on Embedded Networked Sensors, Sydney, Australia. www.cse.unsw.edu.au/~emnet/
JUNE 2005
1-3 June: PADS 2005, 19th ACM/IEEE/SCS Workshop on Principles of Advanced & Distributed Simulation, Monterey, Calif. www.pads-workshop.org/pads2005/index.html
6-8 June: Policy 2005, IEEE 6th Int'l Workshop on Policies for Distributed Systems & Networks, Stockholm. www.policy-workshop.org/2005/
6-9 June: ICDCS 2005, 25th Int'l Conf. on Distributed Computing Systems, Columbus, Ohio. www.cse.ohio-state.edu/icdcs05/
7-11 June: JCDL 2005, IEEE/ACM Joint Conf. on Digital Libraries, Denver, Colo. www.jcdl2005.org/
12-13 June: MSE 2005, Int'l Conf. on Microelectronic Systems Education (with DAC), Anaheim, Calif. www.mseconference.org/
12-15 June: COMPLEXITY 2005, 20th Ann. IEEE Conf. on Computational Complexity, San Jose, Calif. www.computationalcomplexity.org/
13-16 June: ICAC 2005, 2nd IEEE Int'l Conf. on Autonomic Computing, Seattle. www.autonomic-conference.org/
13-16 June: WOWMOM 2005, Int'l Symp. on A World of Wireless, Mobile, & Multimedia Networks, Taormina, Italy. http://cnd.iit.cnr.it/wowmom2005/
13-17 June: SMI 2005, Int'l Conf. on Shape Modeling & Applications, Cambridge, Mass. www.shapemodeling.org/
16-20 June: ICECCS 2005, Int'l Conf. on Eng. of Complex Computer Systems, Shanghai. www.cs.sjtu.edu.cn/iceccs2005/
19-24 June: Int'l Symp. on Emergence of Globally Distributed Data, Sardinia, Italy. www.storageconference.org/
20-22 June: CSFW 2005, 18th IEEE Computer Security Foundations Workshop, Aix-en-Provence, France. www.lif.univ-mrs.fr/CSFW18/
20-26 June: CVPR 2005, IEEE Int'l Conf. on Computer Vision & Pattern Recognition, San Diego, Calif. www.cs.duke.edu/cvpr2005/
22-24 June: CGI 2005, Computer Graphics Int'l Conf. & Workshops, Stony Brook, N.Y. www.cs.stonybrook.edu/~cgi05/
23-24 June: CBMS 2005, 18th IEEE Symp. on Computer-Based Medical Systems, Dublin, Ireland. www.cs.tcd.ie/research_groups/mlg/CBMS2005/index.html
26-29 June: LICS 2005, 20th Ann. IEEE Symp. on Logic in Computer Science, Chicago. http://homepages.inf.ed.ac.uk/als/lics/lics05/
27-29 June: ARITH-17, 17th IEEE Symp. on Computer Arithmetic, Cape Cod, Mass. http://arith17.polito.it/
27-29 June: CollaborateCom 2005, 1st IEEE Int'l Conf. on Collaborative Computing: Networking, Applications, & Worksharing, Cape Cod, Mass. www.collaboratecom.org/
27-30 June: ISCC 2005, 10th IEEE Symp. on Computers & Communication, Cartagena, Spain. www.comsoc.org/iscc/2005/
28 June-1 July: DSN 2005, Int'l Conf. on Dependable Systems & Networks, Yokohama, Japan. www.dsn.org/
30 June-1 July: DCOSS 2005, Int'l Conf. on Distributed Computing in Sensor Systems, Marina del Rey, Calif. www.dcoss.org/

2005 IEEE International Symposium on Electronics and the Environment
The 2005 IEEE International Symposium on Electronics and the Environment provides an opportunity for government representatives and innovators in the electronics and electronics recycling industries to share the latest operational and management strategies, business practices, and regulatory concerns. This year's program will feature advances in design, manufacturing, research, marketing, recycling practice, and policy making. Conference organizers have included a student poster competition in this year's slate of events. ISEE 2005, set for 16-19 May in New Orleans, is presented by the IEEE Computer Society's Technical Committee on Electronics and the Environment, in conjunction with the International Association of Electronic Recyclers (IAER). For more details, including exhibitor information, visit www.regconnect.com/content/isee/.

JULY 2005
5-8 July: ICALT 2005, 5th IEEE Int'l Conf. on Advanced Learning Technologies, Kaohsiung, Taiwan. www.ask.iti.gr/icalt/2005/
6-8 July: ICME 2005, IEEE Int'l Conf. on Multimedia & Expo, Amsterdam. www.icme2005.com/
11-14 July: ICPS 2005, IEEE Int'l Conf. on Pervasive Services, Santorini, Greece. www.icps2005.cs.ucr.edu
11-14 July: MemoCode 2005, 3rd ACM/IEEE Conf. on Formal Methods and Models for Codesign, Verona, Italy. www.irisa.fr/manifestations/2005/MEMOCODE/
12-15 July: ICWS 2005, 3rd IEEE Int'l Conf. on Web Services, Orlando, Fla. http://conferences.computer.org/icws/2005/
12-15 July: SCC 2005, IEEE Int'l Conf. on Services Computing (with ICWS 2005), Orlando, Fla. http://conferences.computer.org/scc/2005/
18-19 July: WMCS 2005, 2nd IEEE Int'l Workshop on Mobile Commerce and Services (with CEC-05), Munich. www.mobile.ifi.lmu.de/Conferences/wmcs05/
19-22 July: CEC 2005, 7th Int'l IEEE Conf. on E-Commerce Technology, Munich. http://cec05.in.tum.de/
20-21 July: WRTLT 2005, Workshop on RTL & High-Level Testing, Harbin, China. http://wrtlt05.hit.edu.cn/
20-22 July: ICPADS 2005, 11th Int'l Conf. on Parallel & Distributed Systems, Fukuoka, Japan. www.takilab.k.dendai.ac.jp/conf/icpads/2005/
23-25 July: ASAP 2005, 16th IEEE Int'l Conf. on Application-Specific Systems, Architectures, & Processors, Samos, Greece. www.ece.uvic.ca/asap2005/
24 July: CLADE 2005, Workshop on Challenges of Large Applications in Distributed Environments (with HPDC-14), Research Triangle Park, N.C. www.cs.umd.edu/CLADE2005/
24-27 July: HPDC-14, 14th IEEE Int'l Symp. on High-Performance Distributed Computing, Research Triangle Park, N.C. www.caip.rutgers.edu/hpdc2005/
27-29 July: NCA 2005, 4th IEEE Int'l Symp. on Network Computing & Applications, Cambridge, Mass. www.ieee-nca.org/

AUGUST 2005
2-4 Aug: ICCNMC 2005, Int'l Conf. on Computer Networks & Mobile Computing, Zhangjiajie, China. www.iccnmc.org/
4-5 Aug: MTDT 2005, IEEE Int'l Workshop on Memory Technology, Design, & Testing, Taipei. http://ats04.ee.nthu.edu.tw/~mtdt/
8-10 Aug: ICCI 2005, 4th IEEE Int'l Conf. on Cognitive Informatics, Irvine, Calif. www.enel.ucalgary.ca/ICCI2005/
8-11 Aug: CSB 2005, IEEE Computational Systems Bioinformatics Conf., Palo Alto, Calif. http://conferences.computer.org/bioinformatics/
22-24 Aug: TABLETOP 2005, IEEE Int'l Workshop on Horizontal Interactive Human-Computer Systems, Mawson Lakes, Australia. Contact [email protected].
29 Aug.-2 Sept: RE 2005, 13th IEEE Int'l Requirements Eng. Conf., Paris. http://crinfo.univ-paris1.fr/RE05/

SEPTEMBER 2005
7-9 Sept: SEFM 2005, 3rd IEEE Int'l Conf. on Software Eng. & Formal Methods, Koblenz, Germany. http://sefm2005.uni-koblenz.de/
12-14 Sept: IWCW 2005, 10th Int'l Workshop on Web Content Caching & Distribution, Sophia Antipolis, France. http://2005.iwcw.org/
15-16 Sept: AVSS 2005, Conf. on Advanced Video & Signal-Based Surveillance, Como, Italy. www-dsp.elet.polimi.it/avss2005/
19-22 Sept: Metrics 2005, 11th IEEE Int'l Software Metrics Symp., Como, Italy. http://metrics2005.di.uniba.it/
19-22 Sept: WI-IAT 2005, IEEE/WIC/ACM Int'l Joint Conf. on Web Intelligence & Intelligent Agent Technology, Compiegne, France. www.comp.hkbu.edu.hk/WI05
19-23 Sept: EDOC 2005, 9th Int'l Conf. on Enterprise Computing, Enschede, Netherlands. http://edoc2005.ctit.utwente.nl/
PRODUCTS
Patchkeeper Simplifies Network Security
Executive Software has released a new patch management module that lets system administrators automate the maintenance and deployment of Microsoft patches across an entire Windows network. Patchkeeper provides an easy-to-use, centralized solution that can be configured according to patch criticality and type, patch certification, operating system, application, and more. It offers automatic e-mail notifications to an unlimited number of recipients whenever missing patches are detected. Patchkeeper can be used either as a stand-alone solution or as part of the company's Sitekeeper 3.5 systems management software suite; www.executive.com.
Improved MenuStrip for Mac OS X MacPowerUser Software has upgraded MenuStrip, its menu-bar enhancement utility for Mac OS X. With the new MenuStrip 3.0.2 features, users can create custom menus and add shortcuts to applications or documents by simply dragging and dropping a folder or group of files. The menu clock also has more date and time display options, and MenuStrip supports several new plugins: Contacts Menu 1.0, which provides easy access to addresses, e-mail, and phone numbers; Safari Menu 1.0, which keeps track of the most frequent and recent Web sites visited with Safari; and TunesControl 1.0, which facilitates iTunes playback. A trial version of MenuStrip 3.0.2 is available for free download, while a license for the full version costs $24.95; www.macpoweruser.com.
ASP-Based Learning Management
GeoLearning Inc. has released GeoMaestro, a 100-percent Web-based e-learning management system that the company claims is easier, faster, and more cost-effective to deploy than traditional client-server LMS solutions. Because GeoMaestro is a hosted service, there is no hardware or software to install. Organizations can simultaneously launch thousands of AICC, SCORM, and ADA Section 508-compliant titles as well as noncompliant courseware; integrate asynchronous online courses, live synchronous events, and instructor-led training; create tests and surveys; develop, assign, and track personalized learning plans; and measure learning results. The system also features an easy-to-use 3D GUI available in multiple languages, reusable learning object technology, class and event scheduling with dynamic resource conflict prevention, and numerous collaborative communication tools. Visit www.geolearning.com for more information.

Synchronous Buck-Boost Regulator for Handhelds
The LTC3442, a synchronous, fixed-frequency, buck-boost DC/DC converter from Linear Technology Corp., is designed to optimize battery runtime for single-cell Li-Ion, multicell alkaline, or NiMH battery-powered applications. The converter can deliver up to 1.2 A of continuous output current at efficiencies as high as 95 percent. Its small implementation footprint is ideal for space-constrained environments such as cell phones, PDAs, wireless and DSL modems, and digital cameras. The LTC3442 is available in a 4 mm × 3 mm DFN package. Pricing starts at $3.95 each in 1,000-piece quantities; www.linear.com.

Mobile Application Builder Adds Desktop Option
DDH Software has added a Windows desktop option to its HanDBase Online Runtime Builder for use in creating freestanding relational database runtime applications for Palm OS and Pocket PC devices. The desktop version offers full support for editing databases, filtering, searching, browsing, importing and exporting data to other file formats, and bidirectional data synchronization between handheld and desktop database management systems. Users who have already purchased a license for either the Palm OS or Windows Mobile version of HanDBase Online Runtime Builder can create Windows desktop solutions for an additional $299. The licensing cost for unlimited, royalty-free distribution of applications to run on both desktop and handheld platforms is $999 when purchased at the same time; www.ddhsoftware.com.

Automated Broadband Services Provisioning
Enhanced Telecommunications Inc.'s CableBridge is a stand-alone, Unix-based tool designed to integrate premium broadband services with existing cable, utility, and telephone billing systems. Key benefits include convergent service billing; automated provisioning of subscription video, pay-per-view, video-on-demand, and cable-modem services; support for digital and analog systems; and comprehensive subscriber and device management. For more information, visit www.etisoftware.com.

IBM Offers Entry-Level OpenPower Server
IBM has released a POWER5 processor-based server tailored for the Linux environment and designed to deliver high-end performance at an entry-level price. The rack-mount eServer OpenPower 710 is available in one- or two-way 1.65-GHz processor configurations and offers up to 32 Gbytes of RAM, more than 570 Gbytes of internal storage, three 64-bit PCI-X slots, dual Ethernet 10/100/1000 Mbps controllers, hot-plug power supplies and cooling, and extensive I/O options, as well as the OpenPower series' advanced virtualization features. The IBM eServer OpenPower 710 starts at $3,449 for a one-way system and $3,995 for a two-way system; www.ibm.com.

Stylus Studio Revamps XML IDE
Stylus Studio offers a number of new tools, features, and enhancements in the latest version of its XML integrated development environment. Stylus Studio 6 XML Professional Edition, Release 2, includes full support for EDI-to-XML mapping with the new Convert-to-XML legacy data integration tool, an enhanced XSLT 2.0 editor and debugger, updated XQuery 1.0 support, a new XML Schema editor, improved XML-to-XML mapping tools, a new XML grid view, and complete integration with Mark Logic Content Interaction Server 2.2 and Sleepycat Berkeley DB XML 2.0. Stylus Studio 6 XML Professional Edition, Release 2, is available for a free 30-day trial download, with prices starting at $495 for a single-user license and $240 for an upgrade license; www.stylusstudio.com.

VoIP Software Interface for Mac and Linux
Zultys Technologies' desktop call-handling application, Media Exchange Interface for End Users, now runs on Mac OS X, Red Hat Linux, and SuSE Linux as well as Windows 2003 and XP. Used in conjunction with the company's SIP-based MX250 IP PBX, MXIE supports multiple languages and provides users, operators, and call center agents with real-time access to presence, IM, chat, voice mail, and faxes through a simple GUI. A single license fee costs $45 to $65 depending on quantity; www.zultys.com.

Please send new product announcements to [email protected].
PURPOSE The IEEE Computer Society is the world's largest association of computing professionals, and is the leading provider of technical information in the field.

MEMBERSHIP Members receive the monthly magazine Computer, discounts, and opportunities to serve (all activities are led by volunteer members). Membership is open to all IEEE members, affiliate society members, and others interested in the computer field.

COMPUTER SOCIETY WEB SITE The IEEE Computer Society's Web site, at www.computer.org, offers information and samples from the society's publications and conferences, as well as a broad range of information about technical committees, standards, student activities, and more.

OMBUDSMAN Members experiencing problems—magazine delivery, membership status, or unresolved complaints—may write to the ombudsman at the Publications Office or send an e-mail to [email protected].

CHAPTERS Regular and student chapters worldwide provide the opportunity to interact with colleagues, hear technical experts, and serve the local professional community.

AVAILABLE INFORMATION
To check membership status or report a change of address, call the IEEE toll-free number, +1 800 678 4333. Direct all other Computer Society-related questions to the Publications Office. To obtain more information on any of the following, contact the Publications Office:
• Membership applications
• Publications catalog
• Draft standards and order forms
• Technical committee list
• Technical committee application
• Chapter start-up procedures
• Student scholarship information
• Volunteer leaders/staff directory
• IEEE senior member grade application (requires 10 years practice and significant performance in five of those 10)

PUBLICATIONS AND ACTIVITIES
Computer. The flagship publication of the IEEE Computer Society, Computer publishes peer-reviewed technical content that covers all aspects of computer science, computer engineering, technology, and applications.
Periodicals. The society publishes 15 magazines and 14 research transactions. Refer to membership application or request information as noted at left.
Conference Proceedings, Tutorial Texts, Standards Documents. The IEEE Computer Society Conference Publishing Services publishes more than 175 titles every year.
Standards Working Groups. More than 150 groups produce IEEE standards used throughout the world.
Technical Committees. TCs provide professional interaction in over 30 technical areas and directly influence computer engineering conferences and publications.
Conferences/Education. The society holds about 150 conferences each year and sponsors many educational activities, including computing science accreditation.

EXECUTIVE COMMITTEE
President: GERALD L. ENGEL*
Computer Science & Engineering, Univ. of Connecticut, Stamford, 1 University Place, Stamford, CT 06901-2315; Phone: +1 203 251 8431; Fax: +1 203 251 8592; [email protected]
President-Elect: DEBORAH M. COOPER*
Past President: CARL K. CHANG*
VP, Publications: MICHAEL R. WILLIAMS (1ST VP)*
VP, Electronic Products and Services: JAMES W. MOORE (2ND VP)*
VP, Chapters Activities: CHRISTINA M. SCHOBER*
VP, Conferences and Tutorials: YERVANT ZORIAN†
VP, Educational Activities: MURALI VARANASI†
VP, Standards Activities: SUSAN K. (KATHY) LAND*
VP, Technical Activities: STEPHANIE M. WHITE†
Secretary: STEPHEN B. SEIDMAN*
Treasurer: RANGACHAR KASTURI†
2004–2005 IEEE Division V Director: GENE F. HOFFNAGLE†
2005–2006 IEEE Division VIII Director: STEPHEN L. DIAMOND†
2005 IEEE Division V Director-Elect: OSCAR N. GARCIA*
Computer Editor in Chief: DORIS L. CARVER†
Executive Director: DAVID W. HENNAGE†
* voting member of the Board of Governors
† nonvoting member of the Board of Governors

BOARD OF GOVERNORS
Term Expiring 2005: Oscar N. Garcia, Mark A. Grant, Michel Israel, Rohit Kapur, Stephen B. Seidman, Kathleen M. Swigger, Makoto Takizawa
Term Expiring 2006: Mark Christensen, Alan Clements, Annie Combelles, Ann Q. Gates, James D. Isaak, Susan A. Mengel, Bill N. Schilit
Term Expiring 2007: Jean M. Bacon, George V. Cybenko, Richard A. Kemmerer, Susan K. (Kathy) Land, Itaru Mimura, Brian M. O'Connell, Christina M. Schober
Next Board Meeting: 11 Mar. 2005, Portland, OR

EXECUTIVE STAFF
Executive Director: DAVID W. HENNAGE
Assoc. Executive Director: ANNE MARIE KELLY
Publisher: ANGELA BURGESS
Assistant Publisher: DICK PRICE
Director, Administration: VIOLET S. DOAN
Director, Information Technology & Services: ROBERT G. CARE
Director, Business & Product Development: PETER TURNER

COMPUTER SOCIETY OFFICES
Headquarters Office: 1730 Massachusetts Ave. NW, Washington, DC 20036-1992; Phone: +1 202 371 0101; Fax: +1 202 728 9614; E-mail: [email protected]
Publications Office: 10662 Los Vaqueros Cir., PO Box 3014, Los Alamitos, CA 90720-1314; Phone: +1 714 821 8380; E-mail: [email protected]
Membership and Publication Orders: Phone: +1 800 272 6657; Fax: +1 714 821 4641; E-mail: [email protected]
Asia/Pacific Office: Watanabe Building, 1-4-2 Minami-Aoyama, Minato-ku, Tokyo 107-0062, Japan; Phone: +81 3 3408 3118; Fax: +81 3 3408 3553; E-mail: [email protected]

IEEE OFFICERS
President: W. CLEON ANDERSON
President-Elect: MICHAEL R. LIGHTNER
Past President: ARTHUR W. WINSTON
Executive Director: TBD
Secretary: MOHAMED EL-HAWARY
Treasurer: JOSEPH V. LILLIE
VP, Educational Activities: MOSHE KAM
VP, Publication Services and Products: LEAH H. JAMIESON
VP, Regional Activities: MARC T. APTER
VP, Standards Association: JAMES T. CARLO
VP, Technical Activities: RALPH W. WYNDRUM JR.
IEEE Division V Director: GENE F. HOFFNAGLE
IEEE Division VIII Director: STEPHEN L. DIAMOND
President, IEEE-USA: GERARD A. ALPHONSE
BOOKSHELF
Cross-Platform .NET Development: Using Mono, Portable.NET, and Microsoft .NET, Mark Easton and Jason King. This book examines the advantages of building portable, cross-platform .NET code and claims that even those only vaguely familiar with .NET can learn how to run .NET code on different platforms: Linux, Unix, Mac OS X, and Windows. The authors fill the book with code samples and acquired expertise, providing the foundation for a well-rounded skill set. The book catalogs the pitfalls, gotchas, and speed bumps that crop up during .NET implementations, then provides a roadmap for navigating around them. Apress; www.apress.com; 1-59059-330-8; 560 pp.; $49.99.

Imperfect C++: Practical Solutions for Real-Life Programming, Matthew Wilson. According to the author, although C++ is a marvelous language, it's not perfect. Along with describing what's wrong with C++, he also offers practical techniques and tools for writing code that's more robust, flexible, efficient, and maintainable. The author shows readers how to tame C++'s complexity, cut through its vast array of paradigms, take back control over the code, and get far better results. Long-time C++ developers can use this book to help see their programming challenges in new ways—and illuminate powerful techniques they may never have tried. Those newer to C++ will learn principles that will make them more effective in all of their projects. An accompanying CD-ROM contains a variety of C++ compilers, libraries, test programs, tools, and utilities, as well as the author's related journal articles. Addison-Wesley Professional; www.awprofessional.com; 0-321-22877-4; 624 pp.; $44.99.

Logic in Computer Science: Modeling and Reasoning about Systems, 2nd ed., Michael Huth and Mark Ryan. The recent development of powerful tools for verifying hardware and software systems has fostered an increasing demand for training in basic methods of formal reasoning. Students, in particular, need to gain proficiency in logic-based verification methods. The second edition of this textbook addresses both those requirements by providing a clear introduction to formal reasoning that is both relevant to the needs of modern computer science and rigorous enough for practical application. Improvements to the first edition include sections on SAT solvers, existential and universal second-order logic, micromodels, programming by contract, and total correctness. The coverage of model checking has been substantially updated and exercises have been added. Internet support provides teachers with worked solutions for all exercises and gives students model solutions to some exercises. Cambridge University Press; www.cambridge.org; 0-521-54310-X; 440 pp.; $55.00.

Data Hiding Fundamentals and Applications: Content Security in Digital Multimedia, Husrev T. Sencar, Mahalingam Ramkumar, and Ali N. Akansu. Sophisticated multimedia technologies make it possible for the Internet to accommodate a rapidly growing audience with a full range of services and efficient delivery methods. Although the Internet now puts communication, education, commerce, and socialization at our fingertips, its rapid growth has raised some weighty security concerns with respect to multimedia content. The authors provide a theoretical framework for data hiding in a signal-processing context; realistic applications in secure, multimedia delivery; data hiding for proof of ownership; and data hiding algorithms for image and video watermarking. Elsevier Academic Press; http://books.elsevier.com/; 0-12-047144-2; 272 pp.; $69.95.

Nearest Neighbor Search: A Database Perspective, Apostolos N. Papadopoulos and Yannis Manolopoulos. Computationally intensive modern applications require the storage and manipulation of voluminous traditional and nontraditional data sets. Emerging application domains such as geographical, medical, and multimedia information systems; online analytical processing; and data mining have diverse requirements with respect to the information and operations they must support. From the database perspective, new techniques and tools must be developed that increase processing efficiency. The authors discuss query processing techniques for nearest-neighbor queries, provide both basic concepts and state-of-the-art results in spatial databases and parallel processing research, and evaluate numerous applications of nearest-neighbor queries. This book is suitable for researchers, postgraduate students, and practitioners in computer science who are concerned with nearest-neighbor search and related issues. Springer; www.springeronline.com; 0-387-22963-9; 170 pp.; $115.00.

Editor: Michael J. Lutz, Rochester Institute of Technology, Rochester, NY; [email protected]. Send press releases and new books to Computer, 10662 Los Vaqueros Circle, Los Alamitos, CA 90720; fax +1 714 821 4010; [email protected].
EMBEDDED COMPUTING
Building the Software Radio
Wayne Wolf, Princeton University

Embedded computing could be the key to designing next-generation software-driven programmable radios.

People have been working on software radio for about 10 years. Software radio is just what it sounds like—a radio that uses software to perform many of the signal processing tasks that analog circuits traditionally handle. Software radio could turn out to be a paradigm shift for communication systems. The US Defense Advanced Research Projects Agency (DARPA) kicked off research into software radios to solve military problems, but software radios can help solve some important problems in commercial communication systems as well.

Software radio offers the advantage of putting many traditionally hard functions in modules whose characteristics can be changed while the radio is running. For example, rather than tuning a circuit to filter in only a certain frequency band, developers can use software to provide more flexible filtering that could change as the radio operates.

FUNCTIONALITY THROUGH FLEXIBILITY
A more flexible radio could provide many useful functions—including the widely discussed bandwidth harvesting. Today, the FCC licenses radios to operate in certain bands. However, because most radios don't operate all the time, a significant amount of licensed bandwidth remains unused at any given time. A more flexible radio could scan the spectrum and find unused frequencies. It could then reconfigure itself to operate at those frequencies and to do so in ways that minimize its interference with operators licensed for that band.

More flexible programmable radios could also be more secure against attacks on radio systems. In addition to detecting attacks, a programmable radio could reconfigure itself to respond to them. The radio network would be much more secure if radios could respond quickly and invisibly to security problems.

Software radios can also respond to changes in operating conditions by, for example, changing their modulation schemes. Rather than having a circuit that generates a particular fixed waveform, a more flexible radio could synthesize the waveform required for a particular set of conditions. A radio that generates and detects many types of waveforms could operate in a much broader range of locales and environmental conditions.

FEASIBILITY
All this sounds great, but is it possible? Significant progress has been made in developing software radios. The US Department of Defense has established the Joint Tactical Radio System program to design and build advanced software radios. JTRS has designed sophisticated radios that use software extensively to create high-performance, flexible communication devices. But these radios have yet to reach the compact form factors that characterize cell phones and other wireless devices. There is, as yet, no software radio equivalent of the popular Motorola Razr V3 cell phone. Power consumption also remains a problem.
Clearly, software radio has promise, but how do we advance from current technology to the small, battery-operated device we used in those bad-old analog radios? Embedded system design techniques—both hardware and software—will help design more efficient software radios. The keys to designing high-performance, efficient embedded systems include a heterogeneous architecture specialized to the task at hand and application-specific software stacks that provide efficient interfaces based on a few general-purpose primitives.

Unfortunately, the term software radio conjures up images of a mainframe connected to an antenna. That isn't a realistic architecture. Uniprocessors are fast, but not fast enough to function as high-frequency radios. And big processors certainly aren't cheap enough or energy-efficient enough to support realistic implementations of portable software radios.

Consider the relative rates of radios and CPUs. Improved high-frequency amplifiers operate at increasingly higher frequencies. Let's arbitrarily choose 500 MHz as the carrier frequency, even though that isn't the highest frequency at which modern radios operate. Let's assume that we want to use direct conversion methods, so we have to obey Nyquist's theorem, which tells us that we need two samples per cycle. That means our software radio must operate at 1 billion samples per second.

What about the processor? Let's be aggressive here to make a more optimistic comparison for software radios. Let's assume that our processor runs at 4 GHz and that it can perform four operations per cycle. Further, although we know it won't actually happen, let's be unbelievably optimistic and assume that the software radio algorithms provide 100 percent utilization of the CPU—the memory system and pipeline operate perfectly so that we can use every operation on every clock cycle. These assumptions mean that our theoretical device performs 16 billion operations per second. At a sample rate of 1 gigasample per second, the device can do only 16 operations per sample. That's not much at all, and it isn't enough to implement any sort of realistic radio, even if we can come up with the perfect software to make our high-performance CPU operate ideally.

Some might think that subword parallelism will come to the rescue. Today's CPUs let us use those long 32-bit words in smaller groups, so we can perform two 16-bit data operations in one cycle. If we assume that the software radio needs 16 bits per sample—a common modern design point—it can perform 32 operations per sample. That's still not enough.

How many operations per sample must a radio do? Even ignoring the protocol functions performed in the baseband, it must filter and modulate/demodulate the signal while also performing error correction. Most radios must both transmit and receive. Any way you count it, that's a lot more than 32 operations per sample.

If we use analog filters for some of the initial stages and operate the software radio at lower frequencies, we have more headroom to perform useful work per sample, and that is the technique used in today's successful software radios. But that also makes the radio somewhat less flexible.

When we factor in power consumption, the outlook becomes even more grim. Trevor Mudge and his coauthors used mobile supercomputing as an example in their Embedded Computing column discussion of current trends in computer architectures and power consumption (Todd Austin et al., "Mobile Supercomputers," Computer, May 2004, pp. 81-83). These authors pointed out that modern general-purpose CPUs, such as Intel architecture machines, consume 100 to 1,000 times more energy than they must to provide realistic battery life in mobile applications. General-purpose processors burn somewhere between 10 and 100 watts, whereas Mudge and colleagues estimated that a mobile system of the near future should consume only 75 mW to provide useful battery lifetimes, even assuming that batteries continue to improve over the next few years.

So it's easy to see that a uniprocessor can't keep up with the performance required for software radio, nor is it likely to provide the necessary performance at low enough energy levels to offer a useful portable system.
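The budget arithmetic is easy to reproduce. The following sketch simply restates the column's back-of-envelope numbers in runnable form; the constants (500-MHz carrier, 4-GHz four-issue CPU, the 75-mW mobile power budget) come from the discussion above, and the variable names are our own.

```python
# Back-of-envelope feasibility check for a uniprocessor software radio,
# restating the numbers from the discussion above.

CARRIER_HZ = 500e6        # assumed carrier frequency
NYQUIST_FACTOR = 2        # direct conversion needs two samples per cycle
CPU_HZ = 4e9              # optimistic clock rate
OPS_PER_CYCLE = 4         # optimistic issue width
SUBWORD_FACTOR = 2        # two 16-bit operations per 32-bit word

sample_rate = CARRIER_HZ * NYQUIST_FACTOR        # 1e9 samples/s
peak_ops = CPU_HZ * OPS_PER_CYCLE                # 16e9 ops/s at 100% utilization
ops_per_sample = peak_ops / sample_rate          # only 16
print(f"ops/sample: {ops_per_sample:.0f}, "
      f"with subword parallelism: {ops_per_sample * SUBWORD_FACTOR:.0f}")

# The power gap is just as stark: a 10-100 W general-purpose CPU versus
# the ~75 mW budget Mudge and colleagues project for a mobile device.
for watts in (10, 100):
    print(f"{watts} W CPU exceeds a 75 mW budget by {watts / 0.075:,.0f}x")
```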
HETEROGENEOUS MULTIPROCESSORS
So what sort of hardware architectures do we use, and what sort of software architectures do we provide, to implement radio functions on those platforms? Heterogeneous multiprocessors provide the architecture of choice in other application areas, such as mobile multimedia, so it seems reasonable to pursue them for radios as well. Multiprocessors make sense because radios, like other signal-processing applications, exhibit task-level parallelism that can easily be mapped onto a multiprocessor. The radio's block diagram naturally suggests a collection of processors that perform operations in different stages. Multiprocessors make it easier to meet real-time deadlines.

The debate about software radio architectures gets interesting when we focus on the question of exactly how heterogeneous this set of processors should be. Heterogeneous architectures generally provide more power efficiency at a lower cost than homogeneous architectures because the block diagram has already been mapped onto the multiprocessor, putting different types of functions onto different processors. When the processors are specialized to the tasks they run, they become more efficient.

One way to make a processor more specialized is to make it something other than a CPU. When dealing with a fixed function, we can build a unit that does just that one function. But there are many intermediate points between a von Neumann machine and a single-function box. For example, we could build digital filter structures with programmable coefficients, as the sketch below illustrates. We wouldn't want to use these parameterizable blocks to build all of the radio, but we could certainly build several of the higher-level functions with more specialized hardware and thus come up with a more energy-efficient radio. Strictly speaking, because using software implies fetching instructions from a memory, this would be a programmable radio rather than a software radio.
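To make the idea of a parameterizable block concrete, here is a minimal behavioral model of a digital filter with programmable coefficients. This is our own illustrative sketch, not a design from the column: the tap count and delay line stand in for the fixed hardware structure, while the coefficients can be reloaded as the radio runs.

```python
# Behavioral model of a fixed-structure FIR filter whose coefficients are
# programmable: the structure is "hardware," the response is not.
class ProgrammableFIR:
    def __init__(self, coefficients):
        self.coefficients = list(coefficients)
        self.delay_line = [0.0] * len(self.coefficients)

    def load(self, coefficients):
        # Change the filter response without changing the structure,
        # for example to select a different band while operating.
        if len(coefficients) != len(self.coefficients):
            raise ValueError("tap count is fixed by the hardware structure")
        self.coefficients = list(coefficients)

    def step(self, sample):
        # Shift one sample into the delay line, then form the weighted sum.
        self.delay_line = [sample] + self.delay_line[:-1]
        return sum(c * x for c, x in zip(self.coefficients, self.delay_line))

fir = ProgrammableFIR([0.25, 0.25, 0.25, 0.25])   # crude low-pass response
smoothed = [fir.step(s) for s in (1.0, 0.0, 1.0, 0.0)]
fir.load([0.5, 0.0, -0.5, 0.0])                   # retune without rebuilding
```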
SOFTWARE FOCUS
The other aspect of the programmable/software radio system that deserves attention is the software itself. How do we organize the software that performs the radio functions, and what interfaces do we provide to the applications that use software radios? Once again, let's get some pointers by looking at what other application domains do.

The Open Mobile Application Processor Interfaces standard for mobile multimedia, created by Texas Instruments and ST Microelectronics, built an interface for software applications that run on mobile multimedia devices such as third-generation cell phones. OMAPI is application-specific—it defines an interface only for multimedia functions. But OMAPI is designed to make it easier to move multimedia applications from one platform to another, so long as the platforms meet the OMAPI standard. The OMAPI designers intended these implementations to be optimized for the platform on which they run.

General interfaces have their advantages, particularly in fast-changing software systems or when large programs from outside vendors must be interfaced. But the overhead is more noticeable in real-time embedded systems than in non-real-time systems like servers. It's even more noticeable in low-power systems. Using a specialized, application-specific interface might not be quite as flexible, but it provides important benefits in real-time, low-power systems.

The OMAPI standard provides two important lessons. First, in addition to abstracting software, an application programming interface can provide an interface to a hardware function. The API is even more useful if the function it represents can be implemented in either software or hardware. Second, an API designed for a particular application domain can provide a general interface to that domain. The generality of the API's internals isn't as important as the generality of the interface to the applications it wants to support.

Model-based software design is one methodology used in embedded computing that can help us design efficient middleware for software radio functions. We build abstract models of the software and apply synthesis algorithms that generate efficient implementations. Model-based software design allows us to quickly build efficient, specialized software.
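As a toy illustration of the model-then-synthesize idea—our own sketch, not a tool the column describes—a radio's processing chain can be captured as a dataflow model from which an executable implementation is generated:

```python
# Toy model-based design: the processing chain is data (the model), and a
# simple "synthesis" step turns the model into one executable pipeline.
def synthesize(stages):
    def process(sample):
        value = sample
        for stage in stages:       # apply each modeled stage in order
            value = stage(value)
        return value
    return process

# Abstract model of a drastically simplified receive chain.
model = [
    lambda s: 0.5 * s,                      # gain stage
    lambda s: max(-1.0, min(1.0, s)),       # limiter stage
    lambda s: round(s, 3),                  # quantization stage
]

rx = synthesize(model)
print(rx(3.21))   # 3.21 -> 1.605 after gain -> 1.0 after limiting
```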
On both the hardware and software sides, a bit of application tailoring goes a long way. If we weren't worried about energy consumption in particular, we might be willing to work with a pool of processors controlled by a collection of general-purpose software. But if we want software/programmable radios to be used in the broadest possible range of applications, energy consumption becomes an important design metric. Embedded computing provides some useful tips on how to design next-generation software-driven programmable radios. But that still leaves lots of room for creativity in issues such as programmable blocks for key radio functions. ■

Wayne Wolf is a professor of electrical engineering at Princeton University. Contact him at [email protected].
STANDARDS
Public Opinion's Influence on Voting System Technology
Herb Deutsch, IEEE P1583 Committee

The controversial 2000 US election, combined with other factors, is influencing the development of electronic voting equipment.

The US general election in 2000 represents a turning point in elections history. A laborious count and analysis of what was statistically a tie vote in Florida decided the highly scrutinized contest for US president. Simultaneously, voting system standards continued evolving, spurred in part by the introduction of new, high-power technologies. These factors, coupled with an unprecedented level of public scrutiny, changed nearly all aspects of the election process.

With its recounts, interpretation of voter intent, and presumed problems related to punch-card voting, the 2000 presidential vote triggered the passage of the Help America Vote Act (HAVA) and the massive trend toward direct recording electronic voting systems (DREs). The election process, which had always been taken for granted, now faced intense scrutiny from the media, computer scientists, conspiracy theorists, advocacy groups, and the general public.

FEC STANDARDS
In the months prior to the 2002 election, the US Federal Election Commission approved new voting system standards designed to ensure that election equipment certified for purchase by participating states would be accurate, reliable, and dependable. Adoption of the FEC 2002 standards started a domino effect that changed election equipment certifications across the US.

The 2002 standards, like the previous voting systems standards set in 1990, cover the election process from end to end. The standards encompass
• front-end software, including administrative databases, election-specific definitions, ballot layouts, and tabulator setups;
• tabulators and their hardware and firmware, including both central-count and precinct-count versions of punch-card and mark-sense optical scan systems as well as DREs; and
• the back-end software for results accumulation and reporting.

The 2002 standards expanded on requirements for the front- and back-end portions of the overall election system and described in more detail usability features that DREs must provide. But two other changes have had the greatest impact on voting system vendors. The FEC 2002 standards require that an entire end-to-end system receive one overall certification. With previous standards, each subsystem could be tested and certified separately. In addition, the 2002 standards made all indicated software source code structure requirements mandatory; previous standards listed them as advisory.

State variations
In the 1990s, most states only certified tabulators, and most only required certification for newly introduced machines. Hardware and firmware versions were not usually recorded, and upgrades did not require certification. Some states did record new versions, but most only required notification of the update's improvements. Others also required certification of accumulation and reporting systems, and a rare few required a full system certification and recorded all subsystem versions.

This variation among state requirements, even those mandating qualification to the 1990 standards by an independent testing authority (ITA) as a prerequisite to state certification, dovetailed with the 1990-standards approach since each subsystem and tabulator was independently tested and approved. When the 2002 standards were adopted in states that previously had only certified tabulators and subsystems, units and systems needing to be upgraded had difficulty complying with the system certification approach. Many officials in these states did not understand the requirements for receiving a certification number from the
National Association of State Election Directors, which only added to the problem. For a system to receive a NASED number identifying it as 2002 compliant, every subsystem had to be 2002 compliant. Officials had believed that 2002 “shingles” could be issued to tabulators alone. In many states, this whole-system certification requirement prevented upgrades to previously certified systems.
Implications
Virtually all the main election system vendors had systems deployed that were tested and certified to meet the 1990 standards. Although the source code in these systems had passed inspection, many systems did not meet all the format requirements that became mandatory when the 2002 standards went into effect. Under the 1990 standards, systems were required to have correct functional structure, but the documentation conditions were advisory. In most instances, making the source code comply with the 2002 standards required a total rewrite. Doing so risked the loss of working functions without any end-user benefit.

In addition, the 2002 standards set new usability requirements on the interface the DREs present to voters who have vision limitations less severe than complete blindness. These related to screen display colors, contrast, and text size—also known as the zoom requirement. Systems certified to meet the 1990 standards did not have this capability, which made incorporating these features nontrivial. Further, the 2002 standards did not clearly describe whether a voter must be allowed to select a color and change the contrast or whether text sizes had to be continuously adjustable.
TECHNOLOGY AND PERCEPTION
Voting machines of earlier design—both paper-based tabulators and DREs—use far less capable computer microprocessors than those available today, and they only support minimal memory capacity. For example, the commonly used Zilog Z80 microprocessor has a memory limit of 64 Kbytes. These units have no operating system and use small firmware written in assembly language. The limitations of the microprocessors and programs used in these machines made many of today's security concerns—viruses, surreptitious code, routines to subvert a percentage of votes from one candidate to another—inconceivable.
With the advance of the PC and Intel-based microprocessors from the original 8086 to the Pentium 4, available program memory increased beyond the largest industrial mass storage systems of the 1980s and 1990s. The use of these microprocessors in modern voting machines created the perception that voting machines could be susceptible to attack. The average person's experiences with viruses, worms, program crashes, file corruption, frequent forced reboots, and even ease of program downloads have reinforced the opinion that voting machines must harbor the same vulnerabilities. That many voting machines use Microsoft Windows and other OSs people run on their home PCs also fed this perception.

Because DREs did not produce physical ballots for human review and all audits of DRE performance were electronic, these systems are especially suspect. Many believed that because vendors pay testing authorities and because the proprietary program source code is unavailable for public inspection, ITA testing and certification could not be trusted.
When a surreptitiously acquired copy of Diebold's DRE source code was found to be flawed in function and to contain many security risks, some concluded that DREs in general could not be trusted and required a paper trail to make them usable.

In the wake of the Diebold source code exposure and other occurrences, many states that previously had not done so chose to adopt the FEC standards and the ITA process. Other states that had only certified tabulators now required that the full system be certified and the version identification of the approved components recorded. States that accepted the NASED approval now used that as a prerequisite to certification and added their own testing to the approval process. In addition, states that had previously certified DREs chose to add a mandatory voter-verified paper audit trail (VVPAT) to the DRE certification requirement. Some even required that this paper trail be electronically readable. Finally, public scrutiny on all aspects of elections caused many states to start performing full audits. These audits covered all installed voting equipment and software versions without regard to the process by which the installed equipment was certified.

As the election climate changed, interpretations of the FEC 2002 standards became more stringent. At the same time, new state certification rules prevented vendors from providing upgrades that would correct bugs and provide enhancements to existing systems. Yet these enhanced systems were built from the same source code as the previously certified systems and had the same overall characteristics in all other aspects. To vendors, certifications went from a tangential effort to a main development focus.
IEEE P1583 STANDARDS
Although the FEC 2002 standards were hailed as an improvement over the 1990 document, many still criticize the 2002 requirements as inadequate.
In the fall of 2001, in reaction to the 2000 US election, the IEEE P1583 Voting System Standards committee was formed. Over time, the P1583 committee began to build upon the FEC 2002 standards, expanding on usability and security, considered the weakest areas in the FEC 2002 standards. Not as extensive as the FEC 2002 standards, P1583's scope encompassed only the voting equipment used in polling places—mainly DREs. Spurred by increased public scrutiny of voting, others joined the committee, presenting new opinions and challenges. The committee began to confront issues such as how to treat COTS hardware and software and handle VVPATs, what constitutes a secure DRE system, whether to permit the use of wireless technology in voting systems, and how to handle the new accessible ballot-printing voting devices that do not tabulate.

Unless a vendor modified the code, the FEC 2002 standards essentially exempted COTS from evaluation other than as part of the system's functional testing. However, one group within the P1583 committee viewed COTS components—and especially their exemption from source code analysis—as the biggest security risk for voting systems. Some believed that a VVPAT should be required for any DRE, while others felt that it was a disadvantage in terms of cost, usability, and reliability. The committee's compromise was to include VVPAT specifications as an option and, because its requirement is a matter of policy, to support states that require them as well as ones that don't. Similarly, some within the committee perceived wireless connectivity as a major security risk. But some systems currently use wireless technologies for unofficial results transmission after the polls close. After considering all these issues, the P1583 committee will make a new version of the draft available for committee ballot and approval.

HAVA led to the creation of the Election Assistance Commission and
mandated that this group, in conjunction with NIST, should create new voting system standards by July 2005. The EAC’s newly established Technical Guidelines Development Committee has a very short timetable within which to create recommendations for the new standards. Members of the P1583 committee hope that, after the three-year effort, the NIST and EAC will adopt the standard. Meanwhile, the vendor community is scrambling to upgrade their systems to comply with the FEC 2002 standards while providing enhancements to DREs that will meet some new state-specific VVPAT requirements.
Even with new standards, persistent concerns may prevent DREs from becoming the preferred voting systems throughout the US. For example, HAVA requires that every polling place have voting units for the visually handicapped and that voters be protected from incorrect vote selections either by notification or prevention. Modern DREs satisfy both of these requirements but may still be seen as undesirable due to security and confidence issues.

Many now view systems without VVPATs as security deficient. But the growing requirement for VVPATs imposes administrative, reliability, and secrecy limitations. Using VVPATs might cause some of the new accessible ballot-printing voting devices, combined with paper-ballot tabulators, to become the systems of choice. By the 2006 elections, HAVA should be in full effect. Compliance with the new certification requirements might then be so costly that it hinders DRE use nationwide. Technology, certification, and public opinion will decide the preferred election systems for US voters. ■

Herb Deutsch is a software product manager at Election Systems & Software. He is a member of the IEEE and the IEEE Standards Association and chair of the P1583 committee. Contact him at [email protected].
Editor: Jack Cole, US Army Research Laboratory's Information Assurance Center, [email protected]; http://msstc.org/cole.
IT SYSTEMS PERSPECTIVES
The Winner's Curse in High Tech
G. Anandalingam and Henry C. Lucas Jr., University of Maryland, College Park
In October 2000, Electronic Data Systems won a $7 billion contract to modernize and network more than 360,000 desktop computers for the US Navy and the US Marine Corps. Initially a cause for celebration, the ambitious project to create a Navy-Marine Corps Intranet (NMCI) has instead turned out to be a winner's curse for EDS. The company wrote off $334 million of the contract in 2003, and another $375 million in 2004. EDS now faces a liquidity crisis, and some blame NMCI for placing the firm's future in jeopardy.

EDS isn't alone. Due to both psychological factors and misaligned market incentives, the winner's curse, described in "The Winner's Curse" sidebar, was rampant in the 1990s. Senior management was obsessed with winning at all costs, and companies and their shareholders ended up losing enormous sums of money. Our recent book, Beware the Winner's Curse: Victories that Can Sink You and Your Company (Oxford Univ. Press, 2004), outlines a number of cases in which companies experienced the winner's curse, presents industry-specific ideas, and offers a general framework for improving management decision making in these circumstances. The winner's curse is especially prevalent in technology given the importance and size of this economic sector. The US wireless spectrum auction fiasco and Lucent Technologies' disastrous acquisitions of several optical networking startups are two examples of this phenomenon.
The tendency to overvalue an asset is especially prevalent in the technology sector.

THE SPECTRUM AUCTION FIASCO
On 25 July 1994, the US Federal Communications Commission (FCC) began auctioning licenses for underused radio spectra. The first auction, which sold 10 licenses for frequencies that could be used to enhance paging services, raised $617 million in only five days. The FCC was so enthralled by these results that it continued auctioning spectra for various other uses including broadband personal communications services, wireless data, and mobile fax. By 2000, spectrum auctions had raised an astonishing $42 billion. This far exceeded the US Congressional Budget Office projection of $10 billion, an amount that even the telecommunications industry had previously regarded as inflated. "There is no rational methodology on which the $10 billion was calculated," declared BellSouth chairman John Clendenin, while Bert Roberts, chairman of MCI, said "The government is smoking something to think that they are going to get $10 billion for these licenses" (J. McMillan, Reinventing the Bazaar: The Natural History of Markets, W.W. Norton & Co., 2002). Nevertheless, these and other telecom companies ended up spending over four times more than the spectrum was worth.

The US spectrum auctions' success did not go unnoticed. Many European countries, which had relied on a much slower evaluation process for allocating licenses (known derogatorily as "beauty contests"), also got into the business of auctioning off radio spectra with similarly dramatic results. In fact, such auctions provided nearly 5 percent of the annual budget of several European governments. The auction of third-generation mobile-phone licenses in 2000 in the UK netted more than $35 billion and, according to the Financial Times, constituted "the world's largest concerted transfer of money from the corporate sector to state coffers." However, not to be outdone, the German government raised $46 billion from its own 3G spectrum auction later that year.

The telecom companies that won spectrum licenses in both the US and Europe became too financially stressed to actually build the 3G networks. Their credit ratings became so low that banks were no longer willing to lend them money. Consequently, a number of promising mobile applications lack the infrastructure for deployment.

The Winner's Curse
The phrase "winner's curse" originated in the late 1960s, when the US government auctioned leases for oil tracts in the Gulf of Mexico. Some engineers considered the wide disparity in bids and observed that the winners often paid too much for their tracts—what initially appeared to be a victory later turned out to be a curse.

Theoretically, oil should be worth the market price to each bidder. However, uncertainty arises in estimating how much oil is in each tract. A company that, say, plans to bid on 20 tracts would expect its estimates to be high on some tracts and low on others. If it wins a lease on all 20 tracts, on average it probably is spending the right amount. However, if the bidder only wins the tracts it values most highly and loses the others, it ends up overpaying for its winning purchases.

In general, you face a potential winner's curse in any situation in which you, and possibly others, are bidding for an asset whose value can only be estimated in advance. In the case of a technology solution for a customer, such as a new system or major services contract, you start believing in the most optimistic project completion scenario. You win by being the low bidder but end up suffering when your estimate turns out to be too low.

Tyco, Worldcom, Enron, ImClone, and other organizations suffered the winner's curse after dramatically overpaying for assets that turned out to be worth far less than first estimated. For example, former Tyco CEO Dennis Kozlowski bought CIT Financial for a whopping $9.5 billion without considering the possible downside. Tyco ended up selling CIT Financial in less than two years for $4.6 billion, resulting in a net loss of almost $5 billion.
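The mechanism the sidebar describes—estimates that are unbiased on average, but a sale that goes to whoever estimates highest—is easy to demonstrate with a few lines of simulation. This sketch is our own illustration; the bidder count and noise level are arbitrary.

```python
# Monte Carlo illustration of the winner's curse: every bidder values the
# same asset, each sees a noisy but unbiased estimate, and the highest
# estimate wins. The winner systematically overpays.
import random

def average_overpayment(true_value=100.0, bidders=8, noise=20.0, rounds=10_000):
    total = 0.0
    for _ in range(rounds):
        estimates = [random.gauss(true_value, noise) for _ in range(bidders)]
        total += max(estimates) - true_value   # naive bidders bid their estimate
    return total / rounds

random.seed(1)
print(f"average overpayment: {average_overpayment():.1f}")  # well above zero
```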
LUCENT IS BITTEN BY THE CURSE
In 1996, Lucent Technologies sought to expand its optical networking presence by acquiring a number of companies. During its first three years, the strategy paid off: Share prices increased fivefold, and revenues jumped threefold. Based on this financial success, Lucent went shopping. By 2001, the company had completed 38 acquisitions totaling more than $46 billion,
including $24 billion for Ascend Communications, which positioned Lucent as the leading data-networking equipment supplier. In July 1999, the company bought Nexabit Networks, a start-up developer of ultra-high-speed routers for optical networking systems, for about 14 million shares of Lucent common stock. The acquisition, valued at about $900 million, was the largest in history of a "prerevenue" company that had not actually sold anything.

Why did Lucent pay so much? At the time, Cisco Systems was trading at 22 times its revenue. Based on a cursory analysis, Lucent was convinced that Nexabit could ship out $40 million worth of "boxes" in 2000, and multiplying this by 22 gave $880 million, which rounded up to $900 million. Within a year, Lucent came to realize that, in rushing to obtain the world's fastest router, it had greatly overestimated Nexabit's value. In March 2002, the company announced a new switching unit to enhance the technologies obtained from Nexabit. Yet, less than eight months later, citing unexpected cutbacks in network equipment spending by big phone and Internet companies, Lucent cancelled the product.

Lucent was bitten not once, but twice, by the winner's curse. In 1999, the company invested in another promising start-up, Ignitus Communications, that focused on network edge solutions for metropolitan networks. In March 2000, Lucent purchased the remaining stake in Ignitus and folded it into the Optical Networking Group. Two months later, in a move that surprised both Lucent customers and Ignitus, Lucent purchased Chromatis Networks for $4.5 billion. Chromatis was developing a high-speed switch whose architecture was widely thought to be identical to Ignitus' primary offering. Soon after, Lucent folded Ignitus into Chromatis and eliminated the Ignitus product from its portfolio.

In announcing the acquisition, Lucent CEO Richard McGinn boasted that, "With Chromatis, Lucent is one step closer to bringing the speed and power of fiber optics all the way to a customer's desktop." However, Chromatis' technologies failed to meet the expectations of Lucent's strategic planning team. In August 2001, Lucent closed Chromatis and dismissed all 150 of its employees.

Both personality and poorly understood market forces can influence decision makers in all types of industries, including the high-tech industry, to overvalue a deal. Our book provides concrete strategies for avoiding the winner's curse by directly addressing these factors and adopting new evaluation techniques. It's critical to have structures in place, including an independent board of directors, to rein in the "imperial CEO" and curb the pursuit of corporate victory at all cost. Some entity within the organization must have the authority to say, "We can't spend any more than this," or "We can't offer a lower bid than this." Without such a check, coupled with better system and life-cycle analysis, companies seeking the next great technology could inadvertently sow the seeds of their own destruction. ■

G. Anandalingam is the Ralph J. Tyser Professor of Management Science in the Robert H. Smith School of Business and the Institute for Systems Research at the University of Maryland, College Park. Contact him at [email protected].

Henry Lucas Jr. is the Robert H. Smith Professor of Information Systems in the Decision and Information Technologies Department of the Robert H. Smith School of Business at the University of Maryland, College Park. Contact him at [email protected].
Editor: Richard G. Mathieu, Dept. of Decision Sciences and MIS, St. Louis University, St. Louis, MO; [email protected].
THE PROFESSION
An Open-Secret Voting System
Thomas K. Johnson, T.K. Johnson and Associates

In the 2000 US presidential election, the vote counting in Florida became an international spectacle, replete with hanging chads and Supreme Court intervention. Eventually, lawmakers passed legislation intended to improve the election process’s integrity. Clearly, however, much room for improvement remains.

In a close race for governor, Washington state performed an automated recount that gave a mere 42-vote lead to the top contender. After a manual recount of nearly 3 million votes, the lead changed, and officials declared the other top candidate the winner (Ralph Thomas, “Gregoire Declared Governor-Elect, but Rossi Wants New Vote,” Seattle Times Olympia Bureau; http://seattletimes.nwsource.com/html/localnews/2002135074_rossi30m.html). This incident is just one of many stories concerning voting irregularities across the US. Tales from various precincts in the battleground state of Ohio raised questions about whether the November vote totals truly reflect the will of the people.

The Bush administration, certain in its belief that democracy is the best form of government, remains intent on spreading US-style democracy to Iraq, Afghanistan, and possibly other nations. Considering the voting difficulties the US is experiencing, it might be well advised to get its own house in order before attempting to export democracy. Certainly, there must be a better way of casting and counting votes than the methods now in use. We must take technologies that already exist, use their strengths, overcome their weaknesses, and implement a reliable and trustworthy system.
CURRENT TECHNOLOGIES

Electronic voting machines allow software configuration of the ballots, adapt to a voter’s first language, and offer a touch-screen interface’s ease of use. These machines also make it easy to change a selection before pressing a final accept button. But as a means for counting votes, computer-based devices raise suspicions as to what exactly is going on inside the black box. It’s not hard to imagine all kinds of software irregularities, intended or otherwise, that might cause machine tallies to be skewed. Thus, mistrust of such machines runs rampant.

Since 1996, Brazil has been using electronic voting machines to record voters’ choices in elections for president, state governors, and legislators (Holli Riebeek, “Brazil Holds All-Electronic National Election,” IEEE Spectrum; www.spectrum.ieee.org/WEBONLY/resource/nov02/nbraz.html). These devices even display photographs of the candidates. Although this technology appears to be widely accepted, suspicions that similar machines might be used to rig an election cannot be dismissed. The case for using electronic voting machines in the US was not helped when, in 2003, the president of a company that manufactures such machines stated in a political fundraising letter that he was “committed to helping Ohio deliver its electoral votes to the president [George W. Bush] next year” (Julie Carr Smyth, “Voting Machine Controversy,” The Cleveland Plain Dealer; www.commondreams.org/headlines03/0828-08.htm).

Mark-sense documents provide a more tangible form of ballot. These ballots, optically scanned and tallied by computer, offer a simple user interface and provide a paper trail that facilitates recounting by machine or by hand. Unlike the widely used punched-card ballots, mark-sense ballots are usually easy for humans to read and verify. However, the optical technology requires that each voter carefully fill in small circles with a pencil. A voter with arthritis or poor eyesight might struggle with this process or simply stay home on election day. The counting of these ballots, although usually accurate, could be made more transparent.

The Internet’s strengths include easy access and broad public acceptance, but like the touch-screen voting machine, Internet software raises questions about security and integrity.
BEST OF BOTH WORLDS

Suppose we combined the flexibility of touch-screen voting with the ease of human readability and validation that mark-sense ballots offer. Add to these ingredients the Internet’s wide-open access, and we can produce a fair, secure, and transparent system.

The existing touch-screen machines could serve as the voter interface at the local polling place. But rather than internally counting each vote, the machines could connect to inexpensive laser printers that would produce mark-sense ballot sheets with the circles filled in, recording each individual voter’s choices on paper. Voters could easily see whether the printed form accurately reflects their choices. The white paper stock could bear a special watermark to reduce the possibility of fraud by ballot box stuffing.

In addition to the filled-in mark-sense information, each printed sheet could bear a unique, randomly chosen ballot identification number. The number would be encoded on a grid of mark-sense circles: one column of circles for each digit of the ID, with the circles numbered 0 through 9 in each column (the sketch below illustrates the idea). The machine’s software could generate this ballot ID number or, to assure the ID’s uniqueness, retrieve the number from a central server interfaced to all voting machines within a region.

After checking the printed ballot for accuracy, the voter could request a take-home copy. The paper stock used for copies could be of a different color, but with the same special watermark as the original. Voters could use this copy to verify that their ballots were correctly recorded and counted.
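To make the digit-grid encoding concrete, here is a minimal Python sketch of how printing software might lay out the grid. The function name and the rendering format are illustrative assumptions; the column describes only the scheme itself.

    def render_id_grid(ballot_id: str) -> str:
        """Render a ballot ID as a mark-sense grid: one column per digit,
        one row per circle value 0-9; (#) marks a filled circle."""
        rows = []
        for value in range(10):  # circle values 0 through 9
            cells = ["(#)" if int(d) == value else "( )" for d in ballot_id]
            rows.append(f"{value}  " + " ".join(cells))
        return "\n".join(rows)

    # Example: the nine-digit ballot ID used later in this column.
    print(render_id_grid("396138756"))

A scanner reading such a sheet would recover the ID by finding the single filled circle in each column.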
Scanned, uploaded, and tallied

Once the polls close, all the official white-paper ballots could be scanned with the scanners already used in many polling places. The scanning process would produce estimated vote totals but not official tallies. The primary product of the scanning process would be a computer file containing all the raw data from each individual mark-sense form. This could be a plain text file in XML format. A single ballot might be encoded as follows:
  <Ballot>
    <ID>396138756</ID>
    <State>Ohio</State>
    <County>Geauga</County>
    <Township>Troy</Township>
    <Precinct>2</Precinct>
    <President>
      <Elector>Jones</Elector>
      <Elector>Smith</Elector>
    </President>
    <Senator>
      <State>Johnson</State>
    </Senator>
    <Issue1>Yes</Issue1>
    <Issue2>NoMark</Issue2>
    <SchoolBoard>Arnold</SchoolBoard>
    <SchoolBoard>Chase</SchoolBoard>
    <SchoolBoard>NoMark</SchoolBoard>
  </Ballot>

The XML-encoded raw data files would be accumulated on inexpensive computers attached to the scanners. These files would then be transmitted electronically to a central computer that serves a defined geographical region, say a county or province. The central vote tabulation computer would check each file for duplicates using the XML-encoded ID tags, then tally the official results.
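The duplicate check and tally are straightforward to implement. The following minimal Python sketch assumes the illustrative schema above, an enclosing <Ballots> element wrapping all the ballot records, and a hypothetical file name; none of these specifics comes from the proposal itself.

    import xml.etree.ElementTree as ET
    from collections import Counter

    def tally_school_board(path: str) -> Counter:
        """Tally the school board race, counting each ballot ID only once."""
        tally = Counter()
        seen_ids = set()
        root = ET.parse(path).getroot()          # assumed <Ballots> root element
        for ballot in root.iter("Ballot"):
            ballot_id = ballot.findtext("ID")
            if ballot_id in seen_ids:            # same form scanned or uploaded twice
                continue
            seen_ids.add(ballot_id)
            for choice in ballot.iter("SchoolBoard"):
                name = (choice.text or "").strip()
                if name and name != "NoMark":    # NoMark records a deliberately blank slot
                    tally[name] += 1
        return tally

    print(tally_school_board("geauga_raw.xml"))  # hypothetical raw data file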
Web verification

In addition to validation and counting, the computer would also make the complete XML files available on a Web
site. Voters could access the site, type in their unique ballot ID number, and view an image of the cast ballot. The onscreen image would be a re-creation of the paper ballot generated from the raw ballot data. Voters could view the XML-encoded form to verify that the data in the file matches their ballot copy, providing an unprecedented degree of verification that each voter’s choice really found its way into the final tally. All ballots would be accessible, but as long as a voter’s unique ballot ID remained secret, the secrecy of individual ballots would not be compromised.

In addition to individual ballot access, anyone would be allowed to download the entire raw data file using an FTP service. It might be necessary to put some restrictions on who could log on to the FTP server simply to avoid overloading it. Select groups such as voters’ rights organizations or political party officials might be given passwords for a primary FTP site. Mirror sites could be used to provide FTP service to the general public.

With the entire raw data file available to anyone, it would be possible to count the votes as many times, and with as many different programs, as anyone might want. This would provide a high degree of transparency for the counting process. Open source voting programs might themselves be available on Web sites such as those for universities, citizen organizations, and professional organizations like the IEEE. Furthermore, anyone could write a vote-counting program. If the tallies from two different counting programs differed by a single vote in just one race, the reason for the discrepancy would be open to public scrutiny because both the data and the programs would be available for all to see.

By providing a voter-verifiable audit trail from the polling place to the final tally, this scheme addresses the major weakness in existing voting systems recently noted by William Arbaugh (“The Real Risk of Digital Voting?” Computer, Dec. 2004, pp. 124-125).
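A voter, or any watchdog group, could perform the individual-ballot check with equally little code. This sketch reuses the hypothetical file and element names from the examples above and simply prints every recorded choice for one ballot ID so it can be compared against the take-home copy.

    import xml.etree.ElementTree as ET

    def print_ballot(path: str, ballot_id: str) -> None:
        """Find one ballot by its ID in the published raw data file
        and print its recorded choices."""
        root = ET.parse(path).getroot()
        for ballot in root.iter("Ballot"):
            if ballot.findtext("ID") == ballot_id:
                for element in ballot.iter():
                    text = (element.text or "").strip()
                    if text:                     # skip purely structural elements
                        print(f"{element.tag}: {text}")
                return
        print("No ballot with that ID in this file.")

    print_ballot("geauga_raw.xml", "396138756")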
OBSTACLES

One potential problem with this system is that voters might look at their printed mark-sense sheets and insist they had made some different choice on the touch screen. This situation will likely arise only with faulty equipment, faulty software, or a confused voter. In any event, it could be dealt with easily: The touch-screen equipment would be programmed to permit just one printing of a voter’s ballot. If the voter were to allege a printing error or a printer’s mechanical failure, an election official could destroy the spoiled ballot and enter a password to allow a reprint. The voter would be permitted to amend his or her selections before printing the mark-sense sheet again.

It might be possible, however, to avoid the optical scanning step altogether if the touch-screen terminal transmitted the voter’s selections directly to a central server. The same XML encoding scheme could be used to record the raw data, and errors that the optical scanning step might introduce would be avoided. Optical scanning would then serve only as a backup in case the ballot recording server failed.

This brings us back to the issue of public trust: Would voters trust a system in which the marked paper ballot had been relegated to a backup role only? The public might need time and experience using the scanned forms as official ballots before it accepts a phasing out of the scanning step. As long as each voter receives a printed form to check against the official election data, eliminating the scanning process seems feasible.

Election officials within each voting jurisdiction could determine the various implementation details within this framework. For example, the unique ballot ID numbers might be chosen in various ways: They might be unique within a given number range for each county or congressional district, or statewide. The choice of equipment vendors would likewise be made
within each voting jurisdiction.

Traditional manually marked optical-scan ballots could serve as absentee ballot forms. The absentee voter might obtain a unique ballot ID number from a Web site or by phoning an automated server. After encoding the ID number by filling in circles on a grid, the voter could mail the completed paper ballot in a sealed envelope. The data from the absentee ballots would eventually be recorded in the same XML file used for all other voters in a given jurisdiction.
Few if any technologies are 100 percent foolproof. Many problems with punched-card ballots, for example, have resulted from poll workers failing to simply clean out the accumulated chads in the punch apparatus (Neville Holmes, “US Electoral Reform: The Obvious Obligation,” Computer, Feb. 2001, pp. 128, 126-127). The system I have described could fail too if, for example, the laser printer toner cartridges were not replaced as needed. A spilled cup of coffee might ruin scores of paper ballots before they had been scanned. Other unfortunate scenarios aren’t hard to imagine.

While not a panacea, a system along these lines might go a long way toward raising public confidence in the electoral process. Because most of the technology already exists, the cost of implementation should not be excessive. A discussion of this type of system, and of various alternative technologies, might lead to an even better solution to the centuries-old problem of tabulating votes, an essential process in every democracy. ■

Thomas K. Johnson is the president of T.K. Johnson and Associates. Contact him at [email protected].
Editor: Neville Holmes, School of Computing, University of Tasmania;
[email protected]. Links to further material are at www.comp.utas.edu.au/users/nholmes/prfsn.