software construction
Editors: Andy Hunt and Dave Thomas ■ The Pragmatic Programmers ■ [email protected][email protected]
State Machines
Dave Thomas and Andy Hunt
We are surrounded by real-world state machines: ballpoint pen retractor mechanisms, vending machines, washing-machine controllers, digital watches. They are a trivial but underused technology that can simplify how we implement programs that must track how they got to their current state before handling a new event.

However, many programmers feel that state machines are only useful when they're developing communication protocol stacks, which is not an everyday activity. This is unfortunate. State machines can be appropriate in surprising circumstances. Correctly applied, they will result in faster, more modular, less coupled, and easier-to-maintain code. State machines make it easy to eliminate duplication, honoring the DRY principle.1 They also let you write more expressive code, because you can specify intent and implementation independently. These are all good, pragmatic reasons to investigate them further, so let's look at some simple state machine implementations and problems they can solve.
Stating the obvious

A state machine is a system with a set of unique states. One state is special—it represents the system's initial state. One or more of the other states are final states; when an event causes us to reach one of these, the state machine exits. States are connected by transitions. Each transition is labeled with the name of an input event. When that event occurs, we follow the corresponding transition from the current state to arrive at the new state.

State machines are often represented as diagrams with the states shown as circles and the transitions as labeled arrows between the states. Figure 1 shows a simple state machine that exits when a set of coin tosses results in a head, then a tail, and then another head. It starts in state S0. If we toss a tail, the transition loops back and we stay in S0; otherwise, we move on to S1. This state moves to S2 if we toss a tail next. From there we move on to S3, a final state, if we see another head. This type of state machine is sometimes called a deterministic finite state machine or automaton. The graph in Figure 1 is a state transition diagram.

Using state machines

A state machine is useful whenever we have a program that handles input events and has multiple states depending on those events. These situations arise frequently in communications, parsing, emulations, and handling user input. You can spot a program that's a candidate for a state machine by looking for code that contains either deeply nested if statements or many flag variables. You can eliminate the flags and flatten the nesting using a state machine.
A while ago, Dave wrote a simple Web-based order-handling system. Customers could pay by check or purchase order. When orders were initially entered, a confirmation was mailed. When payment was received, the products were shipped. If that payment was a purchase order, the program generated an invoice and tracked its subsequent payment status. Because events could occur weeks apart, the status had to be tracked in a database. After a while, the code started to get messy and handling the special cases began to get ugly. So, Dave reimplemented the code as a simple state machine. This state machine ended up having a dozen or so states and perhaps 15 action routines to deal with transitions between these states. The resulting code was a lot clearer (and a lot shorter). And when the customer changed the application's business rules, typically Dave just changed a few entries in the table that defined the state transitions.

However, that's a fairly complex example. Let's look at something simpler, such as a program that counts words in text. Here the input events are the characters we read, and the states are "W: in a word" and "S: not in a word." We can increment the word count whenever we transition from S to W. Figure 2 shows the state transition diagram. Note that we've added a semantic action to one of the transitions—we increment a count on the S → W transition.

On its own, this example might not be particularly compelling—the code required to implement the state machine word counter is probably about the same size as the conventional version. However, say our requirement changed slightly—our client tells us that the program should now handle HTML files, ignoring any text between "<" and ">". We also deal with quoted strings, so that "Now is the for all good people" should count seven words, and correctly ignore the ">" in the quoted string.
Figure 1. A state machine that exits for a head-tail-head sequence of coin tosses.
Figure 2. A state transition diagram that counts words in our text input.
Figure 3. A state machine that counts words in HTML. We omitted the eof transitions (shown in Figure 2) for clarity.
If we had taken the conventional approach, we'd now need flags for "skipping a command" and "in a quoted string." With the state machine, it is a simple extension, shown in Figure 3.

Implementing state machines

For very simple state machines, we find it is easiest to implement the states and transitions manually. We use a variable to keep the current state and update it as events happen. Typically we'll have a case statement to handle the different states or events. However, once we start becoming more complex, we convert the state transition diagram to a 2D table. The table is indexed by the current state and the input event, returning the resulting next state. It's convenient to include an action code in each table entry too, because this tells us what to do on each transition.
These table entries are conveniently represented as structures or simple data-only classes. A more sophisticated implementation could replace these table entries with objects that include the behavior to be performed. Applications coded this way often have a trivial main loop:

    while (eventPending()) {
      event = getNextEvent();
      entry = transitions[currentState][event];
      entry.executeAction();
      currentState = entry.nextState();
    }
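To make that concrete, here is a minimal sketch of the Figure 2 word counter written around such a table, assuming a recent C# compiler. The WordCountMachine, State, Event, and Entry names are invented for illustration and are not from the column—only the two states, the isalpha-style event split, and the count-on-S→W action come from the text.

using System;
using System.Collections.Generic;

enum State { S, W }          // S: not in a word, W: in a word
enum Event { Alpha, Other }  // isalpha(ch) in the article; char.IsLetter here

// One table entry: the next state plus an action to run on the transition.
class Entry
{
    public State Next;
    public Action Action;
    public Entry(State next, Action action) { Next = next; Action = action; }
}

class WordCountMachine
{
    int count = 0;
    State current = State.S;
    Dictionary<(State, Event), Entry> transitions;

    public WordCountMachine()
    {
        transitions = new Dictionary<(State, Event), Entry>
        {
            // The S -> W transition carries the semantic action: count a new word.
            { (State.S, Event.Alpha), new Entry(State.W, () => count++) },
            { (State.S, Event.Other), new Entry(State.S, () => { }) },
            { (State.W, Event.Alpha), new Entry(State.W, () => { }) },
            { (State.W, Event.Other), new Entry(State.S, () => { }) },
        };
    }

    public int CountWords(string text)
    {
        count = 0;
        current = State.S;
        foreach (char ch in text)
        {
            Event ev = char.IsLetter(ch) ? Event.Alpha : Event.Other;
            Entry entry = transitions[(current, ev)];
            entry.Action();          // semantic action for this transition
            current = entry.Next;    // move to the new state
        }
        return count;
    }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(new WordCountMachine().CountWords("Now is the time"));  // 4
    }
}

Handling the HTML case of Figure 3 would then mean adding Cmd and Str rows to the table rather than threading new flags through the scanning loop.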
Once we have a program in this form, we can easily change it as new requirements come along. For example, if our client suddenly wants us to count HTML commands, we merely add a new action to the S → Cmd and W → Cmd transitions in the table. If we notice that we're handling HTML comments incorrectly, we just add a couple of new states and update the table accordingly—the main program doesn't change at all.

More sophisticated implementations

Once you get into the realm of large state machines, maintaining the state table manually becomes an error-prone chore. Rather than coding the table directly in our implementation language, we normally write a plain text file containing a simpler representation and use this to generate code. For the HTML word counter, our state file might start something like:

    S: LT → (CMD NONE), WORD → (W INC), default → (S NONE)
    W: LT → (CMD NONE), W → (W NONE), default → (S NONE)
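For illustration only, the sketch below reads lines in that format straight into an in-memory table and interprets it, rather than generating source the way the column goes on to describe; the Transition and StateFileParser names, and the forgiving string handling, are assumptions of this sketch.

using System;
using System.Collections.Generic;

// A parsed transition: "EVENT → (NEXTSTATE ACTION)".
class Transition
{
    public string NextState;
    public string Action;
    public Transition(string nextState, string action)
    {
        NextState = nextState;
        Action = action;
    }
}

class StateFileParser
{
    // Turns lines such as
    //   S: LT → (CMD NONE), WORD → (W INC), default → (S NONE)
    // into a lookup of state -> (event -> transition).
    public static Dictionary<string, Dictionary<string, Transition>> Parse(
        IEnumerable<string> lines)
    {
        var table = new Dictionary<string, Dictionary<string, Transition>>();
        foreach (string raw in lines)
        {
            string line = raw.Trim();
            int colon = line.IndexOf(':');
            if (line.Length == 0 || colon < 0) continue;   // skip blanks and junk

            string state = line.Substring(0, colon).Trim();
            var row = new Dictionary<string, Transition>();

            foreach (string part in line.Substring(colon + 1).Split(','))
            {
                // Each part looks like "LT → (CMD NONE)".
                string[] sides = part.Split('→');
                string ev = sides[0].Trim();
                string[] target = sides[1].Trim().TrimStart('(').TrimEnd(')')
                    .Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
                row[ev] = new Transition(target[0], target[1]);
            }
            table[state] = row;
        }
        return table;
    }

    static void Main()
    {
        var table = Parse(new[]
        {
            "S: LT → (CMD NONE), WORD → (W INC), default → (S NONE)",
            "W: LT → (CMD NONE), W → (W NONE), default → (S NONE)"
        });
        Console.WriteLine(table["S"]["WORD"].NextState);  // W
        Console.WriteLine(table["S"]["WORD"].Action);     // INC
    }
}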
Depending on the target language, we might then generate from this a header file containing the definitions of the states, events, and actions, and a source file containing the transition table. The actions could be defined as enumerations or possibly as a set of function pointers. Robert Martin of Object Mentor implemented state machine compilers for Java and C++ based on these principles. You can download them from www.objectmentor.com/resources/downloads/index.

State machines and object-oriented development

If you're working in an object-oriented environment, the same basic principles apply. However, you can also use classes to provide a clean interface to the thing being modeled. In Design Patterns,2 the Gang of Four present the State pattern. Their example is a TCP connection. As the connection changes state (presumably driven by an internal state transition system similar to the ones we discussed earlier), the connection object changes its behavior. When the connection is in the closed state, for example, a call to open it might succeed. However, if the connection is open already, the same call will be rejected. This is a tidy approach to managing the external interface to a state-driven system.
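Here is a minimal, hypothetical sketch of that State pattern idea in C#, modeled loosely on the Design Patterns TCP example rather than taken from it; the Connection, ClosedState, and OpenState names are invented for illustration.

using System;

// Each state object decides how the connection responds to the same calls.
interface IConnectionState
{
    IConnectionState Open(Connection c);
    IConnectionState Close(Connection c);
}

class ClosedState : IConnectionState
{
    public IConnectionState Open(Connection c)
    {
        Console.WriteLine("opening connection");
        return new OpenState();           // closed -> open succeeds
    }
    public IConnectionState Close(Connection c)
    {
        Console.WriteLine("already closed");
        return this;
    }
}

class OpenState : IConnectionState
{
    public IConnectionState Open(Connection c)
    {
        Console.WriteLine("open rejected: connection is already open");
        return this;                      // open -> open is rejected
    }
    public IConnectionState Close(Connection c)
    {
        Console.WriteLine("closing connection");
        return new ClosedState();
    }
}

// The connection delegates to its current state and records the result.
class Connection
{
    IConnectionState state = new ClosedState();
    public void Open()  { state = state.Open(this); }
    public void Close() { state = state.Close(this); }
}

class StateDemo
{
    static void Main()
    {
        var c = new Connection();
        c.Open();   // succeeds
        c.Open();   // rejected
        c.Close();  // succeeds
    }
}

Each call is answered by the current state object, so the open-versus-closed behavior lives in the state classes instead of in flag checks scattered through Connection.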
State machines are an underused tool. The next time you find yourself adding "just one more flag" to a complex program, take a step back and see if perhaps a state machine might handle the job better.
References
1. A. Hunt and D. Thomas, "Don't Repeat Yourself," The Pragmatic Programmer, Addison-Wesley, Boston, 2000.
2. E. Gamma et al., Design Patterns, Addison-Wesley, Boston, 1995.
Dave Thomas and Andy Hunt are partners in The Pragmatic Programmers, LLC. They feel that software consultants who can't program shouldn't be consulting, so they keep current by developing complex software systems for their clients. Contact them via www.pragmaticprogrammer.com.
design
Editor: Martin Fowler ■ ThoughtWorks ■ [email protected]

Using Metadata
Martin Fowler
I occasionally come across people who describe their programming tasks as tedious, which is often the sign of a design problem. One common source of tedium is pulling data from an external source. You almost always do the same thing with the data, but because the data differs each time, it's difficult to reduce such tedious programming. This is when you should consider using metadata.

To illustrate the approach, consider a simple design problem: build a module that will read data out of a simple file format into memory. One example of this file is a tab-delimited format with the first line containing the names of the fields (see Table 1).
class ExplicitReader...
  public String FileName;
  TextReader reader;
  static char[] SEPARATOR = {'\t'};

  public ExplicitReader (String fileName) {
    FileName = fileName;
  }

  public IList ReadBatsmen() {
    IList result = new ArrayList();
    reader = File.OpenText (FileName);
    reader.ReadLine(); //skip header
    String line;
    while ((line = reader.ReadLine()) != null) {
      String[] items = line.Split(SEPARATOR);
      Batsman bat = new Batsman();
      bat.Name = items[0];
      bat.Matches = Int32.Parse(items[1]);
      bat.Innings = Int32.Parse(items[2]);
      bat.Runs = Int32.Parse(items[3]);
      result.Add(bat);
    }
    return result;
  }
}

public class Batsman...
  public String Name;
  public int Matches;
  public int Innings;
  public int Runs;

Figure 1. A simple, explicit solution for reading data from a tab-delimited file.
Explicit and implicit reads

Figure 1 offers perhaps the most straightforward approach to this problem—reading each column of data into a record structure. As a program, it's pretty simple, because it's easy to read and to write. Trouble rears, however, if you have a lot of files to read. You have to write this program for each file, which is a tedious job, and tedium usually has a bad smell—indicating worse troubles. In this case, the trouble would be duplication—always something worth avoiding.
Table 1. Actual listing 1

Name            Matches   Innings   Runs
DG Bradman          52        80    6,996
RG Pollock          23        41    2,256
GA Headley          22        40    2,190
H Sutcliffe         54        84    4,555
AC Gilchrist        31        44    2,160
E Paynter           20        31    1,540
KF Barrington       82       131    6,806
ED Weekes           48        81    4,455
WR Hammond          85       140    7,249
Figure 2. An implicit design solution for reading in data from multiple files.

public class ImplicitReader...
  public String FileName;
  TextReader reader;
  static char[] SEPARATOR = {'\t'};

  public ImplicitReader (String fileName) {
    FileName = fileName;
  }

  public IList Read() {
    IList result = new ArrayList();
    reader = File.OpenText (FileName);
    IList headers = parseHeaders();
    String line;
    while ((line = reader.ReadLine()) != null) {
      result.Add(parseLine(headers, line));
    }
    return result;
  }

  IList parseHeaders() {
    IList result = new ArrayList();
    String[] items = reader.ReadLine().Split(SEPARATOR);
    foreach (String s in items)
      result.Add(s);
    return result;
  }

  IDictionary parseLine (IList headers, String line) {
    String[] items = line.Split(SEPARATOR);
    IDictionary result = new Hashtable();
    for (int i = 0; i < headers.Count; i++)
      result[headers[i]] = items[i];
    return result;
  }

Figure 3. An explicit design that uses substitution on the variable part of the program.

abstract class AbstractReader {
  public AbstractReader (String fileName) {
    FileName = fileName;
  }
  public String FileName;
  protected TextReader reader;
  protected static char[] SEPARATOR = {'\t'};

  public IList Read() {
    IList result = new ArrayList();
    reader = File.OpenText (FileName);
    skipHeader();
    String line;
    while ((line = reader.ReadLine()) != null) {
      String[] items = line.Split(SEPARATOR);
      result.Add(doRead(items));
    }
    return result;
  }

  private void skipHeader() {
    reader.ReadLine();
  }

  protected abstract Object doRead (String[] items);
}

class ExplicitReader2 : AbstractReader ...
  public ExplicitReader2 (String fileName) : base (fileName){}

  override protected Object doRead(String[] items) {
    Batsman result = new Batsman();
    result.Name = items[0];
    result.Matches = Int32.Parse(items[1]);
    result.Innings = Int32.Parse(items[2]);
    result.Runs = Int32.Parse(items[3]);
    return result;
  }
Figure 2 offers one approach to avoiding this tedium, a generic way to read in any data from a file. The advantage is that this single program will read in any file, providing it follows the general format. If you have a hundred of these kinds of files to read, writing a single program like this takes a lot less effort than writing an explicit program (as in Figure 1) for each file.
public class ReflectiveReader ...
  public String FileName;
  TextReader reader;
  static char[] SEPARATOR = {'\t'};
  public Type ResultType;

  public ReflectiveReader (String fileName, Type resultType) {
    FileName = fileName;
    ResultType = resultType;
  }

  public IList Read() {
    IList result = new ArrayList();
    reader = File.OpenText (FileName);
    IList headers = parseHeaders();
    String line;
    while ((line = reader.ReadLine()) != null) {
      result.Add(parseLine(headers, line));
    }
    return result;
  }

  IList parseHeaders() {
    IList result = new ArrayList();
    String[] items = reader.ReadLine().Split(SEPARATOR);
    foreach (String s in items)
      result.Add(s);
    return result;
  }

  Object parseLine (IList headers, String line) {
    String[] items = line.Split(SEPARATOR);
    Object result = createResultObject();
    for (int i = 0; i < headers.Count; i++) {
      FieldInfo field = ResultType.GetField((String)headers[i]);
      if (field == null)
        throw new Exception ("Unable to find field: " + headers[i]);
      field.SetValue(result, Convert.ChangeType(items[i], field.FieldType));
    }
    return result;
  }

  Object createResultObject() {
    Type[] constructorParams = {};
    return ResultType.GetConstructor(constructorParams).Invoke(null);
  }
}

Figure 4. A reflective programming design.
The problem with this generic style is that it produces a dictionary, which is easy to access (especially when your language supports a simple index mechanism as C# does) but is not explicit. Consequently, you can't just look at a file declaration to discover the possible fields you must deal with, as you can with the Batsman class in Figure 1. Furthermore, you lose all type information.

So, how can you have your explicit cake while eating only a small amount of code? One approach is to parameterize the assignment statements from Figure 1 by enclosing them in a single substitutable function. Figure 3 does this with the object-oriented style of an abstract superclass. In a more sophisticated programming language, you could just pass the block of assignment statements in as a function argument.
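In C# the same effect is available without a subclass by passing the assignment block in as a delegate; the following is a hypothetical sketch of that alternative (the DelegateReader name and shape are mine, not one of the column's figures), kept deliberately close to the Figure 3 design.

using System;
using System.Collections;
using System.IO;

// Batsman as declared in Figure 1.
public class Batsman
{
    public String Name;
    public int Matches;
    public int Innings;
    public int Runs;
}

// Same loop as AbstractReader in Figure 3, but the per-line assignment
// block is passed in as a function argument instead of an overridden method.
class DelegateReader
{
    static char[] SEPARATOR = {'\t'};
    readonly string fileName;
    readonly Func<string[], object> makeObject;

    public DelegateReader(string fileName, Func<string[], object> makeObject)
    {
        this.fileName = fileName;
        this.makeObject = makeObject;
    }

    public IList Read()
    {
        IList result = new ArrayList();
        using (TextReader reader = File.OpenText(fileName))
        {
            reader.ReadLine();  // skip header
            string line;
            while ((line = reader.ReadLine()) != null)
                result.Add(makeObject(line.Split(SEPARATOR)));
        }
        return result;
    }
}

class DelegateReaderDemo
{
    static void Main()
    {
        // The assignment block from Figure 1, supplied as a lambda.
        var reader = new DelegateReader("batsmen.txt", items => new Batsman
        {
            Name = items[0],
            Matches = Int32.Parse(items[1]),
            Innings = Int32.Parse(items[2]),
            Runs = Int32.Parse(items[3])
        });
        Console.WriteLine(reader.Read().Count);
    }
}

The explicit field assignments stay visible at the call site, while the file-reading loop is written only once—the same tradeoff the column describes for Figure 3.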
By parameterizing the assignment statements, you can reduce duplication. You can also reduce—but not eliminate—the tedium. All those assignments still must be written, both for reading and writing (if you are supporting both). However, by taking advantage of the metadata in both the target class and file structure, you can avoid writing any assignments at all.

The metadata is available in two forms. The field heading at the top of the data file is simple metadata that supplies the field names (XML tag names give the same information). If the target class's fields match the data file's names (or if we can make them match), we have enough to infer the assignments.
If we can query the target class's metadata, we can also determine the types for the target class's fields. This lets us handle the type conversions properly.

Two ways of using the metadata

We can use the metadata in two ways: reflective programming and code generation. The reflective programming approach leads us to a program that uses reflection at runtime to set field values in the target class (see Figure 4). Many modern platforms provide this kind of runtime reflection. The resulting reader class can read any file that conforms to the format and has a matching target class.

The code generation style aims to generate a class that's similar to the hand-written one in Figure 3.
public class ReaderGenerator ...
  String DataFileName;
  Type Target;
  String ClassName;
  TextWriter output;

  public void Run() {
    output = new StringWriter();
    writeClassHeader();
    writeConstructor();
    writeDoRun();
    writeClassFooter();
    Console.WriteLine(output);
    writeOutputFile();
  }

  void writeClassHeader() {
    output.WriteLine("using System;");
    output.WriteLine("namespace metadata");
    output.WriteLine("{");
    output.WriteLine(String.Format("class {0} : AbstractReader ", ClassName));
    output.WriteLine("{");
  }

  void writeClassFooter() {
    output.WriteLine("}");
    output.WriteLine("}");
  }

  void writeConstructor() {
    output.Write(String.Format("\t public {0} () : base (\"{1}\")", ClassName, DataFileName));
    output.WriteLine("{}");
  }

  static char[] SEPARATOR = {'\t'};

  void writeDoRun() {
    output.WriteLine("\toverride protected Object doRead(String[] items) {");
    output.WriteLine(String.Format("\t\t{0} result = new {0}();", Target));
    writeFieldAssignments();
    output.WriteLine("\t\treturn result;");
    output.WriteLine("\t}");
  }

  void writeFieldAssignments() {
    TextReader dataReader = File.OpenText (DataFileName);
    String[] headers = dataReader.ReadLine().Split(SEPARATOR);
    dataReader.Close();
    for (int i = 0; i < headers.Length; i++) {
      FieldInfo field = Target.GetField((String)headers[i]);
      if (field == null) throw new Exception ("Unknown Field: " + headers[i]);
      output.WriteLine(String.Format(
        "\t\t result.{0} = ({1})Convert.ChangeType(", headers[i], field.FieldType));
      output.WriteLine(String.Format(
        "\t\t\titems[{0}],typeof({1}) );", i, field.FieldType));
    }
  }

  void writeOutputFile() {
    StreamWriter outFile = new StreamWriter(File.Create(ClassName + ".cs"));
    outFile.Write(output);
    outFile.Close();
  }
}

Figure 5. A generator.
We can use the style presented in Figure 1, because we don't have to worry about duplication in the generated code. Figure 5 shows the kind of class we could use to perform the generation, and Figure 6 shows the resulting class. Although I'm using the same language in this case, there's no reason why the generator must be the same language as the class it's generating—scripting languages often make good languages for generation due to their powerful string handling.
Figure 6. Example code that Figure 5 generated.

using System;
namespace metadata
{
  class ExplicitReader3 : AbstractReader {
    public ExplicitReader3 () : base ("batsmen.txt"){}
    override protected Object doRead(String[] items) {
      metadata.Batsman result = new metadata.Batsman();
      result.Name = (System.String)Convert.ChangeType(
        items[0],typeof(System.String) );
      result.Matches = (System.Int32)Convert.ChangeType(
        items[1],typeof(System.Int32) );
      result.Innings = (System.Int32)Convert.ChangeType(
        items[2],typeof(System.Int32) );
      result.Runs = (System.Int32)Convert.ChangeType(
        items[3],typeof(System.Int32) );
      return result;
    }
  }
}
The generator also uses the language's reflection capabilities to determine the field types; however, it does it at compile time rather than at runtime. The generated classes don't use the language's reflection capabilities.

Given these two styles of metadata-based programs, the obvious question is when to use each style. The reflective program offers a single compact class to carry out the mapping. There are, however, some disadvantages. Many people find reflection somewhat hard to use, and it might defeat some of your environment's tooling, such as intelligent reference searches and automated refactorings. In addition, in some environments, reflective calls can be significantly slower than direct method calls.

Generation also has its problems. You need discipline to ensure that developers don't hand-edit the generated files. You must also ensure that generation is done with every significant change—the best way of doing this is to make it part of an automated build process. With many files, generation might lead to a larger code bulk, which might affect footprint and build times. I usually prefer generation to reflective programs, but you have to weigh your decision based on your concerns.
There's also the question of whether to use metadata-based techniques at all. For something like this, I wouldn't bother for a few classes. I'd just use a technique to separate the varying code from the constant code. I can't give a hard number for when it's better to use metadata—it's more a reflection of the degree to which the assignment's monotony is affecting development.
Martin Fowler is the chief scientist for ThoughtWorks, an Internet systems delivery and consulting company. Contact him at [email protected].
focus
guest editors’ introduction
Software Engineering as a Business
Ann Miller, University of Missouri-Rolla
Christof Ebert, Alcatel
No matter what business you are in, you are also part of the software business. Software makes the world go round—at ever increasing speeds. Computer-based, software-driven systems pervade today's society. From avionics flight control to the ubiquitous computers in personal digital assistants and cellular phones, to automotive and consumer electronics, software provides features and functions in daily use.
Increasingly, system functionality is implemented in software. Where we used to split hardware from software, the business case entirely determines such boundaries now—what we best package at which level in which component, be it software or silicon. For example, a TV set in the 1970s had no software, whereas today its competitive advantages and the majority of engineering efforts are software-driven.

The software business, however, has manifold challenges, ranging from the creation process and its inherent risks to direct balance sheet impacts. For example, the Standish Group found in its survey (2000 edition of the Chaos Report) that only 26 percent of the projects finished on time and within budget and a staggering 28 percent were canceled before delivery.
Moreover, the remaining projects, which all finished late, over budget, or both, delivered only a fraction of the planned functionality (www.standishgroup.com). Introducing a product to market late loses market share; canceling a product before it ever reaches the market sinks scarce R&D funds. Not only is software increasing in size, complexity, and percentage of functionality, it is increasing in contribution to the balance sheet and profit-and-loss statements. To make matters worse, requirements are easily and frequently changed. A recent study by the US National Institute of Standards and Technology reports that insufficient software testing costs the US as much as US$59 billion a year and that up to US$22 billion of that could be saved if licensed software had just 50 percent fewer defects (NIST, The Economic Impacts of Inadequate Infrastructure for Software Testing, Washington, D.C., 2002; see also the News report on p. 97).

This special issue is devoted to the business of software engineering. We explore some of the critical factors associated with succeeding in today's high-tech software businesses and discuss skills, knowledge, and abilities that software practitioners need to improve their business decision-making capabilities. We illustrate business cases to supplement technical arguments for process and technology improvement. We also address how software engineering can help small and large businesses as well as start-ups.

Every business needs good communication to be successful and reduce friction, whether it is from engineer to manager, manager to engineer, or engineer to engineer. It is easy for a company to relegate software to a low priority when it is focusing on other technologies in its products. Software engineers must speak out clearly and be heard and understood by management. Both sides must learn how to address each other's real needs. Management doesn't care for technical jargon, and engineers are easily confused with capitalization and depreciation questions about their software. The brief article on translating "software developer speak" to "management speak" and vice versa can help here.

We must consistently set targets on each
level in the company and track them continuously against actual performance. We must manage changes in the business climate, or they will ripple through uncontrolled. And the committed targets must be followed through! We therefore present the Balanced Scorecard approach. Each company and software team can introduce this approach to focus on the right things and balance short-term (survival) needs with medium- to long-term investments and improvements.

As the saying goes, time is money; therefore, wasted development time is lost revenue. Thus, processes and methods that improve our ability to deliver reliable, quality software are important.

Security is a quickly growing software business. Recent examples for business models include the sale of hacker insurance, which works to keep corporate Web sites from defacement or denial-of-service attacks by hackers and protects databases, such as those maintaining credit card information. The more we share and network, the more we are exposed to attacks of all kinds. The exploding need for secure software and protection schemes for our business processes, end-to-end, indicate this impact. Our Point/Counterpoint discussion takes up one example from the security domain and illustrates two ways to approach software security and how that decision ripples into business decisions.
We have not explicitly addressed the "technical career track" versus the "management ladder." We believe that such discussions are individual choices. However, we do hope that all software practitioners value their stake in their respective software business decisions and that these pages offer ideas for increasing your return on investment in that business. The "Suggested Reading" sidebar offers more food for thought.

Whether your customer is internal to your company or a traditional external client or user of your product, whether the product is shrink-wrapped and shipped or a service or embedded system, customer satisfaction is part of good business and of good software. Here's to you and your customer.
Suggested Reading

Books

■ Making the Software Business Case: Improvement by the Numbers by Donald J. Reifer, Addison-Wesley, Boston, 320 pp., ISBN 0-201-72887-7, 2001. This practical handbook shows you how to build an effective business case when you need to justify—and persuade management to accept—software change or improvement. Based on real-world scenarios, the book covers the most common situations that require business case analyses and explains specific techniques that have proved successful. The book provides examples of successful business cases; along the way, tables, tools, facts, figures, and metrics guide you through the entire analytic process. An excellent book to learn how to prepare and implement a business case and thus make software a successful business.

■ Software Product Management: Managing Software Development from Idea to Product to Marketing to Sales by Dan Condon, Aspatore Books, Boston, 256 pp., ISBN 1-58762-202-5, 2002. This book decodes the software product management process with an emphasis on coordinating the needs of stakeholders ranging from engineering, sales, and product support to technical writing and marketing. Based on real-world experience in managing the development of enterprise software, this book details how a team can work together smoothly to achieve their goal of releasing a superior software product on time. Although it's not primarily about setting up a business, the book explains hands-on what is necessary in daily operational fights to succeed.

■ Secrets of Software Success: Management Insights from 100 Software Firms around the World by Detlev J. Hoch, Cyriac R. Roeding, Gert Purkert, and Sandro K. Lindner, Harvard Business School Press, Boston, 256 pp., ISBN 1-57851-105-4, 1999. The book describes results from a McKinsey study about what's driving the prosperity of the world's best software companies and what's responsible for the failure of others. It's loaded with sharp insights and colorful anecdotes from leaders of companies such as Microsoft Germany, Keane Inc., BroadVision, Andersen Consulting, Oracle, Sun Microsystems, and Navision in Denmark. The authors conclude that business opportunities in the software arena remain strong. It thus serves as a huge collection of excellent lessons learned for those who are about to launch.

■ Champions of Silicon Valley: Visionary Thinking from Today's Technology Pioneers by Charles G. Sigismund, John Wiley & Sons, New York, 294 pp., ISBN 0-471-35346-9, 2000. An up-close, personal look at the high-tech industry's most powerful venture capitalists, technologists, and entrepreneurs. This fascinating book goes beyond Silicon Valley's glitz and glamor to tap into the energy and vision that turned it into the epicenter of global business. The book nicely links the various dimensions—from finance to technology to operations—that make software a successful business.

■ Winners, Losers & Microsoft: Competition and Antitrust in High Technology by Stephen E. Margolis and Stan Liebowitz, Independent Inst., Oakland, Calif., 288 pp., ISBN 0-945999-80-1, 1999. This book is unfortunately titled. It is primarily about bringing real data and rigor to bear on many of the conventional "stories" about the economics of the new economy, rather than about the Microsoft antitrust situation. With some well-documented and original research, the authors conclude that Microsoft is as successful as it is for a simple reason: good products win. This book is also the best we have seen in its treatment of the overall economics of information technology standards.

■ Information Rules: A Strategic Guide to the Network Economy by Carl Shapiro and Hal R. Varian, Harvard Business School Press, Boston, 352 pp., ISBN 0-87584-863-X, 1998. Information Rules is a blueprint for success and survival in today's highly dynamic and competitive Internet economy. The authors posit that although technology changes, the laws of economics do not. They stress that we can learn much from success stories as well as past failures. This book offers models, concepts, and analysis that give readers a deeper understanding of the fundamental principles in today's high-tech industries and enable them to craft winning strategies for tomorrow's network economy.

The Internet

■ www.dacs.dtic.mil: State-of-the-art software-related information and technical support
■ www.processimprovement.com/resources/spm.htm: Project management resources; a good entry portal for further links
■ www.pmi.org: Project Management Institute; includes excerpts from the Project Management Body of Knowledge
■ www.construx.com: Software engineering tools, consulting, training, and more
■ www.spr.com: Software Productivity Research; estimation, project management, metrics, training, and more
■ www.construx.com/survivalguide: Steve McConnell's Software Project Survival Guide Web site
■ www.nnh.com: Earned value Web site
■ http://smallbusiness.yahoo.com: Yahoo's popular portal for small businesses
■ www.standishgroup.com: Entry point to the Standish Group's reports and project summaries
■ www.ipaustralia.gov.au/strategies/case/menu.htm: Intellectual property business, strategies, opportunities, protection, and licensing
■ www.cio-dpi.gc.ca: Homepage of the Chief Information Officer Branch of the Treasury Board of Canada; good introductions into portfolio management and business cases for IT projects
About the Authors

Ann Miller is the Cynthia Tang Missouri Distinguished Professor of Computer Engineering at the University of Missouri-Rolla and chair of the NATO Information Systems Technology Panel. She has held several senior management and technical positions in industry and in government service. She is the IEEE Software associate editor in chief for management and is on the Administrative Committee of the IEEE Reliability Society. Contact her at the Univ. of Missouri-Rolla, 125 Emerson Electric Company Hall, Rolla, MO 65409-0040; [email protected].

Christof Ebert is director of the Chief Technology Office in charge of software coordination and process improvement at Alcatel in Paris, where he drives R&D process change and innovation programs. Previously, he led the biggest Alcatel business unit to CMM Level 3, achieving substantial quality improvements and cycle time reduction. He is the IEEE Software associate editor in chief for requirements. Contact him at Alcatel, 54, rue la Boetie, F-75008 Paris; [email protected].
focus
the business of software engineering
A Balanced Scorecard for a Small Software Group
Steven Mair
The Balanced Scorecard is an effective and comprehensive methodology that can help organizations link their performance metrics to strategic objectives. The example matrix and strategy map focus on small development organizations.

You can't make money selling software. At least, that's what many firms that view software as an ancillary component to their business think. Semiconductor manufacturers and other electronic equipment manufacturers typically need to supply drivers and applications to let customers use their components. However, such firms often overlook the value of these software components and, consequently, miss a considerable revenue opportunity.
The problem partly lies with perception: "Are we a semiconductor firm or a software firm?" An even greater part of the problem is educating management, marketing, and finance about the software's potential for revenue. To address this, we need a financial modeling tool that not only captures past results, but also provides a forward-looking view. A typical quarterly profit and loss statement gives a historical perspective of a firm's operations, but it doesn't give management and staff a roadmap. Nor does it let you link specific actions with desired outcomes.

Over the last 11 years, the Balanced Scorecard (BSC) has developed as a way to execute strategic plans and continuously monitor strategic performance. This article presents basic information on BSC methodology as well as critical success factors and common pitfalls.

The Balanced Scorecard

Good managers understand that performance metrics identify what actions to take.
Effective performance metrics must accurately reflect a business situation, guide employees to take the right actions, and gauge those actions' effectiveness. However, in today's fast-changing economies, organizations need more than traditional performance metrics. They need metrics linked to strategic objectives that will promote positive future results and accurately capture past performance. The BSC can help your firm select performance metrics that will drive organizational strategy. Furthermore, the BSC is a method to communicate strategies.

A BSC can be defined as a system of linked objectives, measures, targets and initiatives which collectively describe the strategy of an organization and how the strategy can be achieved. It can take something as complicated and frequently nebulous as strategy and translate it into something that is specific and can be understood.1
Many organizations have successfully implemented a BSC and realized remarkable improvements in their financial performance—becoming, in some cases, leaders in their industries.
Table 1. The four perspectives of the Balanced Scorecard

Perspective            Key question
Financial              To succeed financially, how should we appear to our stakeholders?
Customer               To achieve our vision, how should we appear to our customers?
Operational            To satisfy our customers and shareholders, at what business processes must we excel?
Learning and growth    To achieve our vision, how will we sustain our ability to change and improve?
History

The BSC was devised in the early 1990s by Robert Kaplan of the Harvard Business School and David Norton as a method to help companies manage their increasingly complex and multifaceted business environments.1 This grew out of earlier efforts by Kaplan and Norton to shape the concept in the late 1980s. They saw the limitations of relying on purely financial measures, in particular, short-term financial goals. Companies might appear to be doing poorly in terms of short-term financial metrics because they were investing in core capabilities to drive superior future performance. Lagging indicators conveyed past performance but did not provide a good indication of future performance. Employees often did not understand how their jobs related to the firm's strategy.

Perspectives

The classic BSC has four perspectives (listed in Table 1). You can explain each perspective by an associated key question. The answers to the key question become a perspective's objectives. You can then measure performance against the objectives. The perspectives and key questions in Table 1 reflect possible organizational strategies, out of many, and should be adapted to capture the firm's key strategies.

Objectives and measures

Metrics measure objectives, or desired outcomes. Metrics are quantifiable performance statements that indicate how an initiative is performing relative to its objectives. Metrics must be
■ Relevant to the strategy
■ Stated in the context of a goal to achieve in a defined time
■ Capable of being tracked and owned by a person or group with the power to influence the outcome
A key tenet of the BSC is to balance lagging indicators with leading indicators. Lagging indicators tell us what has happened. In contrast, leading indicators attempt to quantify future results based on current actions. It is also important to balance internally focused metrics, such as cost reduction and productivity, with externally focused metrics such as market share and customer satisfaction.

Tools

Even a simple scorecard can contain an overwhelming amount of information. Strategy maps and a strategy matrix can help communicate large, complex quantities of information in simple, easily understood ways.2 Mapping a strategy lets you show visually an organization's perspectives, objectives, and metrics and helps reveal the links between each factor in the BSC. Creating a map can ensure all elements are consistent and comprehensive in defining and executing the strategy. Maps also let you communicate across organizational boundaries. The strategy matrix is another useful visualization and summarization tool. It displays objectives, metrics, targets, and initiatives in one table. Typically, each strategic theme has its own strategy.

The BSC at work

I lead a 25-person software development department, which developed the BSC described in this article. The group is part of a larger organization that designs and manufactures communications semiconductors. Management views the software group's primary role as providing applications and tools to customers. Secondary roles include providing software tools to other engineering groups in the company, helping customers use the group's software, and providing software engineering design assistance to customers. Overall, management considers the group's software products as secondary to the firm's primary product—semiconductor chip sets—and views the software department as a cost center. Additionally, the firm's culture sees the software group as support for marketing and engineering, in producing the sample software.
Table 2. The software development Balanced Scorecard strategic matrix, with the theme of "timely, targeted software support"

Financial
  Objective: Department should be self-sustaining
  Metric (target): Total software revenue ≥ 2 × (total Full-Time Equivalent Employee salary + benefits costs) (25%: year 1; 100%: year 4)
  Initiatives: Provide marketing with benefits analysis of product and support; Monthly review of sales with marketing

Customer
  Objectives: Deliver complete solutions; Deliver timely solutions; Deliver timely support
  Metrics (targets):
    Number of customer requests for new or missing features (<25 per release: year 1; <10 per release: year 2)
    Release date to marketing (±1 week of plan: year 1; ±1/2 week of plan: year 2)
    Days to answer customer inquiries to customer's satisfaction (1–3 < 1 day; 1–3 < 3 days; 1–3 < 2 weeks)
  Initiatives: Review and analysis of requirements with marketing; Schedule creation and review with marketing; Review and analysis of issues with marketing; Team defect analysis

Operational
  Objectives: Increase quality of delivered software and support; Streamline development process
  Metrics (targets):
    Number of customer requests for bug fixes (<25 per release: year 1; <10 per release: year 2)
    Reduce average time for defect repair (25%: year 1; 50%: year 2)
    Reduce defect density (25%: year 1; 50%: year 2)
    Code review defects found (>90%: year 1; >99%: year 2)
  Initiatives: Team defect analysis; Design review process; Code review process

Learning and growth
  Objectives: Increase C programming language knowledge; Increase software process knowledge; Educate sales and marketing on our software's value
  Metrics (targets):
    Number of engineers using the Software Engineering Institute Personal Software Process (25%: year 1; 50%: year 2; 100%: year 3)
    Number of sales agreements showing separate software revenue line items (25%: year 1; 100%: year 4)
  Initiatives: In-house system training in C; Code review process; PSP training; Training of sales and marketing
Because the customer's final product does not use this software (at least not without modification and customer testing), the software's quality is secondary to the speed at which it is produced. For years, the company had developed integrated circuits of increasing sophistication, and over time, the ICs became too complicated to operate without significant training and support. Accordingly, the firm progressed from delivering very simple software that showed customers how they might use the firm's semiconductors, to software that permitted customers to produce designs quickly, to software that customers had to have to use the firm's ICs. However, the company, following the modus operandi established with the first generation of ICs, continued to give the software away, even though

■ The software gave the IC new flexibility that the customer could leverage to differentiate their products. So, a small company that could never afford to produce a custom IC could still derive many of the benefits of doing so by running different software on the IC.
■ The software reduced customers' time to market.
■ Producing tools for debugging, calibration, and manufacturing tests became nontrivial endeavors. Customers would have difficulty creating these tools without a significant investment in engineering resources.

The software team and some marketing managers believed that the company was missing a considerable revenue opportunity—that it should charge for its software as a separate line item. By doing so, it could account for and realize the revenue associated with software output and move toward becoming a profitable software organization.

Developing the BSC

Table 2 shows the BSC the software team developed, with the strategic theme of "timely, targeted software support."
Figure 1. Software development Balanced Scorecard strategy map.
We aimed to increase our capabilities to deliver customer satisfaction in a timely manner. Additionally, we would work with marketing to develop plans to target profitable customers with weak software capabilities.

We considered the four classic BSC perspectives (see Table 1). Each perspective has an associated objective, metrics, targets, and initiatives. Most have more than one objective, and most objectives have more than one metric, target, and initiative associated with them. To better illustrate the relationships between the perspectives, Figure 1 shows a strategy map. The map gives a visual indication of the interdependencies of each objective and the supporting initiatives.

The various metrics we used in our BSC are available either directly or by statistical inference from existing department processes and tools. This mitigated a major problem of general BSC implementation in that we didn't need to develop new technologies or tools. For example, data for the financial perspective comes from the marketing department's detailed monthly sales reports.
The group's mission

With this BSC, the group aimed to demonstrate to the company that we could successfully and profitably market our software. This would require a cultural and strategic change. Additionally, we understood that operational items such as pricing structures, marketing material, order fulfillment, and sales objectives would need modification. This was the firm's first exploration of this topic and the first time it was using the BSC in planning and executing a strategy.

We agreed that this first step must be quickly viewed as successful; the relationship between the group's methods and its progress should manifest within one year. A typical BSC program plans for initial results in 24 to 36 months. With our compressed schedule, we decided that the initiatives driving each objective should remain, as much as possible, under our direct control.

As with most change initiatives, our efforts returned a wide spectrum of responses. The least resistance came from management; we were providing them a roadmap that promised more revenue. The greatest resistance came from sales and marketing. We were asking them to earn revenue on what had previously been given for free to customers. They worried that they would lose sales if customers resisted moving from free to fee software.

The financial perspective

After reviewing our original BSC with marketing, we restated our goal to make it easier for the firm to understand and more gratifying for the software group—that is, instead of going for a break-even scenario, we would target becoming a profit center in four years.
With this, we established our financial objective. We assumed that the software revenue was being recorded and knew that awareness needed to be raised in the firm. We planned to spend more time with sales and marketing to clarify the value of the software and support, and how the customer benefited. We hoped sales and marketing might then see the software as a separate product, not merely an adjunct to the ICs. Additionally, we set monthly meetings with sales and marketing to review the month's activity and to identify which customers might benefit the most from our software.

The customer

For this perspective, we concentrated on improving what we were doing right. Customers reported that they valued both how we delivered robust solutions in a timely manner and our very good turnaround time for support issues. To continue delivering these objectives, we set initiatives to increase our dialog with marketing. We reasoned that this would help us better understand the issues customers were facing—their schedules and competitors' feature sets. We targeted reducing the number of postrelease feature requests. We believed this would let us reasonably judge our solution's completeness. Furthermore, we established metrics for on-time delivery to marketing.

Our final objective for this perspective was to improve the response time to customer support issues. We measured this in the number of days it took to successfully resolve the customer's issue. To reach this objective, we aimed to review problems with marketing rather than just taking written issue reports. Additionally, we assigned each issue to a software review team, for team defect analysis, instead of a single engineer.

Operational

This perspective called for increasing software quality while streamlining the development process. Most of the team felt we had good design and code review processes but that they required too many people (8 to 10) to execute. We believed we could obtain most of the reviews' benefits with smaller teams (3 to 5). Smaller teams could deliver virtually the same quality in a much shorter time.
To further support this goal, we reused team defect analysis to facilitate defect identification. We believed this would reduce repair time and increase overall quality by ensuring repair of the defect's root cause.

Learning and growth

Ironically, we had the most controversy when we came to this perspective. Some team members wanted to use quantitative metrics as described in the literature.3–5 Others wanted to concentrate on taking a more customized approach regardless of whether we had preexisting research behind the metric. We finally decided to use metrics that were easiest to gather and, of course, made sense for the desired objective. We also considered the challenges posed by the other perspectives.

For example, our customers preferred C to other programming languages, so we needed a strong competency in it. We determined that staff had varying levels of experience, which had resulted in coding styles that fell below our standards and coding practices that interfered with timely maintenance. We decided to hold an internally led training session to collaborate and equalize the staff's knowledge. We addressed poorly executed code and design reviews by holding a series of weekly sessions to analyze our processes and create a consensus solution on review procedures. For our final technical initiative, we believed that supporting engineers in pursuing training in the Software Engineering Institute's Personal Software Process would return value to the firm by increasing productivity and engineer retention.

The technical initiatives aside, the software group's most critical initiative was to start a series of training classes with help from several marketing personnel. Together, we demonstrated what each software component did and described the work it eliminated for the customer. With the assumption that our in-depth knowledge of the ICs would let us produce software for less than the customer could, it followed that we could easily translate our financial investment in the software into customer savings. By understanding this benefit, sales and marketing were able to establish a pricing schedule and a marketing plan with collateral material (data sheets, product notes, and so on) to start realizing this new source of revenue.
Metrics and targets

For each objective, we established metrics and associated targets that we believed were visible to the firm and easy to gather from existing systems. As mentioned earlier, our BSC effort met with a wide spectrum of responses. The metrics and targets shown are not the first that we formulated. Because we determined that key resistance would come from sales and marketing and because we felt that our engineering outlook could bias the elements, we enlisted help from sales and marketing to add cross-functional ideas. We worked with them to create objectives, metrics, targets, and initiatives that they felt were important for reaching the goal. By creating the BSC in an iterative, collaborative manner with them, we made it easier to gain their sustained support.

Critical factors

Organizations successfully use BSCs to create a culture of continual focus on strategy formulation, measurement, and revision—what Kaplan and Norton call a "strategy-focused organization."1 These are key elements for creating a strategy-focused organization:
■ Mobilize change through executive leadership. Building a strategy-focused organization involves significant cultural changes. Organizational change is an evolutionary process.6,7 Executive commitment is critical to maintaining such a program's momentum.
■ Make strategy a continual process. A strategic plan cannot succeed if strategic planning is a one-time activity. You need feedback loops to constantly focus attention on and reevaluate strategy and metrics.
■ Align the organization to the strategy. This requires reviewing current organizational structures, policies, and procedures to ensure consistency with the strategic plan. It might also require reorganization or redefining roles.
■ Make strategy everyone's job. You can accomplish this through training and awareness and by deploying the scorecard down through the organization. You must explicitly explain each group's connection to the strategic plan. Departments and individuals must align their actions to support the strategy.
■
ments and individuals must align their actions to support the strategy. Link strategy to operational tasks. Use tools such as strategy maps and matrix scorecards to link and align strategy with the operational tasks that employees perform.
Common pitfalls Most organizations that adopt a scorecard fail to reap the rewards they expect, and some common themes stand out:
■ Failure to communicate and train. A scorecard will work only if an organization clearly understands and supports it. Without effective communication throughout the organization, a Balanced Scorecard will not spur lasting change and performance improvement.
■ No accountability. Accountability and high visibility help drive change. This means that each metric, objective, and initiative must have an owner. A perfectly constructed scorecard will fail if no one is held accountable for performance.
■ Measures that do not focus on strategy. A common problem is that an organization will adopt new nonfinancial measures but fail to align the measures adequately with strategy. According to Norton, "The biggest mistake that organizations make is thinking that the scorecard is just about measures. Quite often, they will develop a list of financial and nonfinancial measures and believe they have a scorecard. This, I believe, is dangerous."1
■ Measures tied to compensation too soon. In most cases, compensation should be linked to the BSC. However, it can be a mistake to do that too soon in the scorecard's life cycle. Most BSCs are revised several times during their lifetimes. You must take care to ensure that compensation linkages change as the BSC changes.
■ Employees not empowered. Although accountability can provide strong motivation for improving performance, employees must also have the authority, responsibility, and resources to effect change. Otherwise, they will not remain committed to the strategic plan's success. You must also provide resources, and fund initiatives, to achieve success.
■ Too many initiatives. When driving a cultural change initiative such as the BSC, you should ensure that each goal is important. Stress alone, created by cultural changes, can itself cause the plan to fail. This problem only intensifies if the BSC contains trivial items, or items that lack consensus among the management team.
It's too soon to determine if we've achieved our overall goal. That will take several more years. However, we've observed progress in the desired direction on all initiatives. Monitoring and reporting progress (and problems) has been important in creating continuing support from various levels of the organization. We found it helpful to bring people from outside the team to review our BSC with an independent and critical eye. Like any change initiative, our BSC eventually became the team's status quo.

Additionally, we learned that the BSC must be a dynamic document. As internal and external conditions change, you must review your goals, initiatives, metrics, and objectives and should involve as many of the affected groups as possible. Creating a BSC is a work of self-discovery, as it forces you to define its role and contributions to the organization. No set formula exists to create a BSC's various elements. You should tailor these for the greatest impact with the lowest burden on your organization. Finally, we learned that you can use the BSC to successfully communicate your goals and methods to people with diverse backgrounds and achieve a desired organizational change.

References
1. R.S. Kaplan and D.P. Norton, "The Balanced Scorecard: Measures that Drive Performance," Harvard Business Rev., Jan./Feb. 1992, pp. 71–79.
2. R.S. Kaplan and D.P. Norton, "Having Trouble with Your Strategy? Then Map It," Harvard Business Rev., Sept./Oct. 2000.
3. R. Grady, Practical Software Metrics for Project Management and Process Improvement, Simon and Schuster, Englewood Cliffs, N.J., 1992.
4. R. Grady, Successful Software Process Improvement, Simon and Schuster, Englewood Cliffs, N.J., 1997.
5. V. Basili, G. Caldiera, and H. Rombach, "Goal Question Metric Paradigm," Encyclopaedia of Software Eng., vol. 1, J.J. Marciniak, ed., John Wiley, New York, 1994, pp. 528–532.
6. S.P. Robbins, Organizational Behavior, Prentice-Hall, Upper Saddle River, N.J., 2001.
7. P. Senge, The Fifth Discipline: The Art and Practice of the Learning Organization, Doubleday, New York, 1990.

About the Author Steven Mair has more than 18 years' experience in developing embedded and other software systems and managing software organizations, most recently as Director of Software Engineering at Magis Networks and Division Director of Software Engineering at Conexant Systems. His research interests include software operational management and organizational behavior. He received a BSEE from California State University, Northridge, and a master's degree in business administration from the University of Maryland. He is a member of the Software Engineering Institute, ACM, IEEE, Project Management Institute, and American Society for Quality. Contact him at [email protected].
focus
the business of software engineering
Integrating Business and Software Development Models Christina Wallin, Fredrik Ekdahl, and Stig Larsson, ABB
Today, software product development cannot generally be regarded as successful. Only about one in four software development projects is completed on time and on budget, with all the features and functions originally specified.1 Running a software project is a complex task in itself; making the resulting product a commercial success is even harder.
By mapping business decision gates to software development milestones, you can relate technical life-cycle models to business decision models. The authors mapped Unified Process, Synch-and-Stabilize, and Extreme Programming life-cycle examples to the ABB Gate Model for product development projects.
Software development life-cycle models and business decision models contribute to the control of product development in different ways. However, both kinds of models have limitations. SDLMs do not ensure that resources are used in the right projects, that the market is available, or that the organization is ready for a release. Similarly, business decision models do not support software development, so development might take place with uncontrolled changes and inadequate time for verification and validation. Thus, successful software product development requires that the project use both a business decision model and an SDLM. This requires careful definition of the interfaces, or mapping, between the two model types, as well as to any other model related to software product development. The ABB Gate Model, presented here, supports decision makers with business-relevant project and product informa-
tion, increases mutual understanding and improves visibility between decision makers and developers during product development, educates decision makers in software engineering problems and solutions, and educates developers in business issues. How business issues hurt software development Many business-related problems face software product development. First, stakeholders typically scrutinize their software development projects from a business perspective only at startup, if at all; they do not revisit the business case over the course of the project. Often, they do not identify market, technology, or schedule problems until the project has gone astray. Second, because new technology drives software product development, stakeholders typically examine a project's business aspects less carefully than the technical solu-
tions. This is of course a serious mistake, especially when a project is targeting a market that is new to the organization and when knowledge about this market is limited. Unfortunately, limited knowledge often leads to even less activity in trying to understand the business aspects. Third, decision makers who don’t understand the basics of software engineering change the target continuously without looking at resulting costs and delays. This is probably a result of the common view that developers can easily adapt software to last-minute requirements. However, decisions to change or add new functionality often overlook the tasks that go along with code changes—for instance, changed architecture and design documentation, changed user documentation, regression testing, redesign of test cases for verification and validation, and changes to training, marketing and support material, and so on. Finally, project managers can feel squeezed in the middle. Typically, decision makers want facts as soon as possible, but ask for finalized documents. For example, a manager might want to know if the selected technical solution is feasible, but instead asks if the detailed design document is ready. On the other side, developers might not think they can provide enough information when the decision makers want it. They often think that business decision models imply a waterfall-like development life cycle, so those who want to use modern development practices might resist using any such model. Also, modern practices such as the Unified Process and Extreme Programming require iterative and incremental development, which leads to late finalization of documents. Business decision models Delivering a product with expected quality and functionality, on time, and on budget is seldom enough to achieve commercial success. It is at least equally important to choose the right product development projects and to have a mechanism for closing down projects that no longer show sufficient potential. Good business decisions are based on facts elicited through careful evaluation of key elements of the business situation—for example, market, competitors, technical feasibility, strategy, intellectual property, product quality, and resource availability.
To facilitate the collection of relevant facts in time to make business decisions, many organizations use a well-defined process. Several well-known business decision models exist, of which Cooper's Stage-Gate Process Model is one example (see the sidebar for more information).2 Typically, they comprise a number of different development stages separated by decision points, often referred to as decision gates. The gates represent distinct decision points at which stakeholders decide the project's future. Software development life-cycle models Several SDLMs support software development projects. When correctly implemented, they help projects deliver products with expected quality and functionality, on time and within budget. Most SDLMs divide the development life cycle into several phases, generally three to five. However, there are almost as many names for these phases as there are SDLMs (see Figure 1). Phase names typically indicate the main activity performed in that phase and do not distinguish the concerns of project management and software development. This article uses the life-cycle phases defined in Microsoft's Synch-and-Stabilize Life Cycle,3 the Unified Software Development Process,4 and Extreme Programming5 as examples. These models are commonly known, and their life-cycle phase names cannot be confused with software development activities such as analysis, design, implementation, verification, and validation, as described in the traditional waterfall model. Moreover, these three approaches' phase names indicate the product's maturity rather than the development activities performed. In most SDLMs, passing a major milestone marks the transition from one development phase to the next (see Figure 2). Of the three models just listed, only XP does not mention milestones. The Unified Process uses the three anchor-point milestones that Barry Boehm defined6 (Life-Cycle Objectives, Life-Cycle Architecture, and Initial Operational Capability) to mark each phase's conclusion and the stakeholders' commitment to move ahead. The UP also adds a Product Release milestone that concludes the Transition phase. Synch-and-Stabilize identifies three major milestones, each concluding a phase.3
Cooper's Stage-Gate Process Model
Cooper's Stage-Gate Process Model, shown in Figure A, breaks the development project life cycle into six stages and five gates. Each stage consists of a set of parallel activities, of which software development is only one, performed by different functions within an organization. Each activity in each stage is designed to gather information needed as input to the upcoming business decision gate and to reduce risks associated with the development project.

The stage before the actual development project starts, the Discovery stage, begins with an idea for a new product or product version. Generally, a product manager collects the information needed as input to the first business decision gate. Gate 1, the Idea Screen decision point, follows the Discovery stage and is the first occasion where decision-makers commit resources to the product development project. The product manager presents the idea to the stakeholders from development, marketing and sales, service and maintenance, manufacturing, training, and so on, who together decide whether to start a development project based on the idea.

During the first product development stage, the Scoping stage, the main objective is to assess the market and technology to identify key product requirements. Gate 2, the Second Screen decision point, essentially repeats the previous gate, although with more rigorous requirements and based on the information gathered during the Scoping stage.

The second development stage, Building the Business Case, includes a detailed investigation that clearly defines the product, market, organization, development project, competitors, intellectual properties, and so on in preparation for deciding whether developing the product is feasible. Gate 3, the Go to Development decision point, is the gate prior to the Development stage and the last chance to stop the project before the organization makes significant investments. A go decision at this point represents both a financial and resource commitment to the project as well as an agreement on the product and project definition established during the Building the Business Case stage.

The third stage, Development, mainly deals with the product's physical development according to the product and project definitions. The deliverable from this stage should be a product ready for beta testing. Gate 4, the Go to Testing decision point, is based on a postdevelopment assessment to ensure that the product and project are still attractive to the market and to the organization. A go decision at this point is an agreement on the verification and validation plans and also on marketing and operation plans.

In stage four, Testing and Validation, the product is verified and validated in-house or at friendly customers' sites. Finally, Gate 5, the Go to Launch decision point, is the last point at which the project can be killed and the product cancelled. A go decision here is an approval of the marketing and operation plans and the startup of full production or operation. The final stage, Launch, includes, for example, activities for marketing and sales and for production or operation.

Figure A. Cooper's Stage-Gate Process Model. [The figure shows the flow: Discovery; Gate 1 (Idea Screen); Scoping; Gate 2 (Second Screen); Business case; Gate 3 (Go to Development); Development; Gate 4 (Go to Testing); Verification & validation; Gate 5 (Go to Launch); Launch.]
Figure 1. Phase names in four software development life-cycle models: waterfall, Synch-and-Stabilize, Unified Process, and Extreme Programming. [The figure lists each model's phases. Classic waterfall: requirements specification, functional specification, design specification, code & test, verification & validation. Microsoft Synch-and-Stabilize: planning, development, stabilization. Unified Process: inception, elaboration, construction, transition. Extreme Programming: exploration, planning, iterations to release, productization.]

Figure 2. The Synch-and-Stabilize and Unified Process milestones. [The figure positions the Synch-and-Stabilize milestones (vision approved, project plan approved, schedule complete, feature complete, scope complete, visual freeze, zero bug release, release) and milestones 1–3 at the end of its subprojects, and the Unified Process anchor-point milestones (Life-Cycle Objectives, Life-Cycle Architecture, Initial Operational Capability, Product Release) at the ends of its phases and iterations.]
Both UP and S&S also use minor milestones; in the UP, each iteration ends with a minor milestone, whereas S&S uses a number of predefined minor milestones concluding various subprojects. Mapping business decision models and SDLMs A milestone is a scheduled event that marks the completion of one or more important tasks. The project manager uses milestones to measure and show achievements and development progress. At a milestone, a predefined set of deliverables should have reached a predefined state to enable a review. A gate, on the other hand, is a go-or-no-go decision point in the product development cycle, where all relevant business facts are brought together.2 At each gate, the decision maker uses the results from the preceding stage’s activities together with a decision criteria checklist as input to the business decision. Developers should not treat gates as software development milestones (see Figure 3), but they must pass some key milestones to be able to supply the decision maker with the required information in time before the gate. These important milestones could be called pregate milestones; they reflect the mapping between the business decision model and the SDLM. Of course, pregate milestones are not only in the software development plan but also, for example, in
plans for marketing and competitor management, business, intellectual property management, training, customer service, quality assurance, hardware development, and so on. The project should pass all pregate milestones in all plans before the corresponding business decision at the gate.

Mapping a business decision model's gates to an SDLM's major milestones is straightforward (see the examples in Figure 4). A go decision at the first gate is a prerequisite to start software development as well as all the other activities. At this point, we can start the project if we decide that the intended product is a strategic fit, attractive to the market, and technically feasible. We can then use major milestones in the software development life cycle as pregate milestones corresponding to the business decision gates. If the gates outnumber the major milestones, we must select suitable minor milestones as pregate milestones.

Figure 3. A pregate milestone's relation to a gate. [The figure shows a pregate milestone reached near the end of a development phase, followed by gate meeting preparation and the gate assessment at the gate, after which the next development phase begins.]
Figure 4. Comparing Cooper's Stage-Gate Model to the Unified Process and the Synch-and-Stabilize model. [The figure aligns Cooper's Gates 1 through 5 with the Unified Process phases (inception, elaboration, construction, transition), their iterations, and major milestones (Life-Cycle Objectives, Life-Cycle Architecture, Initial Operational Capability, Product Release), and with the Synch-and-Stabilize phases, subprojects, and milestones.]
Mapping SDLMs and the ABB Gate Model To raise the quality of its product development business decisions, ABB developed the ABB Gate Model,7 a project control model reminiscent of Cooper's Stage-Gate. The ABB Gate Model consists of eight gates: gates 0 through 5 are true decision gates where the project can actually be canceled; gates 6 and 7 are used for follow-up and for a retrospective investigation of project experiences.

Mapping the UP major milestones and the ABB Gate Model gates is almost as straightforward as mapping to Cooper's Stage-Gate. It is only before ABB's Gate 3, Confirm Execution, that the UP is missing a pregate major milestone. Here, project management can choose a minor milestone indicating the finalization of an iteration or subphase4 as a pregate milestone. Table 1 summarizes the requirements for the ABB Gate Model gates and for the UP's major milestones.

Table 1. Mapping the ABB Gate Model gates and the major Unified Process milestones
ABB gate | Gate's purpose | UP's corresponding major milestone | Milestone's content
G0 | Agree to start project | – | Project start
G1 | Agree on project scope | Life-Cycle Objectives | Software's scope set
G2 | Agree on requirements and project plan | Life-Cycle Architecture | Stable architecture and planned software development schedule, staff, and cost
G3 | Confirm consensus regarding proposed technical solution | – | Minor milestone should be selected
G4 | Agree on the product's readiness for piloting and market introduction | Initial Operational Capability | Software ready for beta testing
G5 | Agree on release | Product Release | Software's formal release
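The mapping lends itself to a simple mechanical check. The sketch below is our own illustration in Python; it is not part of the ABB Gate Model or of the article, and the Gate 3 entry simply assumes some project-chosen minor milestone. It records the pregate milestone required before each gate and verifies that it has been passed before a gate meeting is scheduled.

```python
# Illustrative sketch (not from the article): which pregate milestone must be
# complete before each ABB Gate Model decision gate, following the Table 1
# mapping to the Unified Process major milestones.

PREGATE_MILESTONES = {
    "G0": [],                               # project start; nothing to check yet
    "G1": ["Life-Cycle Objectives"],
    "G2": ["Life-Cycle Architecture"],
    "G3": ["Iteration C1 complete"],        # hypothetical minor milestone chosen by the project
    "G4": ["Initial Operational Capability"],
    "G5": ["Product Release"],
}

def ready_for_gate(gate: str, completed: set[str]) -> bool:
    """A gate meeting should only be scheduled once every pregate
    milestone mapped to that gate has been passed."""
    return all(m in completed for m in PREGATE_MILESTONES[gate])

# Example: the project has passed LCO and LCA but not yet IOC.
done = {"Life-Cycle Objectives", "Life-Cycle Architecture"}
assert ready_for_gate("G2", done)
assert not ready_for_gate("G4", done)
```

In practice the same check would also cover the pregate milestones in the marketing, quality assurance, and other plans the article mentions, not only the software development plan.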
Mapping the ABB Gate Model to XP resembles mapping to the UP but adds one complication. Because the time for planning in an XP project should be short, separating Gate 1 and Gate 2 is unnecessary (see Figure 5). (The recommended time for the planning phase in XP projects is about one week.) The proposed solution is to combine Gate 1 and Gate 2 and use the end of the planning phase as the point in time for a combined Gate 1/Gate 2.

Figure 5. Mapping ABB Gate Model gates and Extreme Programming milestones. [The figure maps ABB Gate 0, a combined Gate 1/Gate 2, Gate 3, Gate 4, and Gate 5 onto the Extreme Programming phases (exploration with prototyping spikes, the planning game, iterations to release, and productization) and their milestones, such as confidence achieved, release planned, ready for production, certify for release, and go into production.]

When ABB first introduced a common decision model for product development, one of the developers' most common concerns was that adapting to the ABB Gate Model seemed to force the projects to use the waterfall development model. To clarify this issue, ABB made available to its developers all the mappings this article describes. So far, the results are promising. Decision makers, project managers, and software engineers have reacted well to these mappings. Initial results show enhanced communication between the developers and the decision makers, increased focus on business aspects, and increased understanding of the differences between the models. Current work focuses on making the mappings more widely known and used throughout ABB. By making these mappings available and broadly understood, ABB expects easier adaptation to future SDLMs, with new approaches to software development.
Acknowledgments We recently presented a more detailed and theoretical version of this article at the 28th Euromicro Conference 2002. It is available in the proceedings, published by the IEEE Computer Society.
References
1. J. Johnson et al., "Collaborating on Project Success," Software Magazine, Feb./Mar. 2001, www.softwaremag.com/archive/2001feb/CollaborativeMgt.html.
2. R.G. Cooper, Winning at New Products, 3rd ed., Perseus Publishing, Cambridge, Mass., 2001.
3. M.A. Cusumano et al., Microsoft Secrets, Simon & Schuster, New York, 1998.
4. I. Jacobson et al., The Unified Software Development Process, Addison-Wesley, Boston, 1999.
5. K. Beck, Extreme Programming Explained, Addison-Wesley, Boston, 2000.
6. B. Boehm, "Anchoring the Software Process," IEEE Software, vol. 13, no. 4, July 1996, pp. 73–82.
7. ABB Gate Model for Product Development 1.1, tech. report 9AAD102113, ABB/GP-PMI, Västerås, Sweden, 2001.
About the Authors Christina Wallin is a research engineer at ABB Corporate Research, Sweden, and helps
support software process improvement at several ABB sites. She is also a PhD candidate at Mälardalen University. Her research interests include management involvement in software development. She received her MSc from Luleå University of Technology, Sweden. Contact her at Mälardalen University, Computer Science and Engineering Dept., SE-721 23 Västerås, Sweden;
[email protected].
Fredrik Ekdahl heads the software engineering process group at ABB Corporate Research. He received his MSc in industrial engineering and management and a PhD in quality engineering and management from Linköping University. Contact him at ABB Corporate Research, SE-721 78 Västerås, Sweden;
[email protected].
Stig Larsson is responsible for product development processes for the ABB group. He received his MSc in electrical engineering from the Royal Institute of Technology in Stockholm. Contact him at ABB Corporate Research AB, SE-721 78 Västerås, Sweden; stig.bm.larsson@se.abb.com.
focus
the business of software engineering
Business-Driven Product Planning Using Feature Vectors and Increments Brian A. Nejmeh, Instep Ian Thomas, Ian Thomas and Associates
Agile development processes emphasize prioritizing product features, but most don't tell you how to do it. This product-development framework offers a method of defining release increments and assessing product features to deliver the highest return on development investment.

Software product development organizations are frequently hampered by all-or-nothing approaches to product planning driven by technical issues. We present a method for business-driven product planning based on the notions of feature vectors, feature levels, and release increments. This method provides a decision framework within which to assess the range of possible combinations of features, times, and costs and to define release schedules with desirable content, predictable costs, and firm schedules. This incremental product-planning method provides significant product release flexibility crucial to both start-up and mature software companies.

To illustrate how to use our framework, we present a case study of a company developing software and hardware Internet access equipment. The specific features and values we cite are representative, but, to protect proprietary information, they are not the actual values.

This article focuses principally on software product companies engaged in developing software products for commercial sale. We believe our incremental product-planning method would prove a critical success factor for any software product organization, especially start-ups determining initial and evolving product release functionality. More mature organizations have found the method useful in planning the evolution of more established products. However, our
framework also has relevance for software professionals who work in corporate IT environments and are planning a series of releases of their information systems, software companies using an application service provider (ASP) model, and systems companies whose systems contain a significant software component. In general, our product-planning method is relevant to anyone involved with selecting a balance of product features to maximize business value to the development organization. Commercial software product-planning challenges From the perspective of software product planning, we live in times of unprecedented haste. In the mid-1990s, commercial software enterprises typically measured product release cycles in one- to two-year increments. Today, many factors are driving
product release cycles to months instead of years. Competition, for example, has become a critical force in product planning. Competition takes many forms, including
■ Time to market. A study of the technology industry found that products that come to market six months late but within budget earn 33 percent less profit over a five-year period compared to products out on time.1 The study also found that bringing a product to market on time but 50 percent over budget cuts profits by only 4 percent.
■ Agile new entrants. Powerful software development frameworks such as J2EE (www.java.sun.com/j2ee) and .NET (www.microsoft.com/net) and the open source movement2 have facilitated rapid development of competitive products, giving birth to formidable competitors in most software markets. In addition, the low-cost, broad deployment, and product feedback options that the Internet affords enable unprecedented levels of timely product deployment, feedback, and revision.
Many product release plans depend on a single definition of the target release’s contents. This makes it difficult to reduce the product scope to meet delivery schedules, to respond to changes in market conditions, or to launch new products in the case of a start-up. Rather than defining incrementally releasable products, most product plans tend to focus on monolithic, large-grained releases. However, more than ever, today’s product planners must preserve flexibility as they define products—both in the product’s functionality and in the release calendar. In addition, the heightened competition means that companies must maximize the value of their development effort for new releases. The goal of software product planning should be to maximize the product’s value within available resources—that is, to deliver the highest return on development investment, or ROI. Meeting the challenges These challenges have generated several responses, with evolutionary and agile development methods playing leading roles (see www.gilb.com).3 An agile process is a
development approach that is adaptive rather than predictive in nature, and people-oriented rather than process-oriented;3 Extreme Programming is one well-known example. This article complements some of the fundamental premises of agile process methods, including
■ Incremental and highly iterative development
■ Frequent product releases
■ Development priority and sequencing driven by business value, as expressed by business stakeholders
Both agile and evolutionary process methods emphasize close contact between the development team and stakeholders. However, these methods assume that stakeholders can communicate business requirements effectively and reorient development as increments are released. In addition, these methods provide no guidance on several key issues:
■ They don't address how the development group should interact with the key stakeholders and the product manager, who determines the product definition in most companies.
■ They don't describe how the product manager and the development group can produce consistent business feature requirements and priorities that will satisfy multiple customers.
■ They don't suggest techniques for the product manager to balance the often conflicting demands of multiple customers.
■ They don't provide techniques for product managers to assess the value of proposed features.
In some of the agile process approaches, effective requirements prioritization is implicit. Tom Gilb and Karl Wiegers have addressed this issue explicitly (see www.gilb.com).3,4 We extend these approaches using the concept of explicit functional levels for each product feature. We extend the established notion of a release increment by relating the increment's definition explicitly to the product characteristics a product manager is charged with managing. Our approach also emphasizes that the product manager performs product planning to help the company meet its objectives—a goal
that might be different from maximizing a release’s value to customers. Effective product planning cannot separate the product manager’s planning processes and techniques from the development team’s development approaches. The product-planning framework we propose integrates well with incremental development methods and is a principal contribution of our work. Our approach builds on some of the models used at Microsoft.5 Product-planning and development processes Solutions for effective software development inhabit the broader context of systematic product planning. However, software product organizations often have difficulty making decisions that transcend the purely technical, especially when it comes to product planning. Nevertheless, thinking beyond purely technical issues is critical to these organizations’ business success. Thus, besides integrating with evolutionary and agile development techniques, our product-planning framework incorporates business and economic drivers such as time to market and ROI for engineering effort. It also emphasizes a cross-functional process that includes senior management, product managers, software engineers, and other key business functions. The product manager In a commercial-product setting, a product manager coordinates the activities associated with developing and commercializing a product or product line. Product manager responsibilities include product planning, partnerships, pricing, product rollout, and so on. In larger companies, the product manager often chairs a product-planning working group consisting of domain experts and representatives from sales, marketing, development, services, and support. In many companies, the product manager is responsible for the product line’s overall well-being and vitality, including profitability. For product planning, the product manager heads the process of considering all inputs and producing a release increment plan. The company’s business plan, strategy, and budget determine the process’s constraints. Inputs to product planning include data and perspectives from many stakeholders: prospects, customers, win-and-loss reports, problem re-
ports, enhancement requests, sales and field feedback, competitive and market analysis, new technologies, development, and customer support. The principal output from the planning process is a rolling release plan that identifies a sequence of release increments with defined functionality and target dates. Often, trying to satisfy all the stakeholders with the contents of a release leads to plans that are too complex, too costly, and too late. Our approach lets the product manager obtain input from the stakeholders so that he or she can make decisions to define release increments that provide real value and can be delivered on time and within budget. Incremental product planning: Key concepts A release objective is a high-level statement of a major goal for a product release. Typically, each release has two release objectives. The release objectives drive overall release contents and act as a filter for candidate release features. At least one release objective should have obvious relevance to the end-user community. Release objectives might relate to functional areas, usability improvements, support for third-party products and standards, workflow improvements, performance enhancements, product stabilization, and so on. In our case study, the principal objectives for the initial release were to achieve data transmission performance comparable with existing and competitive products and a consumer cost no greater than existing or competitive products. A feature category provides a grouping mechanism for related functional capabilities. A feature is an individual functional capability that belongs to a feature category. Effective product planning requires identifying possible features over which planning decisions are made. We begin by breaking the product’s behavior into feature categories. In our case study there were seven categories— five hardware, two software. The seven categories included a total of 35 features, 23 of which were primarily software related. For one of the software feature categories—software performance—we identified protocol implementation and bandwidth management as two features. For the software usability feature category, we identified software upgradability features for different components, a bandwidth management feature, and operations tools that managed the opera-
tional system or simplified its installation. A feature level is a clear functional level that a feature implementation can achieve. Together, a feature and its levels define a feature vector. Figure 1 illustrates these key concepts using a graphical representation of two feature vectors from the case study, showing the feature levels for each (we've omitted the actual component names for confidentiality). In our case study, the feature levels for Component 1's remote software configurability feature were
1. None
2. Complete replacement of all software
3. Replacement of components without rollback to the previous configuration
4. Replacement of components with rollback
5. An extensible, configurable software infrastructure/architecture (see Figure 1)

Figure 1. Two case study feature vectors: feature levels of software configurability for Component 1 and Component 2. The axes show how the feature levels are different for each of the features. [For Component 1, the levels run from none through complete replacement of all software and component replacement without and with rollback to an extensible component set; for Component 2, from none to complete replacement of all software.]
capabilities of competitors’ products. In addition, the difference between two feature levels should translate to a meaningful engineering development increment and represent an increase in perceived business value. A feature’s levels are often a monotonically increasing sequence or a partial order, as in the Component 1 example. On the basis of intuition, the product manager orders the levels for increasing value and levels of complexity. Each feature in the feature vector space should have between two and six levels defined. In our case study, the average number of feature levels for both hardware and software features was between four and five. Increases in functionality between levels should be modest—that is, implementations taking on the order of four to 10 weeks. Table 1 describes the feature levels for the Component 1 software configurability feature vector. In practice, we use a single spreadsheet for all such features. A release increment is a collection of {feature, level} pairs that define the contents of
the product release.

Table 1. Feature description table: Component 1 software configurability feature vector
Level | Description | Related features
All levels | This feature defines the support offered for remote replacement of the software in Component 1. |
1. None | Component 1 has no capability for remote software change. |
2. Complete replacement of all software | The application software in the component can be replaced completely. This solution does not support rollback. |
3. Component replacement, no rollback | The application software is decomposed into distinct components, and each of these can be independently replaced. This solution does not support rollback to restore the component being replaced if replacement fails. |
4. Component replacement with rollback | As above, but with rollback if replacement fails. |
5. Extensible component set | The set of distinct components in the application can be extended, with the additional components loaded and replaced remotely. | Operations tools and Feature Level 2 required
Table 2. Increment definition using feature levels (Increments 1 through 5 form the table's columns)
Component 1 configurability: None; complete replacement of the software, no rollback; complete replacement of the software, with rollback; component replacement, no rollback; component replacement, with rollback; component interrogation for software version
Component 2 configurability: None; complete replacement of the software, no rollback
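To make the feature-vector bookkeeping concrete, here is a minimal sketch, ours rather than the authors' (they manage this data in a spreadsheet), of a feature vector as an ordered list of levels and a release increment as a collection of {feature, level} pairs, with a check that successive increments never lower a feature's level. The Python names are illustrative only.

```python
# Illustrative sketch only; the article keeps this data in a spreadsheet.
from dataclasses import dataclass

@dataclass
class FeatureVector:
    name: str
    levels: list[str]          # ordered from least to most functionality

    def level_index(self, level: str) -> int:
        return self.levels.index(level)

component1_config = FeatureVector(
    "Component 1 software configurability",
    ["None",
     "Complete replacement of all software",
     "Component replacement, no rollback",
     "Component replacement with rollback",
     "Extensible component set"],
)

# A release increment is a collection of {feature, level} pairs.
increment_1 = {component1_config.name: "None"}
increment_2 = {component1_config.name: "Complete replacement of all software"}

# A sequence of increments should never move a feature to a lower level.
def monotonic(feature: FeatureVector, increments: list[dict]) -> bool:
    indices = [feature.level_index(inc[feature.name])
               for inc in increments if feature.name in inc]
    return indices == sorted(indices)

assert monotonic(component1_config, [increment_1, increment_2])
```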
A sequence of release increments represents increasing levels of functionality over time. In addition, by identifying feature levels of increasing functionality, such a sequence enables engineering to design the product intelligently, anticipating the changes.6 The features and levels are the lexicon over which our framework defines release increments. Our case study product included two software configurability feature vectors, one for each of two separate components. Table 2 shows our definition of how increments refer to feature levels: The feature column lists the feature name. The feature levels are distributed across the increment columns where the feature level is first implemented.

A release of a releasable product is a release increment judged to be worthy of field release. A release should show feature level improvements in six to 18 feature vectors. Typically, a product and its evolution include many more feature vectors, but many of these are either not consistent with the release objectives or are known to be of very limited market value. So, limiting improvements to six to 18 feature vectors has not proven difficult in practice, and the restriction allows for more focused analysis on the release's core features.

Feature vectors are not necessarily independent. It is important to capture interdependencies between features and their levels to ensure consistent decisions. When two features represent direct trade-offs, sometimes the customer makes the trade-off, on the basis of value; in other cases the product development
team makes engineering and related trade-offs during product development. For example, component size versus cost is a classic tradeoff. Identifying such feature trade-offs during the planning process lets the product manager solicit input on the desired coexistence of features and levels. Assessing the market ROI of feature levels The ROI is the business value to the company of a product development effort investment. Our approach is based on the principle that a product manager’s key role is to maximize the company’s ROI for the development group. This principle is different from the goal of maximizing business value to customers. To estimate ROI, the product manager must understand the business value and the effort involved in developing the features. Assessing feature business value In our approach, business value estimates for different features do not have to be absolute; they can be relative. To calculate ROI, we only need to know, for instance, that some feature is twice as valuable as another according to some units. Ultimately, we assess the business value of a particular feature level— not the entire feature. Feature level value assessment gives the product manager more control to maximize the business value received for a feature level’s development effort. Selecting candidate features and levels The product manager determines the features and levels to include in each of a prod-
uct’s release increments by assessing the relative value of features and levels. The initial decisions identify the features’ relative values. A typical software product might have hundreds of features. The objective is to reduce the number of features for which the product manager must identify and evaluate feature levels. (Later, we describe several techniques for achieving this reduction.) Using the features’ relative values, the product manager then selects a candidate subset of features for closer investigation. At this stage, it is unwise to exclude a feature only because its complete implementation would be too costly—a reduced functional level for that feature could be valuable to some customers. In our case study, the product manager chose about half of the 35 features as primary features. For each feature from the candidate set, the product manager identifies several candidate levels for release increments. The relative values of these feature levels guide the release definition. Practically, the choices of features and levels are often interdependent. Constituencies and value Several stakeholder constituencies participate in or influence value assessment. Current customers require continued product maintenance. They are also a constituency for enhancements that can be sold as add-on or complementary products (“upsell”). Prospective customers require certain product features, and selling to these prospects might require demonstration features that are of greater value during the sales cycle than in the product’s day-to-day use. In addition, other key constituencies that could influence overall feature and level value ratings include competitors with product offerings; industry and financial analysts; and internal sales, marketing, engineering, professional services, and support staff. In our case study, a key group of investors setting goals for the business formed one key constituency, and potential customers formed another. The investors controlled funding for the product development. The product manager identifies the value to the company of the constituencies’ product requests. Remember, we define value here in terms of the company’s business goals. This includes an understanding of what the customers value but is primarily focused on what is best for the company.
For example, the company’s goals might include increased revenue from new sales, revenue at higher margin, increased market share in a particular market, higher customer satisfaction and customer referenceability, and a clear functional differentiation with competitive products. Without understanding the company’s goals, product management cannot know which constituencies are most important and what to ask them. For example, prospective and current customer constituencies might figure differently in achieving the company’s business goals. If increased revenue from new sales is the goal, the current customer constituency with enhancement requests is unlikely to contribute much toward achieving the goal. However, the prospective customer constituency is very relevant to this goal in terms of their desired features, platforms, third-party interfaces, and so on. Ultimately, the product manager must weigh the importance of each constituency’s input. We define a feature level’s value as the relative likelihood that the appropriate target constituencies will respond to that feature level in a way that contributes to the company’s business goals. In our case study, there was a clear difference in value between two stakeholder groups. One group would have accepted a minimal software configurability feature level at an earlier date in order to validate the product concept, whereas another group demanded a higher level of configurability.
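As a rough illustration of weighing constituency input, the hypothetical sketch below combines each constituency's rating of a feature level using weights derived from the company's goals and then allocates the result to one of a few value ranges. The weights, ratings, and function names are invented; the authors describe their own calculation only as approximate.

```python
# Hypothetical sketch of weighting constituency input; numbers are invented.
CONSTITUENCY_WEIGHTS = {        # relative importance implied by the company's goals
    "investors": 0.5,
    "prospective customers": 0.35,
    "current customers": 0.15,
}

def company_value(ratings: dict[str, float]) -> float:
    """Combine per-constituency ratings (0-4 scale) for one feature level
    into a single weighted value to the company."""
    return sum(CONSTITUENCY_WEIGHTS[c] * r for c, r in ratings.items())

def value_range(score: float, ranges: int = 4, top: float = 4.0) -> int:
    """Allocate the weighted score to one of a few value ranges (1 = lowest)."""
    width = top / ranges
    return min(ranges, int(score // width) + 1)

ratings = {"investors": 4, "prospective customers": 2, "current customers": 1}
score = company_value(ratings)          # 0.5*4 + 0.35*2 + 0.15*1 = 2.85
print(value_range(score))               # falls in range 3 of 4
```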
The value survey We assess value by surveying constituencies. The product manager collects value data using survey techniques that include Web surveys, targeted article surveys, key customer and prospect interviews, industry analyst interviews, win-and-loss reports in competitive situations, and business process and value chain modeling. The surveys evaluate relative values of features and feature levels and tradeoffs between feature levels of different vectors. In our case study, for example, the feature vectors included one for device management and another for a specific aspect of operations management. We were interested in the relationship between these vectors’ feature levels and, for example, the trade-offs constituencies made between them. Several guides are available for creating useful surveys.7 For our focus on identifying November/December 2002
the respondent's relative value assignments, we have found several basic question formats useful:
■ Quartile technique. Constituents rate and evenly distribute the relative values of features and feature levels on a value scale.
■ Minimum acceptable feature levels. Respondent selects the minimum acceptable level from a presented list.
■ Trade-off value of feature levels. Survey determines relative values for two feature levels by presenting increments differing only by those feature levels.
■ Preference rankings. Constituents rank combinations of features, showing the points at which the trade-off value changes with increasing feature levels.
Collecting the data The surveys must maximize the information yield from a small survey effort. Product management can rarely get accurate information from a large range of constituency members, except for an existing product with a large user base that can be polled through Web surveys. Maximizing information from a small sample requires carefully selecting the surveyed population and adhering to survey design principles.7 The company’s business goals imply the relative importance of constituencies, which guides the choice of survey participants. Analyzing survey results The survey analysis converts the value preferences expressed by respondents into assessments of the value to the company of a feature or feature level. Every preference expressed in a response might not have the same value, because some constituencies carry more weight. To arrive at values relevant for the company, we begin by classifying the expressed value preferences. We then calculate their contribution to the company’s goals. First, we rank the features and levels. We recommend that the features and feature levels be split into four or five ranges of value. We assign each response (defining a feature and a level) to a range. (Confidentiality concerns prevent us from providing ranking examples from our case study here.) We then weight each feature and level value recorded by the importance of the constituency the response 40
represents. Although this involves a numerical calculation, the calculation is approximate. Finally, we divide the value to the company into four or five ranges and allocate the weighted feature and level values from the survey to the company value ranges to achieve a feature’s or level’s value to the company. Determining cost In general, product development management identifies the engineering effort for some defined feature and feature level by applying past experience along with more rigorous software-effort or cost estimation techniques, such as Cocomo and function point analysis. We have successfully used a modified Delphi technique, by which we ask each developer on a team to independently provide effort estimations for each individual feature level. We then bring the group together, compare responses, and facilitate a group consensus on the estimates. We measure the resulting effort estimate for feature levels in standard units such as the person-week. For each feature, the scope of each cost estimation is the incremental cost of moving one level along the feature vector—the effort it takes to implement a level, given that the previous level has already been implemented. Making this estimate for each individual feature level gives the product manager greater control over the contents of product increments. Because each estimate is incremental, the estimate of what is necessary to implement two or three feature levels for a given feature will probably be inflated. This occurs because additional engineering might be necessary to refactor one level’s solution to move on to the next level. (We manage the overestimate later in the process; it doesn’t reduce the approach’s value.) Level estimates include the required refactoring. Although it is often possible to provide a simple implementation of a low level for a feature, design and implementation complexity often increase at the higher levels. As higher feature levels are implemented, the existing implementations might need rework. The existence of a clear statement of levels in a feature vector format reduces the refactoring work without violating the agile process’s focus on minimizing the work required to achieve a level. Typically, a feature level has several possible minimal implementations; knowing later
implementation goals can help show which choice will reduce future refactoring.

Examining the ROI To illustrate the use of the value and cost analysis, Table 3 is an example feature level value-rating table, showing costs and values for three feature vectors: Component 1 software configurability, operations tools, and a hardware feature that bears directly on Component 1's cost. Table 3's columns are arranged from highest to lowest value; rows are arranged in increasing cost. Each cell contains the features whose value corresponds to the column heading and whose effort corresponds to the row heading. Table 3 also includes another reality of software product development—commitments the company has made to specific customers. (To preserve confidentiality, the actual contents of the cells do not reflect the real values in the case study.) The features and feature levels with the highest ROI to the company are those in the upper left-hand corner of the table, where the highest value and lowest cost intersect. The shading indicates the next features to consider—those in cells nearest to the upper left-hand corner.

Table 3. Feature level value rating table example
Columns run from high to low feature value (the shading indicates the next features to consider); rows are cost bands (Low—one to two person-weeks; Medium—two to eight person-weeks; Medium-high—eight to 20 person-weeks; High—20 to 50 person-weeks), plus a Commitments row. In the low-cost row, the high-value cell is Component 1 software configurability: replace all, and the low-value cell is Operations tools: feature level 2. The remaining entries are Operations tools: feature level 3; Component 1 software configurability: component replacement, no rollback; Component 1 software configurability: component replacement with rollback; and the commitments Availability of critical components from at least two suppliers and Component 1 cost (maximum for minimal system) less than $x.

Determining increments and scheduling Together, a set of features and a level for each of those features define a release increment. Each release increment is a candidate for external release. We advocate defining between two and six release increments between each external product release, depending on the external-release calendar. At the end of each increment's internal development and prior to any decision being made about which increment will become
the product release, it is important to reexamine the definitions of subsequent increments. Increments ensure that you have a release candidate at any time, provided you have completed the first increment. Time is the principal determining factor in how much goes into each release increment. Some Extreme Programming proponents recommend two-week release increment intervals.3 Richard Selby and Michael Cusumano report that Microsoft uses a target of six to eight weeks.5 The product's deployment situation can also affect the choice of increment interval. For example, ASP applications' rapid, regular release model allows short increment intervals. The increment content itself, selected to be consistent with the release objectives, depends on several factors:
■ Engineering ROI, shown using a table such as Table 3
■ Development resource constraints imposed by any required staff specializations
■ The need for a releasable product with mutually consistent features and feature levels
Other factors can also determine the increment size and contents. For example, increments also allow the development of an initial version of a product, which can reassure investors and customers funding part of the development. Early increments can also serve as sales tools. Engineering must review release increment definitions, as the increment might require less effort than the sum of the feature level implementation costs. This is because of the overestimation that results from the estimates referring to single feature levels in isolation.
About the Authors
Brian A. Nejmeh is the founder of Instep (www.instep.com), a product strategy firm,
and a faculty member in the School of Mathematics, Engineering, and Business at Messiah College. Instep’s offices are located in northern Virginia and suburban Philadelphia. His research interests include software product strategy, product planning, process modeling, and software engineering. He holds a BS from Allegheny College and an MS from Purdue University, both in computer science. Contact him at
[email protected].
Ian Thomas is a consultant specializing in product planning and product management and their relationships with
engineering processes and architectures. He has published in the fields of development environments and software processes. He has a BS from the University of Wales and an MS from London University. Contact him at [email protected].
The short-term increment definitions are more detailed than the definitions for the longer-term increments; this is because the content of future release increments is subject to revision as each release increment is completed. Combined, the increment definitions constructed for several increments form the product development schedule.
We have applied this incremental planning approach in a variety of commercial settings, ranging from start-up technology ventures to long-lived product lines. Several key lessons emerged during this experience. Defining product increments is only of value if you preserve your ability to release the product at any increment level, including an increment level achieved early in the development process. Version management tools and practices are critical for this purpose. Once a release increment is selected as the release candidate, tasks involving integration, final documentation, additional testing, rollout preparedness, and release readiness (alpha, beta, and so on) must be completed. We advocate performing these tasks only on the release candidate—not on every single increment. A company's product manager controls the flow of information to personnel in externally facing company roles, especially to those dealing with prospective and current customers. Thus, product management must control external expectations of the product and its release date. It is important to communicate throughout the company—especially with externally facing personnel—the incremental planning approach in use, including what information can and cannot be made public until authorized by the product manager. Our experience also suggests directions for important future work. We are investigating the development of tools to support
and guide people through the underlying product-planning framework. Clearly such a product-planning toolset would have to be well integrated with other complementary software development tools, including project management tools. Second, we are exploring improvements to the techniques we have developed for feature level cost and value analyses, as the accuracy of these analyses affects the results of our approach. As we expand our use of the technique, we will incorporate additional principles for performing these analyses. In addition, points of integration between our product-planning framework and the product development process bear further exploration. Finally, we are exploring ways to more effectively leverage the work of the conjoint analysis field in evaluating feature and feature level values.8 Conjoint analysis is currently used to analyze preferences for consumer goods. For example, the technique can illustrate the threshold where a customer will cease to pay additional amounts for increases in performance. We have used this technique, and it yielded useful information on tradeoffs between levels for different features.

Acknowledgments
We thank the IEEE Software reviewers and editors and Kevin Hickey for their many helpful comments.
References
1. A.K. Gupta and D.L. Wilemon, "Accelerating the Development of Technology-Based New Products," Calif. Management Rev., vol. 32, no. 2, Winter 1990, pp. 24–44.
2. E. Raymond, The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary, O'Reilly Press, Sebastopol, Calif., 1999.
3. M. Fowler and J. Highsmith, "The Agile Manifesto," Software Development, vol. 9, no. 8, Aug. 2001, pp. 28–32.
4. K. Wiegers, "First Things First: Prioritizing Requirements," Software Development, vol. 7, no. 9, Sept. 1999, pp. 48–53.
5. M. Cusumano and R. Selby, Microsoft Secrets: How the World's Most Powerful Software Company Creates Technology, Shapes Markets and Manages People, Free Press, New York, 1995.
6. D. Parnas and P. Clements, "A Rational Design Process: How and Why to Fake It," IEEE Trans. Software Eng., vol. SE-12, no. 2, Feb. 1986, pp. 251–257.
7. F. Fowler, Improving Survey Questions: Design and Evaluation, Sage Publications, Thousand Oaks, Calif., 1995.
8. A. Gustafsson, A. Herrmann, and F. Hubers, eds., Conjoint Measurement: Methods and Applications, 2nd ed., Springer-Verlag, Berlin, 2001.
For more information on this or any other computing topic, please visit our Digital Library at http://computer.org/publications/dlib.
focus
the business of software engineering
The Marriage of Business Dynamics and Software Engineering Ram Chillarege, Chillarege Inc.
Integrating market evolution concepts with software engineering processes can help us better evaluate process consequences and institute development processes that meet market needs.

Few technology disciplines have the emotion and belief wrapped around the issue of process that software engineering does. As a community of engineers, we are prone to discussions of exactness, and often these quickly disintegrate into arguments of right and wrong methods. Our heated debates about software process, models, and methods can make us forget, however, that a software engineering process is only as good as its ability to create products that meet market needs.
Businesses and markets are great opinion equalizers; because their survival is tied to people’s buying habits, they must account for emotion, rationality, behavior, and change. When we fail to realize that business dynamics drive software development—with cost, time, and product quality as outcomes—we make bad judgments. If these don’t negatively affect the software development organization, they usually affect the businesses that create the software, which is worse. To better understand how software process and product intertwine, we must focus on both software development fundamentals and market dynamics. We also need a realistic view of what is both humanly and technically possible. Evolution takes time, and implementing industry best practices won’t raise productivity levels if organizational resources can’t realistically support them. This article therefore takes a holistic
view of the software product business and offers guidelines for instituting development processes that match market needs and improve competitiveness.

Business dynamics
The software product business has been highly profitable in the past two decades. Several software product businesses demonstrated gross profit margins around 80 percent, returns unheard of in any other modern, legal industry, including transportation, finance, insurance, and health care. But the successes and profits have hidden many ills. When the good times pass, the consequences of poor engineering on efficiency, margins, and return on investment appear as surprises. Figure 1 shows a financial model commonly touted as a product development ideal.
Figure 1. A financial model of software product development. (The figure plots cumulative net earnings against the product life-cycle stages—entry, growth, stability, and sunset—showing successful and failed outcomes, with annotated examples: feature fight (WebSphere and WebLogic), defines a platform (Windows and DB2), and cash cow (DOS and PL/1).)
The chart shows a product's cumulative net earnings against a time line, with the classic "hockey stick" shape depicting an initial negative period of investment until payback, when the curve crosses the zero line, and then a meteoric rise following payback. Many successful products have met such an ideal, but of course no guarantees exist in the software business. Products commonly die out, and failure can occur at any stage over a lifetime that might stretch 20 to 30 years. Software business models and financial implications also differ based on whether the software is a consumer product, an outsourced development project, a service, or an internally funded application development project. Each presents stakeholders with different financial models, margins, and long-term possibilities.

Life-cycle stages
I divide a software product's lifetime into four stages: entry, growth, stability, and sunset. These stages help us characterize product content, market evolution, viability, and the overall metamorphosis that occurs as software grows from small teams and tens of thousands of lines of code to large teams and millions of lines of code. Projecting the stages against the financial curve lets us visualize a holistic business model where each stage marks a significant transformation in market forces, competitive positioning, and business advancement opportunities. In Crossing the Chasm,1 Geoffrey Moore discusses how technology product marketing evolves from introduction and market development among early adopters to product growth in mainstream markets. To truly understand how product life cycle affects process, however, we must recognize that separately, but in tandem, the
product code base, process, technology, tools, development team, and other attributes also evolve.
Entry. This marks an individual product concept's market introduction. Of course, other products within this market might launch at a later time. For example, Managing Your Money (MYM), an early product that defined the personal finance space, had few competitors in the entry stage, but the growth stage saw a new entrant, Quicken. Today Quicken dominates the space, with strong competition from Money, an even later entrant. These products are now arguably in the early stability stage following a shakeout in the late 1990s.
Growth. Moving from entry to growth represents a significant life-cycle shift. Software makers hope their products will either turn profitable or, better still, see strong growth in profitability. At the same time, the product and market move away from novelty toward more basic business values such as predictability and reliability. Feature wars erupt as competition grows and software makers seek to gain customers and market growth. Customers relate to bottom-line value, and products become part of the overall business structure. The growth stage can be long, punctuated by technology, infrastructure, and competition changes.

Stability. Products that reach this stage become legends. They represent a unique combination of business and engineering acumen that creates platforms and new market segments, supplants older products, and promises the sky. But stability, like so much in the IT business, is relative. A stable product can always be unseated. Linux, for example, is changing the operating systems business. Although dominant operating system makers continue to exude confidence over their platforms, responsible product managers lose sleep over the Linux threat.

Sunset. This stage sees no substantial business growth. Products can still make money, and some can even become cash cows2—provided manufacturers find ways to boost profit margins and don't merely focus on large market share when growth slows. The product creates value in its clientele's committed dependence, and manufacturers reward customer loyalty with high reliability and excellent service.
Table 1. Business value drivers in product evolution stages

Rank     Entry                        Growth           Stability        Sunset
High     Innovation, time to market   Features         Predictability   Reliability
Medium   Features                     Predictability   Reliability      Maintainability
Low      Predictability               Time to market   Features         Predictability
Product evolution. Products in the entry stage tend to be lightweight and have little or no legacy burden, but as they grow larger and more complex, once-small products become heavy platforms. We can visualize this change using the rough and ready lines-of-code measure. Although specific numbers vary widely, an entry product could easily survive with 10 to 40 thousand lines of code (KLOC). However, competition that drives a feature war might bloat the product to a few 100 KLOC during the growth stage. When a product reaches stability, 10 to 15 years later, it could have a million lines of code. Of course, such progression isn't always monotonic—developers often restructure successful products for performance and manageability and thereby reduce their overall size. But more often than not, average product size grows. Trimming a product significantly costs development resources that are often in short supply. Along with content changes, products often see major personnel changes as they evolve. The entry-stage innovator usually doesn't lead a product through its growth stage, nor do growth stage heroes negotiate the stability stage. This isn't merely a skills issue. Several factors conspire to change the environment, including weariness, skills, attrition, and geography.

Value shifts
A quantitative measure doesn't really give the full scope of change from one stage to another. Although the financial model affects life-cycle stages, it doesn't determine them. So what can we use to recognize these shifts and guide our actions? We can capture how products change between stages by recognizing shifts in what the market values. Over a product's life cycle, market values change, and different characteristics become dominant and drive business. Table 1 ranks market values by their relative significance within each stage and shows how these change over the product's life cycle. To some extent, contemporary beliefs influence product management, development, and marketing priorities. In the entry stage, innovation and time to
market help grab attention and create an image of leadership. Novelty often hides quality and performance shortcomings, but such a product’s market lead lasts only as long as no other major players exist. Values shift as other players enter the market and novelty becomes less significant. The growth stage begins a significant shift in market values’ relative ranking. Time to market takes a back seat to features as companies target products to a wider customer base. New customers especially want business enablement, standards, ease of use, and domain specificity. The end of the growth stage sees consolidation and shakeout of second-tier products. Growth slows, but the market is large and doesn’t resemble that of the early growth stage. Stability presents a different playing field with products vying for leadership, not just survival. The transition from growth to stability sees values shift from features to interoperability. Companies are also gauging how large the market can become and whether it marks the beginning of a new segment. The stakes are high and the players big, and classic business values such as predictability and quality reign. Among the value changes, several prove particularly important for our discussion. Innovation and predictability present an interesting comparison. Figure 2 shows how the relative rankings of innovation and predictability exchange positions over a product’s life cycle.
Figure 2. Innovation and predictability shift in importance over a product's life cycle. The x axis isn't to scale, and the growth stage can be much longer than it appears here. (The y axis shows relative value, from low to high; the x axis shows the product life-cycle stages: entry, growth, stability, and sunset.)
Table 2. Process attributes ranked by process model

Process attribute            Iterative   Spiral    Gated    Waterfall
Iterations per release       High        Medium    Low      Low
Fast feedback                High        Medium    Low      Low
Speed to change              High        Medium    Medium   Low
Scalability                  Low         Medium    Medium   High
Predictability               Low         Medium    High     High
Distributed development      Low         Low       High     High
Multiproduct integration     Low         Medium    High     High
Innovation, which ranks in the top tier during the entry stage, falls off to the lower tiers over the product's life cycle, and crosses paths with predictability somewhere in the middle of the growth stage. We could argue that a product's success hinges on its performance in this stage, and interestingly, neither innovation nor predictability is particularly significant here. What drives product success has varied throughout the software industry's evolution, and it's difficult to predict what the next driver will be. In the desktop era, features drove product growth and competition; in the dot-com era, liquidity reigned. Current trends appear to favor standards, but time will tell. Regardless of the driver, as the growth stage progresses, innovation and predictability clearly reverse positions.

Some values, such as quality, don't change as dramatically. The interpretation of a quality aspect might change, or a market's growing maturity might demand more of some aspect of quality, but a low-quality product rarely succeeds for long. That said, quality aspects such as reliability must keep pace with increased sales volume. I call this the large-volume effect. Although a company can massage a less reliable low-volume product through better customer service, soon the math does not work out to keep up with service and warranty costs, especially in markets that demand low pricing and, therefore, higher-volume sales. Companies must also carefully manage consumer product usability because quality depends almost entirely on how compelling the application is. The software industry's maturity now calls for higher entry-level quality. For example, thanks to the Internet, client seats can ramp up much faster than ever before. Products with zero-footprint clients that need only a Web browser to run become instantly available to a global user base, and a low-quality application will therefore fail in the marketplace long before it gains a foothold.
Process attributes
When we understand how market values shift over a product's life cycle, we can more easily identify process models with attributes that accentuate market values in a particular stage. As time passes and the values' relative order shifts, this model guides us in making appropriate process changes. Our software engineering community has debated process models forever, largely without the context of a business model. Consequently, each side argues its position, whether for more or less regimen, with the vehemence of a recent convert. As with any passionate position, a process argument's validity holds within bounds and a degree of isolation. To have a truly fruitful dialogue, however, we must develop a more global perspective of process models. Chapter 7 of Rapid Development3 provides an excellent discussion on the various process models' core concepts, and Managing the Software Process4 takes a practical view of models' execution aspects. Agile Software Development5 discusses the philosophies behind successful development teams.

As we analyze process strengths and weaknesses, we can compare them based on a common set of attributes. Table 2 rates four process models on such attributes as scalability and predictability. This relative ranking captures each process model's intrinsic strengths and weaknesses. Although such broad categorization helps us develop a mental model of what a process can offer, we can't forget that software is essentially a labor-oriented business. People implement a process as a series of tasks, roles, and activities, and management philosophy guides this implementation. Therefore, no realized process ever represents a true implementation of the conceptual model. No rules forbid adapting ideas from one model to another, and often a blend results that includes both strengths and weaknesses of the process attributes. The market doesn't care what process model developers use; the engineering and the process models are only a means to a business end. Customers care about the delivered product, its quality, and credibility of continued service. In the end, we see how "right" a process model is in how it helps a company meet market demands while creating a competitive advantage.
The marriage model
A good marriage between business dynamics and software engineering process arises from a good match between market values and process attributes. An engineering process should help deliver a product while accentuating market value. Although developers could deliver against every value the market demands, this proves unrealistic from a practical point of view. Often, resources are limited and opportunities transient, and the prudent development organization recognizes limitations and ensures its products satisfy at least those values most in demand. This is where the relative ranking of values proves essential. The lower-ranked values aren't irrelevant, but we clearly see the folly in delivering to a lower-ranked value at the cost of a higher-ranked value. That said, we must make these trade-offs within reasonable limits and using good judgment. In any marriage, people change over time, and what worked at one stage of the marriage doesn't always work in another. Likewise, as market values shift, software development organizations must examine whether the current process model delivers to current market values and offers the most efficient way to run development. It's possible to produce a product with any process model, albeit at the cost of efficiency and quality. A mismatch between process attributes and market values eventually creates cost, increases cycle time, reduces quality, and produces other pressures that can break down a once successful engineering operation.

Entry to growth
A low-burden development model best serves entry-stage products. Agile methods with fast feedback address market values such as novelty and speed. They also insure against investing too much in any one area when market direction and other factors remain unknown. The spiral model6 captures a more formal elucidation of the process. In iterative methods, however, the process can degenerate to one of continuous change where rework defeats the presumed productivity. Unless developers apply more sophisticated methods to measure and manage development,7 inefficiency will result. The growth stage brings greater demand for efficiency and features. Good product management becomes key, and as development teams grow and multiple development streams become common, projects need greater control and predictability. In this context, the less formal iterative methods that capture the early stage can easily break down and spin the development organization out of control. Those with informal processes must reverse their development philosophy to survive. Not all organizations consciously undergo this shift, but most find themselves doing it eventually—and far more painfully, later, if they survive.

Growth to stability
The growth stage often stretches a decade or more, and the engineering processes at each end look nothing alike. The product's credibility in the market depends on release predictability and quality, with sparing exceptions for those companies that can buy some concessions with market domination. New markets, product cross-compatibility, postrelease services, a services business for customization, and other factors increasingly demand predictability and discipline in the engineering process. We therefore rarely see any process other than a gated one at the end of the growth stage, and this greater regimentation becomes standard for the product's remaining life. We can thus conclude that the degree of choice for software engineering processes lasts only during the entry and early growth stages, which account for no more than a quarter of a product's lifetime. Process management becomes increasingly critical in the stability stage as product complexity and thus engineering demands grow. Merely throwing people at the problem is rarely an affordable or effective solution. Most process models are no more than activity models, and conformance is hard to drive in a technology purely dependent on intellectual labor. Instead, we need methods to understand activity models, measure cost, and constantly assess process effectiveness. The notion that a process definition alone will meet these needs is misleading—we need process management to successfully apply an engineering process model.7
Stability to sunset
Successful growth-stage companies usually have fairly predictable and robust engineering processes.
Figure 3. Mapping product complexity and engineering process rigor. A software product can follow any one of the trajectories shown depending on how well we manage both factors. (Both axes run from low to high; the plot's labels include start, finish, proactive, just right, catch-up, overdone, and chaos.)
Exceptions exist, especially in profitable product lines that manage to afford the inefficiencies of poor process. But the stability phase becomes even more demanding. The product's ability to perform, scale, and interoperate creates a new level of engineering demands to which weak processes usually cannot deliver. Outsourcing's huge cost advantages demand even more robust processes. When such strength does not exist, development organizations can outsource only noncore subsystems. As development increasingly demands the rigor that gated development methods provide, processes commonly tend toward a modified waterfall. This transition requires organizational maturity and a keen market sense. In addition to rigor, the organization must establish an effective in-process measurement and feedback system because without good feedback, regimented processes become unstable. Also, although much of the development effort may be regimented, exceptions always exist for smaller components that push the envelope on performance or technology. Agile methods serve best to prototype these as long as there's also a clear reintegration plan to the main release. Stable products will likely be reengineered at least once. Survival in this phase stresses performance, interoperability, and productivity, which might demand code reengineering and restructuring. Although developers often recognize this need early, the heat of the growth stage affords few resources for this purpose. A stable product stands apart from an upstart because of its dependability and interoperability. For example, ask your database administrators
whether they'd choose MySQL, SQL Server, DB2, or Oracle for a new application. The first two are in the growth stage, while the latter two are in the stability stage. Products in the sunset stage often have a decreasing customer base and can't be made into cash cows without reducing development costs. This requires managing customer satisfaction and warranty costs, and selectively enhancing features to keep the product from dislodging from its client base too soon. Retirement can be fun, but only if it is affordable. Companies managing sunset products must have a well-articulated development process but also improve on this process significantly to increase efficiency and lower costs. Ideally, they'll initiate these changes during the stability stage to avoid burdening themselves with development expenses at the sunset stage. Failure to do so will make their attempts to turn products into cash cows unworkable, and they'll need to discontinue the products. Although the entry and growth stages require vastly different software process models, the transitions from growth to stability to sunset demand less radical process change. Therefore, gated and more regimented processes will largely govern a product life cycle of 20 years. Discussions that focus entirely on iterative processes and rapid time to market often overlook this fact.

The balance
Figure 3 illustrates the trade-offs involved in managing product complexity and process rigor. Over its lifetime, a product's complexity grows and—ideally—process rigor grows correspondingly. In the real world, however, a product's path can be any one of the curved lines shown in the figure. More likely than not we are either playing catch-up or being proactive. Obviously, poor process management results in the arcs swinging out into chaos or overregimentation.
Managing a software development operation and engineering environment to suit the market is a sophisticated science. The software industry has long been plagued with low efficiency, poor productivity, and sick projects, and
many observers attribute these problems to failure to comply with known best practices. But that is only part of the equation—the business and technical factors intertwine with less quantifiable people issues. Sophisticated diagnostic methods such as orthogonal defect classification8 can help organizations pinpoint and prioritize such issues and their impact. Only when the organization can accurately diagnose problems and estimate the return on investment does money spent on process management bear strong business impact. Good methods and tools are necessary but not sufficient. True transformation occurs only when we complement our energies with skills to diagnose, solve, and execute the necessary process change.
References
1. G.A. Moore, Crossing the Chasm, Harper Business, New York, 1999.
2. B.D. Henderson, Anatomy of the Cash Cow, Boston Consulting Group, Boston, 1970.
3. S. McConnell, Rapid Development: Taming Wild Software Schedules, Microsoft Press, Redmond, Wash., 1996.
4. W.S. Humphrey, Managing the Software Process, Addison-Wesley, Boston, 1989.
5. A. Cockburn, Agile Software Development, Addison-Wesley, Boston, 2002.
6. B. Boehm, "A Spiral Model of Software Development and Enhancement," Computer, vol. 21, no. 5, May 1988, pp. 61–72.
7. R. Chillarege et al., "Orthogonal Defect Classification for In-Process Measurement," IEEE Trans. Software Eng., vol. 18, no. 11, Nov. 1992; www.chillarege.com/odc/articles/odcconcept/odc.html.
8. R. Chillarege and K.R. Prasad, "Test and Development Process Retrospective: A Case Study Using ODC Triggers," Proc. IEEE Dependable Systems and Networks, IEEE Press, Piscataway, N.J., 2002.

For more information on this or any other computing topic, please visit our Digital Library at http://computer.org/publications/dlib.

About the Author
Ram Chillarege is a software engineering
consultant whose work focuses on the management–technology interface. He founded and headed the IBM Center for Software Engineering, where he created the Orthogonal Defect Classification method and was awarded the IBM Outstanding Innovation Award for its invention. He serves on the University of Illinois Dept. of Electrical and Computer Engineering board and the IEEE Steering Committee for Software Reliability, Dependable Computing, and Application-Specific Software Engineering. He also chairs the New York Software Industry Association's CTO Council. He received a BE and ME from the Indian Institute of Science, Bangalore; a BSc from the University of Mysore; and a PhD in computer engineering from the University of Illinois, Urbana-Champaign. He is an IEEE fellow. Contact him at Chillarege Inc., 210 Husted Ave., Peekskill, NY 10566; [email protected]; www.chillarege.com.
focus
the business of software engineering
Six Translations between Software-Speak and Management-Speak Dorothy McKinney, Lockheed Martin
As software engineers, how many of us have been frustrated when we raise a serious issue to management, but it doesn't get the attention it needs? How many managers repeatedly ask their software folks how their project is proceeding, only to be devastated to learn late in the project that the software won't be done on schedule or within budget?
Here is help on understanding what software engineers say to their managers, and vice versa. There are several win-win-win solutions that can benefit your organization, your customer, and your technical team.

Speaking a foreign language
One source of these problems is that we have not learned to speak each other's language. Tables 1 and 2 include examples of the translations we can use to improve our communications.
Differences in perspective
Sometimes miscommunications arise from differences in perspective or objectives. Although managers might have developed software earlier in their careers, they are now focused on different elements in the work. They might be concerned with contractual commitments, which are legally binding whether or not the promised work is possible given the committed cost and schedule, let alone technically feasible. They might also be concerned about not losing face or damaging their organization's reputation. It is a reality of the software business that we must usually make contractual commitments well before we understand the full scope and complexity of the software development effort. If we are working on software
that is part of a larger system (including hardware that must be developed), we often don't understand the full set of software requirements until the hardware has been designed, built, and is partially through integration and test. It is tempting to just blame management for making unrealistic commitments. However, it is more constructive to seek ways to be part of the solution for your organization. In an environment open to learning, you can do this by using each project problem or challenge as a source of additional learning. In an environment that does not seem open to learning, your best approach might be to become knowledgeable about what solutions to these kinds of problems can work—usually by painful trial and error. So, if your interaction with your manager indicates to you, as the software engineer, that the best you can do is not good enough, it is time to try to help solve the larger problem. If management can't understand the technical reality you see, you must make the extra effort to understand the business realities they're facing.
Table 1. Software engineer/manager translations

Software engineer's concern: The software requirements are changing. (Or, "The users don't seem quite sure about what they want the software to do," "Every time we talk to the system/hardware engineers, they tell us additional things they expect the software to do," "The customers seem to keep changing their minds about what they want, except they say they are just 'clarifying' and not changing any requirements.")
Translation into management terms: Our estimate of schedule and budget for the software project is no longer valid. At this time, we might not be able to see how many more requirements changes are coming, so we might not even be able to develop a new estimate with confidence. Can we set a time to discuss what our new strategy should be, since the old strategy is no longer workable?

Software engineer's concern: We have run into some unexpected problems integrating (or designing, or implementing, or testing) the software.
Translation into management terms: Our estimate of the time and budget required to complete the software is no longer valid. Furthermore, because we didn't see these problems coming, we aren't sure what other problems might be imminent. We'll give you a revised estimate as soon as we have enough information to do so. In the meantime, please begin changing the customers' expectations so they won't expect so much so soon.

Software engineer's concern: This small change in requirements has a big ripple effect on the software design and implementation.
Translation into management terms: If we go ahead with this change in the software requirements, the additional schedule time and cost will be much more than you expect. We can give you the technical details explaining why if you want. (However, we aren't confident that you will appreciate the explanation. The essence of the problem is that we did not envision this change in requirements when we initially architected and designed the software.) We also know that if you go to the contract software shop down the road and ask them to make this change, they'll tell you they can do it for a lot less money and complete it a lot more quickly. However, once they see how this software is designed, they will tell you this is a poor software design—because it wasn't designed to accommodate this kind of change. So, it will cost more and probably take longer than they initially estimated.
Table 2. Manager/software engineer translations

Management question: How is the software effort going? or Are you meeting all your milestones? or Are your costs within the budget we had planned to spend to this point?
Translation into software engineer's terms: Do you have confidence that the software effort will be completed on schedule and within budget? If for any reason you aren't confident, please tell me so—and tell me what you think we should do to regain that confidence. (Do not misinterpret the angry roar you might get in response to any pessimistic answer as management not wanting to hear the truth—they might find the truth upsetting, but they do want to hear it. But be realistic—if your management tends to "shoot the messenger," see if you can schedule two sessions with them. In the first, you can give them a heads-up about the problem and in the second—preferably several days later—you can discuss how to proceed.)

Management question: Can you handle this change in software requirements? or Let's just add this little bit of functionality to the software. or We have discovered that the hardware doesn't work exactly the way we thought it would. So, we would like to fix the problem this creates in the system with a small software change.
Translation into software engineer's terms: Can you make this change and still complete the software project within the planned schedule and budget, without reducing the product's quality? If not, please educate me about why such a small change costs so much and takes so long—this is such a small change in requirements that it just doesn't seem like a big deal to me.

Management question: Can't we just add people to the team and get the job done on schedule, even though we are behind schedule?
Translation into software engineer's terms: It is politically unacceptable to miss the delivery schedule, so please help me come up with a better idea. If you need more resources to get the job done adequately, this might be your time to get them. If it is too late to meet the original schedule, how can you add resources and break up the software delivery? Can you make an initial delivery plus one or more maintenance releases so the customer receives the product on the promised date and appropriate fixes soon thereafter?
About the Author
Dorothy McKinney is chief software architect at Lockheed Martin Space Systems Company Missiles and Space Operation. Her research interests involve application of advanced software techniques and processes to real-world implementation problems. She has BAs in systems sciences and English from Prescott University, an MS in computer engineering from Stanford University, and an MBA from Pepperdine University. She is a member of the IEEE Software Industrial Advisory Board, INCOSE, the IEEE, and AIAA. Contact her at 1470 McDaniel Avenue, San Jose, CA 95126;
[email protected].
Seek out a win-win-win solution (for your organization, the customer, and your technical team). In many circumstances, you might think it's impossible to find such a solution. Sometimes it is, and the project is cancelled. However, solutions are often possible.

Potential solutions
One way to find potential solutions is to look with hindsight at the outcomes of previous project debacles. Many software projects have run into major problems, but in the end, they have delivered a useful capability to the customer. Here are some strategies that other software practitioners have successfully used to rescue a project and prevent its cancellation:

■ Offer customers additional capabilities so that although development might take longer and cost more than initially planned, they end up with more product features than they had initially envisioned. Your organization gives more value, the customer gives more money and time, and both sides are satisfied.
■ Find a combination of additional capabilities and features that customers are willing to do without, so that they think the total set of product capabilities they end up with are worth the revised cost and schedule. When a project has lots of growth in derived requirements, sometimes you can cut many of them (management might call this "removing the gold plating") with little or no obvious loss of capability to the customer.
■ Make your customer look successful in the eyes of their bosses or customers. Sometimes this can be as simple as packaging the current version of the software, delivering it and declaring success, and then deferring remaining work to the maintenance phase. Don't underestimate the value that good public relations can contribute to a project rescue effort.
Your "real" limits
If management (or your organization's customer) really pushes you, they might be
searching for the “real” limits of what you can do. This technique is illustrated by a story about a management technique attributed to Henry Kissinger. An employee would bring him a report. The next day, Kissinger would summon the employee into his office and shout, “Is this the best you can do? This really needs to be improved!” The employee would nod, stammer an apology for the sections he knew he could have done better, and go off to revise the report. This continued until one day the employee told Kissinger, “This is the best I can do. I can’t improve it any more.” At that point, Kissinger said, “Good. Now I am willing to read your report.” So if your management pushes you hard, reexamine what you think you can do and how soon you think you can do it. In fact, if they push you really hard, you should do this more than once. But when you are clear about what is possible, hold your ground. It does not further your career or your organization to sign up for impossible goals that will only guarantee failure in the future. You don’t want to encourage your manager or your customer to push you this way. It is more effective—and more fun—to deliver products that from the beginning are sufficient to meet their needs. How do you do this? You must work from both ends, understanding who the stakeholders are and how you can meet their needs. This always requires more insight than just reading documents can possibly provide, so you must find ways to interact with the stakeholders—in this way, you can use multiple avenues of communication to gain the insight you need. You must also understand your team’s capabilities and limitations and how to negotiate commitments so you can succeed with the job’s initial scope and with any changes in scope you are asked to handle.
As software professionals, how successful can you be in never signing up for the impossible? When you are careful to understand what project success means for every stakeholder and to identify and manage risks as you proceed, you can deliver your product to your customer and avoid project cancellation. In the final analysis, delivering a software product that the customer uses to do their work is more important than the software matching some stakeholders' early fantasies.
focus
the business of software engineering
Don Winter: One CEO’s Perspective Scott L. Andresen
Don Winter, the president and CEO of TRW Systems, is the consummate leader. He started at TRW on the Group Research staff in 1972 after receiving his MS and PhD in physics from the University of Michigan. Winter is also a graduate of the USC Management Policy Institute, the UCLA Executive Program, and the Harvard University Program for Senior Executives in National and International Security. He left TRW for two years in 1980 to work for DARPA. Once back, he worked his way up the proverbial ladder from program manager to vice president and general manager of a division, to vice president and deputy general manager of a group, to his current position.
Last February, he gave a presentation entitled "Measurement: The CEO Viewpoint" at the Software Quality Management conference in Anaheim, California. He talked about his perspective on what numbers are most critical to the bottom line and the use of Six Sigma at TRW. IEEE Software magazine sat down with him after his presentation to ask a few questions.

According to many senior executives, software engineers don't talk "the plans talk" or walk "the management walk." Do you agree?
One of the interesting aspects of just about any form of higher education is that students are being taught how to learn as opposed to specific facts. This is because the technical currency of specific information disappears rapidly. What we are really looking at is this: How do we help people, as they develop professionally, gain the additional experience they need to act with a broader perspective? A software development process can actually drive, demand, and facilitate some aspects of professional development. In our business as an integrator of large, complex systems, we often use software as glue, if you will, to tie systems together. In that regard, there is a critical need for software engineers to understand not
only what they are doing in terms of the module they are developing but also how it interfaces with the other system elements, be they hardware or software. So, there is a huge demand for engineers to develop that broader, outside perspective. The other issue is whether people understand the management perspective. We all choose this business because it is what we wanted to do. Given a choice between writing a novel and building equipment, some of us are a little weird; we enjoy building things. But business looks at people’s work and contributions from a somewhat different perspective. At TRW, we are very proud that the corporation has always taken on projects and problems of national and international importance—the hard problems that must be worked on. We like to do this in a way that makes a profit and grows the business, as reflected in terms of the corporation’s value as evidenced in its stock price. This business perspective is not taught in any of the schools I have seen, at least not in the curricula today’s engineering students generally take. Helping people understand these issues broadens and strengthens their job performance and value to the corporation. Do you think that it would help if schools had a particular management engineering curriculum? A certain level of additional education would help. We really do need to work on communication skills—that should be a core element. A lot of engineers believe, “I am an engineer and I don’t need to write code, so what is the big deal?” Don’t take that wrong. I don’t need to write code either, but in reality everyone needs to be able to write English. And it is not just written communication—it’s also verbal skills, the ability to engage in dialog. For instance, what if you are going to talk to a customer about a user interface, do a rapid prototype of the system, and have the customer come in and evaluate it? It is not just a matter of getting the code to run. The interface is not just going to happen, in ones and zeros; you are going to have to extract its qualitative requirements from the customer. The ability to communicate and interact with the customer is very important. With regard to some of the more financial aspects of understanding how business operates, I am not sure it makes sense to teach
this as part of our core curriculum. In many cases, people must be ready to learn this. We can send people who have seen what it means to be involved in a project back to school for three-day programs or after-hours programs. There, they can get a sense of what business is all about and put things into context— what net value, inflation, the cost of money, and things of that nature really mean. They’ll be able to come from class, go back into the office, and say, “So that’s what the boss was talking about the other day.” If an engineer wants to stay on the company’s technical ladder, should he or she expand his or her nontechnical skills? If so, in what particular areas? Employees with purely technical backgrounds must recognize they need to not only become better technically but also understand how to leverage their technical capability. Leveraging means working and learning different things than, perhaps, your educational background provided, and it means being able to communicate with your peers. If you have only one way of doing something but you can’t adequately explain it to somebody else, that’s not good. If you can communicate with your customer, that’s great. But, if you can understand the customer and his problem and come up with a solution and recognize there is a coupling between the technology you have been developing and the customer problem, that is tremendous. That requires an ability to communicate and interact. Another element we haven’t really talked about is personnel management and the ability to motivate employees. Even those who are individual contributors as opposed to managers (and most of our techies fall into this category) can be leaders. They can go in and grab a half dozen or a dozen of their best coworkers and say, “Hey, come with me, this is what we’ve got going,” and get people excited about and aligned with that objective. Admittedly, a lot of this is pure instinct, but some of it also is a result of helpful education in terms of knowing what motivates people. Becoming frustrated with people simply because they can’t understand a concept in the first 30 seconds of explanation is not going to help you develop teamwork with those people. The real push in my mind is, how do you
leverage your technical people even if they want to remain individual contributors? And, by the way, there is another whole range of activities where people can become technical managers. Technical project managers need people skills, and project managers need these skills and more in terms of financial skills. Even the individual technical contributors must be able to communicate, motivate, and work in teams. They must also be able to take what they are creating and inventing and get it out into the customer community or marketplace where it can do some good. This results in the greatest sense of achievement and benefits the corporation the most. What specific impact has software engineering had on TRW? Software engineering has given us the ability to take on a wide variety of projects in which we not only produce critical elements but also do systems engineering and integration. It also helps us build systems that are optimized to deal with the problems
that our customers face. Software engineering has been a mechanism for providing value to our clients. I very much believe there has been a change in valuation in the world economy. In his book The Third Wave, Alvin Toffler describes three major ages of world society—agricultural, industrial, and information. I have found you can take that whole concept and apply it to systems integration. I did that this morning, in my talk, in terms of platform integration. We looked at the platforms people buy, whether it is a car, a tank, or an airplane. At first, value was to a great extent in the hardware. We went from there to the electronic age, for example, with the avionics in airplanes. Now, value isn’t so much in the platform or even the electronics as it is in the software embedded in systems that is used to really integrate the various elements and provide the needed functions. By focusing on software engineering as an enabler for integrating very complex systems, we really are giving a lot of value to our customers.
point
Security Band-Aids: More Cost-Effective than "Secure" Coding Greg Hoglund, Cenzic
The war between hackers and software is being fought on the front lines—in the users' trenches. But hunting down the "engineers" who write bad software won't win this war, at least not in the short run. With the best of intentions, development shops are trying to address bad software by learning secure coding practices. Just tracing the problem is difficult enough. In many cases, the system's original developers are long gone. And even if they're still around, the application has evolved continuously since they first wrote it. Moreover, companies often inherit applications (through mergers and acquisitions) that have their own unique problems. Even simple software gives rise to incredibly complex behavior,1 and with complexity comes failure. (Over 70 percent of all software projects fail.2) Although the right place to solve software problems is in development, most companies do not have the luxury of time or money to rebuild old code. So, most developers try to fix problems in the easiest way possible, which usually translates to cheapest, not best. Applying a software patch costs far less than, say, eliminating all buffer overflows from your code. Moreover, patching systems against the latest virus is a full-time job, and most corporations have heavier near-term problems facing them—their numbers for the next few quarters. Timesaving point solutions such as application firewalls have an instant return on investment. And consider the human element. Typically, developers throw their code "over the wall" to an understaffed security department, which is seen as a roadblock to progress. Required to approve hundreds of changes per week, these organizations often let stuff slide through. The developer's world is far removed from the true universe of deployed software—a hostile, overcrowded network full of threats and unknowns. Furthermore, most software engineers are actually construction workers—just like the so-called civil engineers who dig up freeways. Many people who write code have upgraded their paychecks from other disciplines by obtaining
Continued on page 58 0740-7459/02/$17.00 © 2002 IEEE
point
continued from page 56
certifications; although their education is good, it doesn’t guarantee they will stop writing bad code. And, with the development tools available today, they fight an uphill battle to write secure code. Sadly, quality assurance testing today is inadequate to overcome most engineers’ skill levels and limited tools. Thus, the potential for security failure in hostile environments remains high. Band-aid security—consisting of using shunts and limiters on data input—could be the answer. Band-aids do not fix the disease—they protect the wound. They are designed to “detect the bad,” but they do nothing to stop the threat of an unknown attack. Think of a facial recognition burglar alarm with a camera that operates by the side of a speeding freeway. It can scan only one out of 10 cars, and some of the criminals are hiding in the trunk. This is the sort of risk you must live with in a band-aid system.
Because we’ll uncover new threats “in the wild,” band-aid security will prove itself useful time and again. Moreover, deploying such systems is incredibly easy; people who don’t really have a clue about software can manage them. Such systems do, however, require maintenance. They are knowledge-driven devices that require a steady diet of information about new exploits. Will deploying band-aid systems reduce your vulnerability? I could argue that it won’t, but it’s difficult to do so when a deadly virus rips through your network, and a bandaid is the only tool you have to stop it. Any case study of recent viruses Nimda or Code Red will prove the value of these tools. Using band-aids does nothing to prevent the attack from a solitary hacker, but they do protect you from the masses of idiots who could download that exploit and target your systems. A band-aid
continued from page 57
Application security is about protecting software and the systems that the software runs after development is complete. Issues critical to this subfield include sandboxing code, protecting against malicious code, locking down executables, monitoring programs (especially their input) as they run, enforcing software use policy with technology, and dealing with extensible systems. Application security follows naturally from a network-centric approach to security (embracing standard approaches such as penetrate-and-patch2 and input filtering), and providing value in a reactive way. Put succinctly, 58
IEEE SOFTWARE
November/December 2002
could also protect you from a worm’s brainless automaton. Application security tools are the most effective way your organization can protect itself today. Building more secure software is a goal, but it won’t stop the virus that gets released tomorrow. It comes down to this: secure coding practices are not going to produce 100 percent bug-free software. Thus, application security tools should always play a part in your risk mitigation plan.
References 1. S. Wolfram, A New Kind of Science, Wolfram Media, Champaign, Ill., 2002. 2. The Standish Group, CHAOS: A Recipe for Success, The Standish Group, West Yarmouth, Mass., 1999; www.pm2go.com/sample_ research/chaos1998.pdf.
Greg Hoglund is chief technology officer and cofounder of
Cenzic, a security quality assurance company. Contact him at [email protected].
counterpoint
application security is primarily based on finding and fixing known security problems only after they are exploited in fielded systems. However, this approach addresses security symptoms in a reactive way, ignoring the problem’s root cause. In general, application security takes the same approach to security as firewalls do. In fact, application security vendors often refer to their products as “application firewalls.” Although there is value in stopping buffer overflow attacks by observing HTTP traffic as it arrives over port 80, a superior approach is to fix the broken code and avoid the buffer overflow completely.
Software security—the process of designing, building, and testing software for security—gets to the heart of the matter by identifying and expunging problems in the software itself. In this way, software security attempts to build software that can withstand attack. Software security follows naturally from software engineering, programming languages, and security engineering. Both subfields are relevant to the idea of preventing software’s exploitation. Software security defends against exploit by building the software to be secure in the first place, mostly by getting the design right
counterpoint Building Secure Software: Better than Protecting Bad Software Gary McGraw, Cigital
Software has become essential to business and to many other aspects of our daily lives. Yet creating software that works remains hard, especially where security and reliability are concerned. Trying to protect software from attack by filtering its input and constraining its behavior in a post facto way (application security) is nowhere near as effective as designing software to withstand attack in the first place (software security). Simply put, we can’t bolt security to the side of a software product.

Software is the biggest problem in computer security today.1 Most organizations invest in security by buying and maintaining a firewall, but they go on to let anybody access multiple Internet-enabled applications through that firewall. These applications are often remotely exploitable, rendering the firewall impotent (not to mention the fact that the firewall is often a piece of fallible software itself). Real attackers exploit software. By any measure, security holes in software are common, and the problem is growing. The trinity of trouble exacerbates the problem of insecure software:

■ Modern software operates in a hostile networked environment.
■ Extensible systems such as Java virtual machines and .Net common runtime environments (not to mention dynamically loaded libraries) are becoming common and introduce mobile code risks.
■ System complexity is rising.

The ultimate answer to the computer security problem clearly lies in making software behave. The question at hand is “What is the most effective way to protect software?” We can divide the software/application security space into two distinct subfields. Software security is about building secure software. Issues critical to this subfield include software risk management, programming languages and platforms, software audits, designs for security, security flaws, and security tests. Software security is mostly concerned with designing software to be secure, making sure that software is secure, and educating software developers, architects, and users.

Application security is about protecting software and the systems that the software runs after development is complete. Issues critical to this subfield include sandboxing code, protecting against malicious code, locking down executables, monitoring programs (especially their input) as they run, enforcing software use policy with technology, and dealing with extensible systems. Application security follows naturally from a network-centric approach to security (embracing standard approaches such as penetrate-and-patch2 and input filtering), providing value in a reactive way. Put succinctly, application security is primarily based on finding and fixing known security problems only after they are exploited in fielded systems. However, this approach addresses security symptoms in a reactive way, ignoring the problem’s root cause. In general, application security takes the same approach to security as firewalls do. In fact, application security vendors often refer to their products as “application firewalls.” Although there is value in stopping buffer overflow attacks by observing HTTP traffic as it arrives over port 80, a superior approach is to fix the broken code and avoid the buffer overflow completely.

Software security—the process of designing, building, and testing software for security—gets to the heart of the matter by identifying and expunging problems in the software itself. In this way, software security attempts to build software that can withstand attack. Software security follows naturally from software engineering, programming languages, and security engineering. Both subfields are relevant to the idea of preventing software’s exploitation. Software security defends against exploit by building the software to be secure in the first place, mostly by getting the design right (hard) and avoiding common mistakes (easy). The process of securing applications defends against software exploit by enforcing reasonable policy about what kinds of things can run, how they can change, and what software does as it runs. In the fight for better software, treating the disease (poorly designed and implemented software) is better than taking an aspirin to relieve the symptom. There is no substitute for working software security as deeply into the software development process as possible and taking advantage of the engineering lessons software practitioners have learned over the years. Good software security practices can help ensure that software behaves properly. Safety-critical and high-assurance system designers have always taken great pains to analyze and track software behavior. Security-critical system designers must follow suit. We can avoid the band-aid-like penetrate-and-patch approach to security only by considering security as a crucial system property. This requires integrating software security into the software engineering process.3
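To make the disagreement concrete, here is a minimal C sketch of our own, not taken from either column, showing the kind of flaw both authors return to: a fixed-size buffer filled without a length check. The software-security remedy changes the code itself; the application-security remedy is a band-aid shunt that screens input before it reaches the unchanged code. All function names, buffer sizes, and inputs below are hypothetical.

#include <stdio.h>
#include <string.h>

#define NAME_MAX 64  /* hypothetical limit, chosen only for the illustration */

/* Vulnerable: no length check, so an over-long input overruns the buffer. */
void greet_unsafe(const char *input) {
    char name[NAME_MAX];
    strcpy(name, input);              /* the classic buffer overflow */
    printf("Hello, %s\n", name);
}

/* Software-security fix: the code itself enforces the bound. */
void greet_fixed(const char *input) {
    char name[NAME_MAX];
    strncpy(name, input, NAME_MAX - 1);
    name[NAME_MAX - 1] = '\0';        /* strncpy does not always terminate */
    printf("Hello, %s\n", name);
}

/* Application-security band-aid: a shunt placed in front of unchanged code.
   It rejects known-bad input (here, anything too long); the flaw remains. */
int input_shunt_ok(const char *input) {
    return strlen(input) < NAME_MAX;
}

int main(void) {
    const char *request = "world";
    if (input_shunt_ok(request))
        greet_unsafe(request);        /* protected only while the shunt holds */
    greet_fixed(request);             /* safe regardless of input length */
    return 0;
}

The shunt protects greet_unsafe only against the inputs it knows to reject; greet_fixed holds no matter what arrives.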
Greg Responds
I fundamentally agree with most of Gary’s arguments—most importantly, that software is the root of computer security problems. I also agree with the “trinity of trouble” and the distinction between application and software security: not only is system complexity rising, the system is growing more interconnected. This means that software bugs can result in cascading failures across the system, which spells trouble for most companies because the system includes the environment and all the software that it communicates with. We’ll never be able to simulate real-world complexity in the lab or on the developer’s desktop. By this argument, software developers will never have the opportunity to explore what failure really means. Everything will be a close approximation. Thus, it would seem that Gary and I are debating the effectiveness of approach. I certainly agree that the best way to fix a problem is to cure the disease—that is, to fix the software in development. This is something that businesses can and should afford. However, I don’t agree that fixing problems in development is the only solution. I’ll go out on a limb and suggest that software development alone will never fully solve the problem. The playing field is hostile, and there will always be firefights. Based on costs, many problems will be solved by the IT department before going to development. Advanced development tools will soon eliminate trivial programming errors (such as buffer overflows), but complex problems take time to address. The band-aid extends the time cushion. That being said, I applaud anyone who actually takes advantage of that time to implement secure coding practices.
Gary Responds
Microsoft’s highly touted Trustworthy Computing Initiative, spurred by the Gates memo of January 2002, is a direct business-driven response to a changing software market. Software users now demand high-quality software that works. Of course, software security reaches far beyond shrinkwrapped software of the sort that Microsoft produces. Building software to be secure and reliable from the start is cost-effective. TRW reports that the cost of fixing software defects in late life-cycle stages (testing and maintenance) is over US$10,000 per fix, whereas the cost of fixing a defect early in the life cycle (requirements, design, and coding) is an order of magnitude less—under US$1,500. The pervasive “penetrate and patch” approach to security is obviously unacceptable from a business standpoint. We must avoid the problem of desperately trying to come up with a fix to a problem that attackers are actively exploiting. Those software users who cannot directly impact software quality by building things properly can and should use application security technologies that attempt to protect fielded software. COTS software has problems too, and application security technologies can protect bad software from some kinds of harm. But postponing the hard work of building better software will not solve the problem. With software complexity growing—the source code base for Windows XP is 40 million lines—we have our work cut out for us. Building secure software requires educating developers and architects, retraining QA, and making better business decisions about security. CERT reports that the number of reported software vulnerabilities has risen from fewer than 500 a year for all years prior to 1999 to more than 1,000 in 2000 and almost 2,500 in 2001. There are not enough band-aids to stop the bleeding with a laceration of this size.
References
1. G. McGraw, “Software Assurance for Security,” Computer, vol. 32, no. 4, Apr. 1999, pp. 103–105.
2. G. McGraw, “Testing for Security During Development: Why We Should Scrap Penetrate-and-Patch,” IEEE Aerospace and Electronic Systems, vol. 13, no. 4, Apr. 1998, pp. 13–15.
3. J. Viega and G. McGraw, Building Secure Software, Addison-Wesley, Boston, 2001; www.buildingsecuresoftware.com.
Gary McGraw is chief technology officer at Cigital, a software quality management consultancy. Contact him at [email protected].
manager Editor: Donald J. Reifer ■ Reifer Consultants ■ [email protected]
Making Accurate Estimates Dick Fairley
Whenever I’m asked to recommend estimation methods and techniques, I always point to 12 issues you must consider in making accurate estimates, independent of the method, tool, or technique used. Satisfy these conditions and, as this column shows, your chances of making accurate estimates using your method or tool of choice will improve significantly. Violate one or more of them, however, and you’ll run a significant risk of making inaccurate estimates regardless of the method or technique you’ve used.

Establish local calibrations
Whether simple rule of thumb or sophisticated algorithm, an estimation method must be calibrated to reflect local circumstances. Published methods are invariably calibrated to historical data of some sort. If you don’t determine whether that calibration data reflects your situation, the estimate you make might be inaccurate. Local calibration can take one of several forms:

■ Deriving estimation parameters and adjustment factors from local historical data
■ Developing local values for productivity rates and percentages of effort and schedule for various types of work activities
■ Applying a method to your completed projects and adjusting the method’s parameters to obtain agreement with local results

Different types of projects—embedded systems, scientific computation, or Web-based, for example—will likely have different calibration parameters.

Provide accurate inputs
Estimates for factors of interest—such as overall effort and schedule, project milestone dates, resource allocations, and defect density—typically depend on factors such as

■ The future product’s size and complexity
■ Schedule and resource constraints
■ Assumptions about the development environment, the development team’s skills and experience, and other project attributes

An estimate cannot be more accurate than the accuracy of the data used to develop the estimate (excluding offsetting errors and dumb luck). Lack of information might make it difficult to accurately specify project factors and product attributes in a software project’s early stage. Historical data used as the basis for analogies or calibrations might be inconsistent or inappropriate. Expert judgment might be faulty or biased. False assumptions will invalidate an estimate. These factors can affect the input data’s accuracy and the estimate’s resulting accuracy.
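As a rough sketch of the local-calibration idea above (ours, not the column’s, with invented sizes, effort figures, and units), the following C fragment derives a productivity rate from a few completed projects and applies it to a new size estimate.

#include <stdio.h>

/* Hypothetical historical data: size (in some local size unit) and
   effort (staff-hours) for completed projects. */
struct project { double size; double effort; };

int main(void) {
    struct project history[] = {
        { 120.0,  950.0 },
        { 300.0, 2600.0 },
        {  80.0,  700.0 },
    };
    int n = sizeof history / sizeof history[0];

    /* Local calibration: the average productivity observed on our own projects. */
    double total_size = 0.0, total_effort = 0.0;
    for (int i = 0; i < n; i++) {
        total_size   += history[i].size;
        total_effort += history[i].effort;
    }
    double hours_per_unit = total_effort / total_size;

    /* Apply the locally calibrated rate to a new project of estimated size 150. */
    double new_size = 150.0;
    printf("Calibrated rate: %.1f staff-hours per size unit\n", hours_per_unit);
    printf("Estimated effort: %.0f staff-hours\n", new_size * hours_per_unit);
    return 0;
}

The same pattern works for schedule percentages or defect densities; what matters is that the parameters come from your own history rather than someone else’s.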
Distinguish between accuracy and precision
In an accurate estimate, actual values agree with estimated values within a specified band of variance. You might consider an estimation procedure or tool accurate if, for example, actual values consistently fall within 10 percent of estimated values. A precise estimate provides the specified resolution in the computed answer (for example, computed to three decimal places). Never report an estimate with greater precision than the precision of the input parameters. For example, do not report an estimate to three decimal places of precision if the uncertainty in the input values is, say, ±20 percent. A common mistake is to infer that a highly precise result also provides a high degree of accuracy. You can compute an inaccurate estimate to any desired degree of precision.

Involve key team members
To the extent possible, involve those who will do the work in preparing the estimates. You’ll see several benefits:

■ The expertise of various individuals can be applied.
■ Attempts to understand the “pieces and parts” of the job to be done from different points of view will result in discussion of, and clarifications to, the project’s requirements, constraints, and assumptions.
■ The people involved will gain a sense of ownership and commitment.

You can use group techniques such as the Delphi method to control the impact of dominant personalities. To guard against overly optimistic estimates, you might ask each person to submit three estimates (pessimistic, most likely, optimistic). These values can also provide input data for probabilistic estimation techniques.
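One common probabilistic technique that accepts such three-point inputs is a PERT-style beta approximation. The C sketch below is our own illustration with made-up numbers; the column does not prescribe this particular formula.

#include <stdio.h>

/* One contributor's three-point estimate, in staff-days (hypothetical). */
struct three_point { double optimistic, most_likely, pessimistic; };

/* PERT-style beta approximation of the expected value. */
static double expected(struct three_point t) {
    return (t.optimistic + 4.0 * t.most_likely + t.pessimistic) / 6.0;
}

/* Rough standard deviation under the same approximation. */
static double std_dev(struct three_point t) {
    return (t.pessimistic - t.optimistic) / 6.0;
}

int main(void) {
    struct three_point design = { 20.0, 30.0, 55.0 };
    printf("Expected: %.1f staff-days, std dev: %.1f\n",
           expected(design), std_dev(design));
    return 0;
}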
Include all applicable activities in bottom-up estimates
A bottom-up estimate involves summing the estimates for the work activities of teams and individual team members. Bottom-up estimates commonly overlook work activities such as ongoing customer interactions, requirements management, system integration, installation, user training, and project management. To obtain an accurate estimate, you must include these “glue” activities. As an (oversimplified) illustration, consider two work activities that are each estimated to generate three units of output, each having a squared-power relationship between output and effort. A bottom-up estimate of individual activities might indicate that the required effort is proportional to 3² + 3² (18 work units), when in reality adding the glue activities to coordinate the work and integrate the work products might require an additional four work units; the difference is an approximate 20 percent estimation error.

Apply constraints top-down, measure status bottom-up
If a project must be completed by, say, five full-time equivalent personnel in six months’ time, allocations of personnel and time to the various project activities must not exceed 30 staff-months of effort. As the work activities are decomposed, activities subsumed by an element of the work breakdown structure (WBS) must preserve the allocations imposed by the constraint on that element. If software design, for instance, is allocated x percent of the available effort and y percent of the available time, all subactivities of design must total not more than x percent of total effort and y percent of total time. As a project evolves, understanding grows. Restructuring the WBS will (most likely) occur, and more detailed levels of decomposition will be possible; allocations of time and resources might be adjusted accordingly. Initial allocation and subsequent reallocations at lower WBS levels must not exceed higher-level constraints if the project plan is to satisfy the top-level constraints. During project execution, roll-up of actual values, measured in a bottom-up manner, must not exceed the allocations at each level if the project is to satisfy its overall commitments.

Account for resource availability
People are seldom, if ever, available to work on a project 100 percent of the time. Meetings, training classes, study groups, task forces, vacation time, and sick leave can easily occupy 25 percent of a person’s work hours. Many organizations allocate personnel at 75 percent or 80 percent of available time. If, for example, a person is assigned to maintain the old system and work on the new system in equal allocations of effort, he or she might be able to spend (on average) only three hours per day on each assignment in a typical eight-hour workday. In addition, constraints on other factors, such as access to test facilities or to users, can disrupt an otherwise optimal schedule. Failure to realistically account for resource availability is a major cause of excessive overtime and schedule overruns on software projects.

Understand what is estimated
Estimates are projections from the past into the future, with adjustments to account for the differences between past and future. Estimates based on historical data (the past) carry the historical conditions into the estimates for future projects. If historical data is for coding and testing but excludes requirements, design, and project management, the estimate will similarly be for coding and testing only. If the historical data is for projects of 80-hour work weeks, the future project will be estimated at 80 hours per work week. Failure to understand the scope of activities and the level of effort included in an estimate can result in unachievable commitments.

Reestimate periodically and as events dictate
Initial estimates typically rest on an imperfect understanding of the future. As a project evolves, new requirements will be added, existing requirements will be modified (“clarified”), decomposition of high-level requirements will reveal hidden complexities, assumptions will prove to be untrue, and the quantity and quality of resources might change. You must update estimates as conditions change; otherwise, the estimate is for the project as initially conceived, not as it is being conducted.

Maintain a balance among requirements, resources, and time
Periodic and event-driven reestimates might indicate that you cannot provide the required features and quality attributes within the parameters of available resources and time. Courses of action include descoping the requirements, adding more resources, using better resources, taking more time, or some combination of these approaches. Less attractive, but ever-popular, approaches include excessive overtime and reducing “nonessential” activities such as user documentation, reviews, and testing. You can satisfy the relationship Success = R (requirements, resources, time) in various ways, but only one independent variable should be tightly constrained; you must adjust the other two as a project evolves.

Distinguish between estimates and commitments
Estimates are often overruled by commitments imposed on us by outside forces. In such cases, you might count projects as unsuccessful when project completion conforms to the estimate but not to the commitment. A recent project of my acquaintance was estimated to require nine months, based on the requirements, available resources, and a history of similar projects. Outside forces dictated a six-month schedule. The project was counted as a 50 percent overrun when it was completed in nine months.

Use standardized templates to report estimates
Lastly, an estimate realistically includes, or should include, more than a pair of numbers (for example, 10 people, 12 months). Factors your estimate should report include the estimator’s name, others consulted, elapsed time and total effort devoted to making the estimate, estimation methods used, the basis of estimation for each method used, assumptions made, adjustment factors applied, the scope of project activities estimated, ranges of estimates with associated probabilities, areas of uncertainty, the project’s risk factors, the estimator’s level of confidence in the estimate, and resources needed to make an improved estimate. Those who make estimates should provide this information to those who make commitments; those who make commitments should require this information from those who make estimates. Review, discussion, and mutual agreement should ensue.
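One hypothetical way to standardize such a report is to make it a record type whose fields mirror the list above, so nothing can be omitted silently. The C sketch below is an illustration of ours, not a prescribed template; the field names and sample values are invented.

#include <stdio.h>

/* A hypothetical estimate-report record covering the factors listed above. */
struct estimate_report {
    const char *estimator;                 /* who made the estimate */
    const char *others_consulted;
    double      hours_spent_estimating;
    const char *methods_used;              /* and the basis for each method */
    const char *assumptions;
    const char *adjustment_factors;
    const char *scope_of_activities;       /* what work the estimate covers */
    double      effort_low, effort_likely, effort_high;   /* staff-months */
    double      probability_within_range;                 /* 0.0 .. 1.0 */
    const char *areas_of_uncertainty;
    const char *risk_factors;
    const char *confidence;                /* estimator's stated confidence */
    const char *resources_to_improve;      /* what an improved estimate needs */
};

int main(void) {
    struct estimate_report r = {
        .estimator = "A. Estimator", .others_consulted = "team leads",
        .hours_spent_estimating = 16, .methods_used = "analogy; local model",
        .assumptions = "stable requirements", .adjustment_factors = "new platform",
        .scope_of_activities = "design through system test",
        .effort_low = 24, .effort_likely = 30, .effort_high = 42,
        .probability_within_range = 0.8,
        .areas_of_uncertainty = "interface volatility",
        .risk_factors = "staff availability", .confidence = "medium",
        .resources_to_improve = "two days of sizing workshops",
    };
    printf("%s estimates %.0f-%.0f staff-months (most likely %.0f)\n",
           r.estimator, r.effort_low, r.effort_high, r.effort_likely);
    return 0;
}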
Estimation methods, techniques, and algorithms are tools; tools can be used correctly or incorrectly. No one asks about your screwdriver’s accuracy but only whether it is the appropriate tool for the job at hand and whether you know how to use it correctly. Using an estimation tool correctly requires training, practice, historical data, an estimation procedure, a standardized reporting format, and a commitment process based on negotiation. I do not mean to suggest that satisfying the conditions listed here is easy; however, omitting any of them will most likely produce estimates that do not match the realities of the projects you undertake regardless of the method or tool you use.
Dick Fairley is a professor of computer science and associate dean for education in the OGI School of Science and Engineering, Oregon Health and Science University. He also participates in the Oregon Master of Software Engineering degree program, a collaborative effort of four Oregon universities. Contact him at [email protected].
quality time Editor: Jeffrey Voas ■ Cigital ■ [email protected]
What Software Engineering Can Learn from Soccer Shari Lawrence Pfleeger
We’ve talked about software quality for a long time, developing numerous software quality assurance approaches in the hope of making our software increasingly better. Charles Mann, contributing editor of MIT’s Technology Review, points out that other technologies—televisions, cars, airplanes, bridges—have improved over time as their engineering matured; he asks why software has not. In the February 2001 issue of Communications of the ACM, Edsger Dijkstra said that software’s biggest challenge is “how not to make a mess of it.” So, where have we gone wrong? To answer this question, we can look at how other disciplines learn and grow. Software development is as much an art as a science, and we learn lessons from both perspectives. Many of us think of ourselves as engineers: we train in engineering departments and rely on engineering tools, techniques, and quantitative methods to guide us. But our work’s artistic side—which those who promote agile methods often emphasize—plays an important role, too. As good software developers, we are grounded in artistic engineering activities such as modeling and design. Our good people skills enable us to work with customers and on teams. And we need good instincts to select the best approaches and products to use.

Appreciating instinct
Instinctual expertise is hard to develop and
difficult to trust, because it sometimes seems far afield from the engineering approaches we are taught. But the decision-making literature suggests that we should appreciate the essential role of instinct in the three kinds of expertise Jens Rasmussen describes (Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering, North-Holland, 1986): skill-based, rule-based, and knowledge-based.

Skill-based
This expertise is heavy on instinct, using highly integrated, automatic sensory and motor responses; it takes little conscious effort to apply these skills. We develop similar expertise in sports or music. We learn particular movements—hitting a tennis ball a certain way or playing scales repeatedly—until they become imprinted and we can do them instinctively. Effectiveness at this level depends on our innate skill, our experience, and how much we practice. It also depends on a predictable environment; if we raise the net or change the configuration of the strings or keys, our skills are less certain and possibly less useful. In software development, we might become skill-based testers by looking for the same kinds of faults in the same kinds of code until spotting those faults becomes second nature. Similarly, working many times on similar projects in similar circumstances could help us generate excellent cost estimates.

Rule-based
By contrast, when we use rule-based expertise, we are consciously aware of the
steps we take to attain a goal. Not always explicitly formulated, the steps are somehow codified into loose or strict rules that we follow in a given situation. For example, good software designers have rules about checking interfaces, including exception handling, and addressing performance issues. We might follow these rules without even realizing it before we turn a design over to inspectors or programmers. As with skill-based expertise, these rules rely on familiarity with the situation and its context. In a novel situation or an unfamiliar environment, a rule-based expert might need to perform additional analysis.

Knowledge-based
A knowledge-based expert recognizes goals, makes them explicit, and then develops procedures to reach them. The recognizing and procedure-building could be automatic, in that the expert seems to know instinctively what to do and when to do it. Once he or she decides on a grand plan, the approach becomes rule-based, where the procedures include steps to tackle each aspect of the overall problem. We see knowledge-based expertise when a good designer proposes a software architecture. The designer instinctively knows that one framework, say pipe-and-filter, will be more effective than another and why. Once the overall architecture is laid out, the designer follows ingrained rules about the best ways to document and improve it.

Software and soccer
Instinct is vital in developing and using our expertise. To learn to understand and develop those instincts, let us look at soccer, a game that blends art and science. Clearly some soccer players are more talented than others and some are better in certain roles than others. A coach aims to improve players’ talents and blend them to play a good game. The players are much like software developers, and the coach resembles a team leader or project manager. But looking deeper, we learn more. Professional soccer players do not
just show up for games, play them, and leave the stadium. Instead, they participate in three important ways: they perfect basic skills, analyze and abstract from past performances, and build a set of patterns that eventually becomes instinctual. These three aspects of performance analysis can help software engineers improve software’s quality. To see why, first consider our basic skills. Good software development often depends on good modeling and design. But if we train software engineers at all, we usually train them to write programs first and to think about modeling and design later—after bad habits are already embedded in how we do business. Instead of focusing first on programming, we can learn how to model and design. We can also learn how to use different modeling and design approaches together and in context to determine what a system will do and how it will do it. That is, we learn which techniques are appropriate in which contexts so that doing good modeling and design become second nature. Next, consider what happens at the end of the game. At the final whistle or bell, the players do not grab their belongings and head for home. Instead, they join the coach in the locker room to learn from the day’s play. They might watch videos of the game or simply recount to each other what they did and why. As they
discuss their moves, the reasons for them, and the success or failure of the plays, the coach builds abstractions— diagrams with Xs and Os to show what transpired. The players learn to see the game not only from their own perspectives but also from the perspective of others’ roles and responsibilities. The abstractions capture the essence of each play and become a set of patterns players can choose from in future games. Eventually, the players act as a team—invoking appropriate patterns based on past experience—rather than remaining a collection of individuals who happen to meet on the field. In software, we rarely perform such post-project analysis. We pack our bags and leave for the next project without stopping to analyze what we did well and what we did poorly. We build no abstractions of the requirements, designs, code, or tests to help us learn what was most effective. We create no templates or checklists to capture the best of our analysis and review. And we do not use abstractions or patterns to drill and make instinctual the best actions and activities.
Modeling and abstraction are key elements in quality improvement, and we want those skills to become instinctual. We cannot improve our products and processes if we do not stop to absorb lessons from the past. Gary Klein’s research on decision-making under pressure (Sources of Power: How People Make Decisions, MIT Press, 1998) tells us that the best decision-makers are those with rich and diverse “mental databases” of metaphors about which actions are most appropriate in which circumstances. Unless we take the time to analyze and abstract and to build our lessons learned into skill-, rule-, and knowledge-based expertise, we will never have the mental and technical tools to keep from “making a mess of it.”
Shari Lawrence Pfleeger is a senior researcher at RAND. Contact her at [email protected].
requirements Editor: Suzanne Robertson ■ The Atlantic Systems Guild ■ [email protected]
Requirements in the Medical Domain: Experiences and Prescriptions Aase Tveito and Per Hasvold
Aase and Per’s experiences give us some convincing examples to show that requirements engineers in the medical domain must have domain knowledge. What do you think—is it always necessary to have domain knowledge to be able to specify requirements? —Suzanne Robertson
Research shows that information flow in health care systems is inefficient and prone to error. Data is lost, and physicians must repeat tests and examinations because the results are unavailable at the right place and time. Cases of erroneous medication—resulting from misinterpreted, misunderstood, or missing information—are well known and have caused serious health problems and even death. We strongly believe that through effective use of information technology, we can improve both the quality and efficiency of the health sector’s work.

How do we feel today?
In most European countries, funding is available and political pressure exists to develop and introduce IT systems in hospitals and at general practitioners’ offices. Each year, health care organizations spend huge amounts of money and resources on IT systems. Sadly, too often we hear about failed systems that increase workload, lower productivity, add to bureaucracy, and reduce the time available for patients. Why does this happen? As software developers working in hospitals, we discovered an unbelievable gap between everyday hospital life and the programmer’s desk. We believe this gap makes requirements gathering and reviewing much harder. Because software experts don’t necessarily understand the environment in which their systems must work, they do not know what questions to ask. Alternatively, health care personnel lack the expertise to discover both errors and missing requirements. A large proportion of unconscious requirements exist (“Of course the PC must be sterilized, otherwise it can’t go in the intensive care unit”). Health care workers don’t always invest enough time in bringing out requirements, which often hampers requirements gathering and system introduction. This is not due to ill will or lack of enthusiasm, but to the average health care professional’s working situation. Carina Beckerman of Stockholm’s Handelshøgskolan followed an IT project’s introduction to a hospital and presented her findings at a European Group for Organizational Studies conference. She observed that, “People working in a hospital ward are constantly being interrupted. They are never able to plan their days in detail, and new situations occur all the time. Sometimes they are dramatic, sometimes they are barely a nuisance, but by and large, during a working day, there is no time to really concentrate on one thing.”

Does this hurt?
A failed IT project has many costs:

■ Actual money spent on the contract
■ Time consumed internally in the health care organization
■ The staff’s goodwill toward participation in IT projects
■ The problem remaining unsolved
■ Inefficient systems

Often, we only measure failures using the first item because the figure is usually readily available. The larger the project, however, the more the latter points contribute to the complete cost. We must investigate further to determine why projects fail and how to avoid this. Using our experiences, we will highlight some commonalities among the failures and some among the success stories. To exemplify, here are two true stories.

A story of disaster
In 1999, health officials introduced a new electronic patient record system (EPR) in a major European hospital. To save development time and cost, they procured and adapted an American system. They also decided to implement the system across several clinical departments—a “big bang” introduction. In addition to the anticipated introductory problems, the system carried cultural and organizational legacy that made integrating it into the organization difficult. American systems tend to look at each patient and what treatment he or she has received. Most European hospitals focus more on individual departments. Thus, an American system is not necessarily effective in Europe, even if it is excellent in the US. Additionally, the new system’s user interface did not fit with systems already in place in the European hospital, and the learning curve proved different from expected based on US installations. Administrators are typically reluctant to provide exact measures of how much a project costs, partly because calculating the exact cost is difficult and partly because the prestige and investments are so high that the project passes the point of no return. Summing up the total cost, therefore, is counterproductive and focuses attention on processes that cannot be undone. For this project, however, we know the excess costs of the EPR system were in the millions of dollars. Unforeseen differences between the US and Europe resulted in a longer introductory period, creating some of the additional costs. These differences required more consultancy hours to get the system up and running throughout the organization. Additionally, management did not sufficiently prepare for or understand the organizational changes, so (at least transitionally) efficiency suffered. It seems the managers believed the IT system would be a one-time investment and did not anticipate costs for maintenance and adaptation. The physicians and nurses made it clear that their reduced productivity was directly related to the extra work the new EPR caused. The irony is that the existence of a well-proven US system was the argument used to cut costs in the requirements and testing phases. Thus, the assessment of how well the EPR system would fit into a European hospital was done superficially to save cost and time. This kind of cost saving proved to be extremely expensive. This story is by no means unique. There are numerous examples such as this from any country. The actual failure point might vary, but, all too often, the system does fail.

Figure 1. Pictures from the University Hospital of North Norway. (a) This whiteboard is used to coordinate and inform nurses about which patients are admitted and why. (b) A nurse enters information in a patient’s chart. Nurses code the information using different colored pens to help process data and deal with information overload. These codes are often specific to the department.

A story of success
From 1992 to 2001, the University Hospital of North Norway gradually introduced a digitized radiology system. This has been successful despite many odds—including a low budget, inexperience with similar software development, and a generally conservative user group. Jan Størmer, the radiologist who commissioned the system, claims that
this success is due to some simple facts:
■ The system “grew” into place—first it was a small, custom-built image managing system, then a patient flow handling system, and, finally, a more upgraded common version in all 11 hospitals in northern Norway.
■ The system was based on a thorough understanding of the users’ needs, and the software developers were situated in the department during development and pilot use.

What started as a small computer program that served one hospital department grew into a system that serves all the hospitals in northern Norway, and the hospitals share both the image archives and the radiologists’ expertise over the network. This gradual approach from tiny to big let developers alter the system based on user feedback without spending large sums and involving too many users. The more robust and well-tested system was then introduced to a wider user group.

Take these pills twice each day
We believe that insufficient requirements gathering and misunderstanding the requirements process cause many IT health project failures. The requirements specification aims to ensure that the developed system is what everyone expects it to be. That is, the process is twofold: understand the expectations and explain which part of them your system will meet. To achieve these goals, the requirements specification must be both readable for health care workers and clear enough to developers to avoid erroneous interpretations. This means formulating the requirements in natural language, free of IT jargon, while maintaining consistency and clarity. Several existing models, such as the Volere, encourage this by describing the require-
ment in specific as well as more general terms.1 The Sophist approach to testing the requirements’ ambiguity and clarity lets you go through each and every requirement and test it in several ways (see www.sophist.de). Both methods, however, require sufficient resources in the requirements phase. We have some suggestions on how to gather requirements to achieve as good an understanding of the problem as possible. Discover system requirements We must invest the necessary resources, time, and effort to ensure that the system’s requirements are discovered. Very few systems will solve all problems the user identifies, and one part of the job is helping the user prioritize conflicting requirements. We can seldom deliver the perfect solution, and only in close cooperation with the users can we deliver the best possible one. Time and money are often short, but initial efforts to cut costs and time by not leaving sufficient room for requirements could prove to be a pill with adverse effects at a high cost. Make requirements understandable We can use scenarios and characters to make requirements understandable and alive to developers. Better yet, we should use apprenticing2 to bring developers close to the future system users. Scenario descriptions and apprenticing can close the gap between health care and system development. At the Norwegian Centre for Telemedicine, we have experience involving nurses in creative work around the concept of uniforms with built-in information technology. We started by observing and photographing the nurses at work. We discovered that the nurses used a whiteboard to record information about the patients (see Figure 1a). Because nurses often needed the information when they were not near the whiteboard, this was our first candidate for a computer-supported collaboration system. We used PDAs to access
a common database that replaced and expanded the whiteboard information. We also observed how nurses code information (see Figure 1b). This gave us insight into how the nurses communicated and what information they considered important. Indirectly, we also further understood how nurses view their roles in relation to other nurses and health care workers. We discussed the observations, and the technical team generated some ideas for improvements that were later presented in grounded brainstorming sessions. We also tested some of our ideas and examined possible pros and cons. Additionally, we ran numerous role-playing sessions where nurses improvised plays introducing existing and future technology. The reactions to the technologies and the ideas that came from these sessions gave us a deeper understanding of how nurses relate to their work and to technology. This was useful when we later came up with specific ideas for information and communication systems for the surgical nurses at the University Hospital of North Norway. In the development phase, we used the scenario descriptions and use cases3 as important parts of our requirements. We have also successfully used scenarios and storytelling techniques to help engineers and users understand each other. Through simple stories, we can often create a better understanding of the goals for the workers’ activities and thus help the developer understand which features are important and which are not. Work with the organization Introducing an IT system in an organization often affects work activities in more ways than just changing specific tasks or actions. IT systems often challenge the activity and the participating actors’ roles and sometimes open the door for new procedures and new types of collaboration and communication. In very hierarchical organizations, such as healthcare institutions, even slightly disturbing the balance might cause mixed reactions and could significantly af-
fect even small and simple systems. This means that we must work with the organization’s various groups, departments, or users, prepare them and ourselves for changes, and try to identify people who feel their roles in the organization are challenged.
Introducing a new system might shift power from old to young, from doctor to nurse, or from medical staff to administration. Few people appreciate loss of power, but even fewer will admit that the loss of power is why they resist the new system. Thus, we must work hard to bring this into the open and help people realize that a new system doesn’t have to threaten their positions. Again, knowledge and understanding of a hospital’s organizational structure, both official and hidden, is necessary if the system’s introduction is to be successful.
References
1. S. Robertson and J. Robertson, Mastering the Requirements Process, Addison-Wesley, Boston, 1999.
2. H. Beyer and K. Holtzblatt, Contextual Design, Morgan Kaufmann, San Francisco, 1998.
3. A. Cockburn, Writing Effective Use Cases, Addison-Wesley, Boston, 2001.
Aase Tveito was a section manager at the Norwegian Centre for Telemedicine (NST) and is now the general manager of iMed Norwegian Telemedicine, a spin-off of NST. Contact her at [email protected] or [email protected].
Per Hasvold is a research scientist at the Norwegian Centre for Telemedicine. Contact him at [email protected].
feature
requirements engineering
Is the European Industry Moving toward Solving Requirements Engineering Problems? Natalia Juristo, Ana M. Moreno, and Andrés Silva, Universidad Politécnica de Madrid
Requirements engineering is critical for successful software development. Nowadays, software development organizations are not likely to question the importance of issues related to requirements management (RM) and specification. However, despite its importance, the requirements process has traditionally been connected with a host of problems. Frederick Brooks used the two Aristotelian categories, essential and accidental, to classify these problems.1 As systems become more
Years ago, several surveys raised concerns about problems in requirements engineering practice. Has the industry progressed since?
complex, software developers can do little to overcome essential difficulties such as software invisibility or pressure for change. However, several surveys have highlighted principal flaws in the requirements process that can be linked to accidental difficulties such as tool integration or bad documentation.2,3 Not only are these problems solvable, they’re also often ones that researchers have already addressed. For years, researchers have conducted requirements engineeringrelated surveys, revealing problems and identifying potential solutions. Yet according to our own survey, RE problems persist. We contacted RE practitioners from European organizations to analyze how much progress European software development organizations have made in RE. Unlike other surveys, we don’t just point out RE problems. Our results call attention to the gap be-
tween current RE practice and published solutions and to the poor communication between researchers and practitioners. Problems identified in previous RE surveys In 1998, Bill Curtis, Herb Krasner, and Neil Iscoe conducted one of the first RE surveys,4 providing information on critical development issues (see Table 1 for brief descriptions of the surveys discussed here). Their field study of 10 organizations suggested that information on project functionality and the ease with which people could change this information were key to application success. Many surveys have also identified incorrect tool use as an important issue. In 1993, Mitch Lubars, Colin Potts, and Charlie Richter surveyed 10 US software develop0740-7459/02/$17.00 © 2002 IEEE
Table 1 Requirements engineering surveys Researchers
ment organizations and found that “the most obvious documentation tools are word processing packages,” which is a poor choice for a documentation tool.2 Two years later, Khaled El Emam and Nazim Madhavji identified the same problem at several Canadian organizations, saying proper tool use is one of the seven key issues for RE success.5 This was still an issue in 2000, when Uolevi Nikula, Jorma Sajaniemi, and Heikki Kälviäinen surveyed 12 Finnish software development organizations and found that “no company used ... RM tools, and even tools such as templates, checklists, and metrics were in standard use in one or two companies only.”6 Furthermore, just last year, Humber Hofmann and Franz Lehner’s field study found that “the most common tool used during RE was an internal Web site,” used to post and maintain the requirements.7 Another RE problem has been the lack of proper documentation in software requirements specifications (SRS). Lubars, Potts, and Richter found that documentation was excessively formal and detailed in customerspecific projects but specifications were informally expressed in market-driven projects.2 Similarly, Nikula, Sajaniemi, and Kälviäinen found that “the decision whether the requirements document is created or not depends on many factors.”6 Erik Kamsties, Klaus Hormann, and Maud Schlich also detected this shortcoming in their survey of 10 small- to medium-sized European software organizations: “Only when subcontracted are SRSs done seriously.”3 Similarly, some findings made in the context of the REAIMS (RE adaptation and improvement for safety and dependability) Esprit Project concern the importance of proper requirements documentation. This project led to Requirements Engineering: A Good Practice Guide, which identifies guidelines for successful RE.8 The importance of properly performing an SRS underpins most of the guidelines. User involvement during the requirements process is paramount, but this is yet another stumbling block in software development. The Chaos Report series, from 1994 to 2001 (see, for example, www.standishgroup.com/ sample_research/chaos_1994_1.php), revealed that user involvement is one of the two main success factors in software development (the other is executive support). It consistently found low user involvement in
Researchers | Purpose (a) | Mechanism used | Analysis
Curtis, Krasner, and Iscoe (4) | Prescriptive | Case studies | Qualitative
Gotel and Finkelstein (9) | Prospective | Multipronged | Qualitative
El Emam and Madhavji (5) | Prescriptive | Case studies | Quantitative
Hofmann and Lehner (7) | Prescriptive | Questionnaire | Quantitative
Kamsties, Hormann, and Schlich (3) | Prescriptive | Multipronged | Qualitative
Lubars, Potts, and Richter (2) | Prescriptive | Case studies | Qualitative
Nikula, Sajaniemi, and Kälviäinen (6) | Descriptive | Questionnaire | Quantitative
Ramesh (10) | Prescriptive | Multipronged | Quantitative
a. Descriptive surveys try to determine current practices; prescriptive surveys identify good, or bad, practices; and prospective surveys ascertain future needs that would further research.
failed or challenged projects. Other surveys identified similar results.5,7 Another problem detected in several surveys is traceability. In general, we might distinguish between prespecification traceability and postspecification traceability—the first links requirements to their sources (users, documents, and so forth), and the latter links requirements to development artifacts.9 Developers traditionally address only postspecification traceability. Balasubramanian Ramesh analyzed 26 organizations and concluded that postspecification traceability—but not prespecification traceability—is a general attribute.10 Stakeholders see this lack of traceability as hurting the project.7 Our survey We applied a method similar to that used in many other surveys. We contacted more than 150 practitioners from European organizations to provide an overview of the current situation, without emphasizing statistical data. The size of the organizations and of their products varied, but we considered all of them as representative developers of the applications that are shaping the information society, from embedded systems to Internet applications. We chose to contact practitioners on the basis of their involvement in RE. Most had a medium-to-high level of responsibility in the RE process at their companies, from a software development viewpoint (our set of respondents did not include marketing or sales personnel). We secured responses from 11 organizations in seven European countries. Seven of the organizations were small to mediumsized (five to 100 employees), and four were larger. The 11 organizations developed software for these domains: consumer electronics with embedded software, IT products for the health-care-systems market, software and sysNovember/December 2002
Figure 1. The organizations, sorted by their number of requirements. [Bar chart omitted: number of organizations (vertical axis, 1–4) per number of requirements (100–500, 500–1,000, and >1,000).]
Table 2. Questions used in the survey

Current profile:
  Briefly describe the methods and tools used.
  Describe the advantages of current methods and tools.
  Describe the disadvantages of current methods and tools.
  Are current methods and tools well suited for dealing with current and typical applications?
  Describe the RE life cycle.
  Describe how you integrate the RE process with other business processes.
  Who is involved in RE tasks (system engineers, marketing personnel, requirements engineers, and so on)?

Adoption:
  Have people or processes affected by changes been correctly identified?
  Has any guide or translation package been used?
  Have people been properly trained?
  Is there still some remaining resistance? Why?
  Stability of current practice: When was the last change made? Why?

Sources of requirements:
  Are there problems identifying users and/or stakeholders?
  What impact do standards, certification, and COTS have on the RE process and products?

Dependability requirements:
  How do you manage your dependability requirements?
  How do you manage the trade-off between dependability needs and available dependability?
The 11 organizations developed software for these domains: consumer electronics with embedded software, IT products for the health-care-systems market, software and systems targeting the industrial and public sector, Web multimedia applications, aircraft systems, virtual consultancy for e-commerce, smartcards, software tools for client-server application development, cryptography, intellectual-property-rights-handling software, and software for electrical-network maintainability. (We can't, however, reveal the identity of the respondent organizations.)

Figure 1 illustrates the approximate size of the systems these organizations built in terms of the number of requirements. Although this measure is not precise, it ensures that we are considering organizations that cover a broad spectrum of software size and complexity.

Table 2 shows the questions we used to gather information, which we designed to address the key issues the earlier surveys
detected: tool misuse, improper requirements documentation, low user involvement, and nontraceability. We also included two more points: adopting new techniques and dependability.

Adopting new requirements techniques and tools is essential for process improvement and technology transfer.11 However, industrial uptake of RE technology has rarely lived up to expectations,6,12 so we wanted to determine how conscious organizations were of its importance. We also asked about dependability because numerous services and products, based on both the Internet and the massive ubiquitous deployment of embedded systems, are used in many areas, including health care, transport, finance, commerce, and public administration. These areas have significant dependability implications, embracing security, safety, reliability, availability, and survivability.13

Current practice
Our survey confirmed that immaturity still defines current practices. Although the questionnaire distinguished between methods and tools, the responses clearly indicate that the two concepts are used interchangeably. Some respondents described their "method" as "tool X," which is not surprising because tools drive or enhance methods. However, our results indicated that organizations are better informed about requirements tools than previously reported.2,3,5

Most organizations reported using tools such as word processors for specifying and managing requirements, and more than 30 percent used only these tools. Organizations reported that, for applications with relatively few requirements, these tools had the advantage of simplicity. However, for those with many requirements (approximately 1,000) where word processors were the main tool, the disadvantages outweighed the advantages: lack of scalability, no baseline, and so forth. Not surprisingly, the organizations using either in-house or commercial requirements tools (approximately 70 percent) worked on larger applications (over 1,000 requirements). This shows that industry realizes that requirements tools are useful for large projects. However, as expected (and as identified elsewhere8), these organizations pointed out that no single tool is valid for the whole
process, and tool integration poses a considerable obstacle to efficiency.

Our results are similar to those of other surveys concerning the lack of proper SRSs, especially in market-oriented applications.2,3,6 We attribute this to the difficulty of finding specific users for this sort of application. Also, although most companies reported having no problems identifying their systems' users and stakeholders, some reported that this didn't mean those users were involved in the process or had clearly defined roles and responsibilities. Furthermore, our survey also confirmed general problems related to a lack of traceability and supported Ramesh's findings about postspecification and prespecification traceability.10 Only one organization mentioned how difficult it is to deal with prespecification traceability.

Adoption
At least six organizations had recently introduced new practices, although the scope of these changes varied among organizations. The introduction of tools and traceability-related issues were prominent, but nobody reported using transition packages to introduce new technologies. The organizations did not report significant problems in identifying the people and processes affected by the introduced practices.

However, there was some discrepancy among organizations concerning the changes' effects. Half the organizations that had introduced improvements stated that reorganization did not have dramatic effects. In fact, one organization (in the health-care industry) reported that it was "always ready for change," so the effects of reorganizing tasks were never dramatic. Other organizations, however, indicated that they met with resistance from project managers, who thought that changes could wreck their schedules. For example, the respondent from one multinational (consumer electronics) company said that even though current practices in different business units evolve at a different pace, this is not without problems: "People are so used to their existing ways of working that new RE practices must be fitted to them." Consistent with this, Hofmann and Lehner's study also found that users perceived tool adoption as interfering with, rather than supporting, their current activities.7
Requirements sources
The survey results indicated that organizations must consider a multiplicity of requirements sources, including internal sources such as the marketing department, product managers, and sales personnel, and customers and related sources, such as users and help desks. They must also consider constraints, such as:

■ Standards: Three-quarters of the surveyed organizations considered the impact of standards to be important. Standards are useful mainly for organizing various parts of the process, because they provide a list of things to remember, guides for doing tasks, documentation, and so forth.
■ Laws: Organizations should anticipate future changes in laws and adapt their products accordingly.
■ Certification: Certification has less impact on requirements. One-third of the respondents seemed concerned about certification issues (such as ISO 9000).
■ Commercial-off-the-shelf components: Respondents reported that using COTS components changed the requirements process, because the focus shifted from needs that the developer had to satisfy to needs that available COTS components should satisfy.
Identifying all requirements sources has been a successful practice.7 However, the multiplicity of requirements sources increases RE complexity. Managing multiple documents and sources of sometimes-conflicting information can become overwhelming.

Dependability requirements
Our respondents were interested in dependability issues for their products. We can summarize the main dependability-related finding as the difficulty in quantitatively establishing dependability. The reason for this is that dependability is subordinate to a variety of elements:

■ Architectural components: These components' final characteristics are often unknown early in the project. Particular difficulties arise when an external company develops these components.
■ Available technology: It is essential to understand what you can achieve with current technology at a reasonable cost. However, as technology quickly evolves, you cannot know all of its characteristics at the start of development.
■ System interaction: A medical application might operate perfectly, but another system with which it operates might supply inaccurate information. Developers should clearly establish responsibilities in case something fails.
So, unless a contractor imposes dependability levels, you cannot clearly express a dependability level when a project starts. Also, developers cannot always create a fixed prioritization of dependability requirements: priorities might change as you gather new information. Some of these practices for dealing with dependability requirements demonstrate that it is difficult to follow the advice given in much of the RE literature, which recommends expressing nonfunctional requirements quantitatively.8

Survey implications
Here, we present guides for solving some of the problems found. These might not be the best solutions, and we can't guarantee what improvements a particular solution will provide, but many people in the RE community advocate these well-known solutions. The fact that they have not been adopted clearly indicates the need to improve both technology transfer and industrial uptake.

Requirements techniques
Our survey revealed that requirements techniques are not used enough for either elicitation or negotiation. Elicitation techniques have been available for some years, but organizations seem unfamiliar with them, which means that knowledge is still not being transmitted effectively. Possible solutions would be to use transition packages,11 promote training within organizations,10 or use outside consultants.6

Our survey and others also detected that formal SRS documents are not defined, which generates much extra work. However, the definition of an SRS raises many critical questions, including how detailed the SRS should be. Ian Sommerville and Pete Sawyer provide guides for addressing this problem, taking into account that the
detail level depends on whether the project is in-house or subcontracted.8

Requirements are often written in natural language, but this should not impede high-quality documentation or the use of tools to help analyze that documentation. Quality can be achieved in natural-language requirements documents by using style guides.14 There are also semiautomated techniques for analyzing natural-language requirements, sometimes requiring a strict syntax aimed at easing the analysis.15 One method suggested for providing some degree of formality in traceability is Quality Function Deployment (QFD).2,16 Orlena Gotel and Anthony Finkelstein found that prespecification traceability is a significant issue9 and proposed using contribution structures that help capture the network of people who participate in RE.17 Ramesh has identified more specific factors that improve, and impede, traceability.10 Any organization should be able to improve its traceability practices by supporting the improvement factors and trying to correct the impeding factors.

Requirements tools
The main problem with multiple-tool approaches is the lack of tool integration. There does not appear to be any definite solution to this problem, apart from tool developers taking these issues into account to build more capable tools. Ideally, requirements tools should incorporate features such as capacity for large volumes of documentation, multiple levels of formality, traceability, configuration management, and model simulation.2

Another problem is how to select the tools best suited to the process and class of applications. There is public information on the characteristics and potential of commercial tools (see www.incose.org/tools/tooltax.html), but this is not enough. Organizations should use this information merely as a basis for conducting more specific studies using in-house criteria. (Merlin Dorfman offers advice on how to introduce tools into the organization.18) Organizations need processes for selecting appropriate methods, which should consider, among other things, tool availability. Only then should the organization purchase tools. Transition packages, including training and consultancy
services, can be a great aid for RM tool adoption.11 Support from tool manufacturers during tool deployment is helpful, but other solutions exist. In our survey, one large organization reported having a group of people who provided internal consultancy services to different company units. These people were well acquainted with the characteristics of the tools on the market and had a sound knowledge of the organization, which ensured informed decisions regarding the suitability of available tools for the organization's projects. However, this approach is cost-effective only in large organizations.

Requirements sources
Donald Gause and Gerald Weinberg proposed requirements techniques that raise user involvement in the process,19 and approaches such as usage-centered design can help.20 Although suited to the development of bespoke systems, these techniques do not completely solve the problem for market-driven or Web-based applications. These situations require marketing-related techniques, such as portfolio-based techniques21 or QFD.16

Another source to consider originates from using COTS components. This involves relating the requirements of the applications to be built to the characteristics of available COTS components. Some solutions model both customer requirements and software products (including a model of their complex interdependencies) and use multicriteria decision-making techniques for COTS selection.22 Other solutions propose weighted averages for calculating a COTS requirements coverage ratio, further refined by user evaluation of the product in particular scenarios23 (a brief sketch of this weighted-coverage idea follows the list below). To overcome the lack of compatibility between packages and the lack of control over COTS evolution, Barry Boehm and Chris Abts have offered several maxims:
■ Do not prematurely commit to a combination of COTS packages.
■ Try to achieve COTS substitutability.
■ Avoid tightly coupled, independently evolving COTS components.
■ Try to establish strategic partnerships with COTS vendors to secure their continuous support.24
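To make the weighted-coverage idea concrete, here is a minimal sketch. It is an illustration only, not the procedure from the cited work; the requirement names, weights, and satisfaction scores are invented for the example.

```python
# Minimal sketch of a weighted COTS requirements-coverage ratio.
# All requirement names, weights, and satisfaction scores below are
# hypothetical; real values come from stakeholder prioritization and
# hands-on evaluation of each candidate product.

requirements = {            # requirement -> weight (relative importance)
    "single sign-on": 5,
    "audit logging": 3,
    "localization": 2,
}

candidates = {              # product -> degree each requirement is met (0.0-1.0)
    "ProductA": {"single sign-on": 1.0, "audit logging": 0.5, "localization": 0.0},
    "ProductB": {"single sign-on": 0.5, "audit logging": 1.0, "localization": 1.0},
}

def coverage_ratio(scores):
    """Weighted coverage: sum(weight * satisfaction) / sum(weight)."""
    total_weight = sum(requirements.values())
    achieved = sum(w * scores.get(req, 0.0) for req, w in requirements.items())
    return achieved / total_weight

for product, scores in sorted(candidates.items(),
                              key=lambda item: coverage_ratio(item[1]),
                              reverse=True):
    print(f"{product}: coverage = {coverage_ratio(scores):.2f}")
```

Run as written, the sketch ranks ProductB (coverage 0.75) above ProductA (0.65); changing a single weight can reverse the ranking, which is exactly the sensitivity that the subsequent scenario-based user evaluation should check.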
Dependability requirements
Precisely quantifying dependability attributes faces two main obstacles:

■ Determining what you can achieve with current technology and using that information as input for the requirements process. A risk is never acceptable if it can be easily reduced with available technology.
■ Specifying the dependability level. The desired dependability level can be unrealistically specified and lead to long delays and increased costs.
A criterion such as Alarp (as low as reasonably practicable) could replace the strict quantification of risks,25 considering the state of the art, particularly for safety and security issues. A solution used in environmental contexts for some time under the acronym Batneec (best available technology not entailing excessive cost),26 now included in many standards for environmental protection, means that product developers should define a tolerable region of risk on the basis of the available technology's capabilities and costs. This process involves extensive negotiation with certification authorities and with technology developers. Safety-critical software has borrowed many ideas from other engineering fields;27 to our knowledge, however, there is no guidance yet on applying the Batneec principle.

Some suggest using risk analysis techniques to deal with potential risks in safety-critical systems,27 with the aim of establishing an adequate protection level. However, these approaches implicitly assume that once the protection level is fixed, it will remain fixed. Developers need some flexibility to deal with possible variances in cost or technology availability. For example, one surveyed organization identified gaps between highly desirable and probably achievable dependability levels at the project's start. The organization took further actions to narrow this gap until it reached an adequate balance. Rather than establishing a fixed protection level, it moved between flexible upper and lower bounds that defined the cost-effective and risk-tolerable region. However, it carried out this process ad hoc.
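As a rough illustration of the flexible-bounds idea just described, the following sketch classifies an estimated risk figure against an upper bound (above which the risk is intolerable) and a lower bound (below which it is broadly acceptable); everything in between is the region where risk should be reduced as far as is reasonably practicable given available technology and cost. The bound values and risk estimates are invented placeholders, not figures from the survey; in practice they would be negotiated with certification authorities and technology suppliers.

```python
# Sketch of an Alarp-style tolerability check with flexible bounds.
# The bounds and the sample risk estimates (say, expected dangerous
# failures per year) are hypothetical placeholders.

UPPER_BOUND = 1e-3   # above this, the risk is intolerable regardless of cost
LOWER_BOUND = 1e-6   # below this, the risk is broadly acceptable

def classify(risk):
    if risk > UPPER_BOUND:
        return "intolerable: redesign or abandon the function"
    if risk > LOWER_BOUND:
        # Alarp region: keep reducing the risk while further reduction is
        # achievable with available technology at reasonable cost.
        return "tolerable only if reduced as low as reasonably practicable"
    return "broadly acceptable"

for estimate in (5e-3, 2e-5, 1e-7):
    print(f"{estimate:.0e} -> {classify(estimate)}")
```

Because the bounds are explicit values rather than hard-coded judgments, they can be renegotiated as knowledge of costs and technology improves, which is the flexibility the surveyed organization achieved only in an ad hoc way.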
Requirements Engineering Resources

The Atlantic Systems Guild: This site provides a collection of requirements and software engineering resources. www.systemsguild.com/GuildSite/Guild/resources.html
The Chaos Report series: These widely cited reports demonstrate that RE problems are the major causes of failure in software development projects. www.standishgroup.com
The Good Practice Guide on RE: The GPG Web site at Lancaster University offers guidelines for RE. www.comp.lancs.ac.uk/computing/resources/re-gpg
ICRE and RE conferences: The International Conference on Requirements Engineering and the International Symposium on Requirements Engineering are traditional forums. These two conference series on RE have been united as the IEEE Joint International Requirements Engineering Conference in 2002. www.re02.org
INCOSE: The International Council on Systems Engineering provides good information on commercial RE tool features. www.incose.org
Karl Wiegers' site: This site provides a good collection of downloadable RE tools and templates. www.processimpact.com
Another problem identified is how to set priorities for requirements related to dependability issues. By their very nature, these are outstanding issues, but their relative importance could change as we learn more about costs, limitations, and the technical risk of available technologies. In this case, we need flexibility to set priorities. Karl Wiegers offers a useful schema for trade-offs between requirements, taking into account multiple factors.28

Dependability also includes security issues that greatly influence the requirements process, particularly for e-business software development. The lack of information about what is really needed when the project kicks off could be counteracted through a framework of some sort that could be used to discover security-related requirements. For example, Sarah Jones and her colleagues' proposal29 considers this trade-off between cost and acceptable risk and, simultaneously, helps to achieve a complete set of high-level requirements related to trust and dependability in e-business systems. Again, the suitability of the decisions made will depend on a thorough knowledge of available technologies.
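To show how such a trade-off can be recomputed as knowledge improves, here is a deliberately simplified prioritization sketch: each requirement's relative value is divided by a weighted sum of its estimated cost and technical risk. This is not the schema from the cited book; the requirements, scales, and figures are invented for illustration.

```python
# Toy requirement trade-off: priority = value / (cost + risk), all on a 1-9 scale.
# The requirements and all figures are invented for illustration.

requirements = [
    # (name, value to stakeholders, implementation cost, technical risk)
    ("encrypt all cardholder data", 9, 6, 7),
    ("audit trail for payments",    7, 4, 3),
    ("99.9 percent availability",   8, 8, 8),
]

def priority(value, cost, risk, cost_weight=1.0, risk_weight=1.0):
    return value / (cost_weight * cost + risk_weight * risk)

for name, value, cost, risk in sorted(requirements,
                                      key=lambda r: priority(*r[1:]),
                                      reverse=True):
    print(f"{name}: priority = {priority(value, cost, risk):.2f}")
```

When better estimates of cost or technical risk arrive later in the project, only the input numbers change and the ranking is recomputed, rather than being frozen at project start.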
In the last year, awareness of requirements problems has increased, new books on RE have been published, more tools are available, and it seems that RE is getting more resources than before (see the "Requirements Engineering Resources" sidebar).7 As the Chaos Report series demonstrates, the percentage of successful projects increases each year, but 45 to 65 percent of projects are still over budget and schedule. Improvements are occurring, but we are still far from achieving the level of performance that would be desirable in an engineering discipline. This immaturity has two potential solutions. On the industry side, technology uptake could be more proactive; on the research side, there could be more elaborate and practical guidelines for packaging and transferring recommendations to industry, including real evaluations of the advantages these technologies provide.
Acknowledgments
We performed our survey at the Joint Research Centre of the European Commission, and the Requirements Engineering Network of International Cooperating Research Groups (RENOIR, Esprit project 20800) funded it. We are grateful to RENOIR for its support and to Philip Morris at the JRC.
References
1. F.P. Brooks, "Essence and Accidents of Software Engineering," Computer, vol. 20, no. 4, Apr. 1987, pp. 10–19.
2. M. Lubars, C. Potts, and C. Richter, "A Review of the State of the Practice in Requirements Modeling," IEEE 1st Int'l Symp. Requirements Eng. (RE 93), IEEE CS Press, Los Alamitos, Calif., 1993, pp. 2–14.
3. E. Kamsties, K. Hormann, and M. Schlich, "Requirements Engineering in Small and Medium Enterprises: State-of-the-Practice, Problems, Solutions and Technology Transfer," Proc. Conf. European Industrial Requirements Eng. (CEIRE 98), British Computer Soc., London, 1998, pp. 40–50.
4. B. Curtis, H. Krasner, and N. Iscoe, "A Field Study of the Software Design Process for Large Systems," Comm. ACM, vol. 31, no. 11, Nov. 1988, pp. 1268–1287.
5. K. El Emam and N.H. Madhavji, "A Field Study of Requirements Engineering Practices in Information Systems Development," IEEE 2nd Int'l Symp. Requirements Eng. (RE 95), IEEE CS Press, Los Alamitos, Calif., 1995, pp. 68–80.
6. U. Nikula, J. Sajaniemi, and H. Kälviäinen, A State-of-the-Practice Survey on Requirements Engineering in Small-and-Medium-Sized Enterprises, tech. report, Lappeenranta Univ. of Technology, Lappeenranta, Finland, 2000; www.cs.ucl.ac.uk/research/renoir/TBRC_RR01.pdf.
7. H.F. Hofmann and F. Lehner, "Requirements Engineering as a Success Factor in Software Projects," IEEE Software, vol. 18, no. 4, July/Aug. 2001, pp. 58–66.
8. I. Sommerville and P. Sawyer, Requirements Engineering: A Good Practice Guide, John Wiley & Sons, New York, 1998.
9. O. Gotel and A. Finkelstein, "An Analysis of the Requirements Traceability Problem," Proc. 1st Int'l Conf. Requirements Eng. (ICRE 94), IEEE CS Press, Los Alamitos, Calif., 1994, pp. 94–101.
10. B. Ramesh, "Factors Influencing Requirements Traceability Practice," Comm. ACM, vol. 41, no. 12, Dec. 1998, pp. 37–44.
11. P. Fowler et al., "Transition Packages: An Experiment in Expediting the Introduction of Requirements Management," Proc. 3rd IEEE Int'l Conf. Requirements Eng. (ICRE 98), IEEE CS Press, Los Alamitos, Calif., 1998, pp. 138–147.
12. P. Morris, M. Masera, and M. Wilikens, "Requirements Engineering and Industrial Uptake," Proc. Int'l Conf. Requirements Eng. (ICRE 98), IEEE CS Press, Los Alamitos, Calif., 1998, pp. 130–137.
13. Defining the European Dependability Initiative, Joint Research Centre of the European Commission, 1999; http://deppy.jrc.it.
14. B.L. Kovitz, Practical Software Requirements: A Manual of Content and Style, Manning Publishing, Greenwich, Conn., 1999.
15. B. Macias and S.G. Pulman, "Natural Language Processing for Requirements Specification," Safety Critical Systems, Chapman and Hall, London, 1993.
16. S. Haag, M.K. Raja, and L.L. Schkade, "Quality Function Deployment: Usage in Software Development," Comm. ACM, vol. 39, no. 1, Jan. 1996, pp. 41–49.
17. O. Gotel and A. Finkelstein, "Extended Requirements Traceability: Results of an Industrial Case Study," Proc. IEEE 3rd Int'l Symp. Requirements Eng. (RE 97), IEEE CS Press, Los Alamitos, Calif., 1997, pp. 169–178.
18. M. Dorfman, "Requirements Engineering," Software Requirements Eng., 2nd ed., R. Thayer and M. Dorfman, eds., IEEE CS Press, Los Alamitos, Calif., 1997, pp. 7–22.
19. D.C. Gause and G.M. Weinberg, Exploring Requirements: Quality before Design, Dorset House Publishing, New York, 1989.
20. L.L. Constantine and L.A.D. Lockwood, Software for Use: A Practical Guide to the Models and Methods of Usage-Centered Design, Addison-Wesley, Boston, 1999.
21. S. Sivzattian and B. Nuseibeh, "Linking the Selection of Requirements to Market Value: A Portfolio-Based Approach," Proc. 7th Int'l Requirements Eng.: Foundation for Software Quality (REFSQ 01), Essener Informatik Beiträge, Essen, Germany, 2001, pp. 202–213.
22. N.A.M. Maiden and C. Ncube, "Acquiring COTS Software Selection Requirements," IEEE Software, vol. 15, no. 2, Mar./Apr. 1998, pp. 46–56.
23. P.K. Lawlis et al., "A Formal Process for Evaluating COTS Software Products," Computer, vol. 34, no. 5, May 2001, pp. 58–63.
24. B. Boehm and C. Abts, "COTS Integration: Plug and Pray?" Computer, vol. 32, no. 1, Jan. 1999, pp. 135–138.
25. N. Storey, Safety-Critical Computer Systems, Addison-Wesley, Boston, 1996.
26. Batneec Guidance Notes, Environmental Protection Agency of Ireland, Johnstown Castle Estate, Wexford, Ireland, 2001; www.epa.ie/licences/batneec.htm.
27. N. Leveson, Safeware: System Safety and Computers, Addison-Wesley, Boston, 1995.
28. K. Wiegers, Software Requirements, Microsoft Press, Buffalo, N.Y., 1999.
29. S. Jones et al., "Trust Requirements in E-Business," Comm. ACM, vol. 43, no. 12, Dec. 2000, pp. 81–87.
About the Authors

Natalia Juristo is a full professor of computer science at the Universidad Politécnica de Madrid. Her research interests include requirements engineering, software engineering, the intersection of software engineering and knowledge engineering, and software process evaluation. She received her PhD in computer science from the Universidad Politécnica de Madrid. She is a senior member of the IEEE. Contact her at [email protected]; www.ls.fi.upm.es/UDIS/miembros/natalia.

Ana M. Moreno is an associate professor at the Universidad Politécnica de Madrid. Her research interests include defining a conceptual model independent of the development approach, transforming textual requirements into object-oriented conceptual models, requirements engineering, and the intersection of software engineering and knowledge engineering. She received her PhD in computer science from the Universidad Politécnica de Madrid. Contact her at [email protected]; www.ls.fi.upm.es/UDIS/miembros/amoreno.

Andrés Silva is an assistant professor at the Universidad Politécnica de Madrid. His research interests include requirements engineering for emergent applications, viewpoint-based requirements engineering, and knowledge management. He received his PhD in computer science from the Universidad Politécnica de Madrid. Contact him at asilva@fi.upm.es; www.ls.fi.upm.es/UDIS/miembros/asilva.