Lecture Notes in Computer Science Edited by G. Goos, J. Hartmanis, and J. van Leeuwen
2418
Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Tokyo
Don Wells Laurie Williams (Eds.)
Extreme Programming and Agile Methods – XP/Agile Universe 2002 Second XP Universe and First Agile Universe Conference Chicago, IL, USA, August 4-7, 2002 Proceedings
Series Editors Gerhard Goos, Karlsruhe University, Germany Juris Hartmanis, Cornell University, NY, USA Jan van Leeuwen, Utrecht University, The Netherlands Volume Editors Don Wells 4681 Brockham Way, Sterling Heights, MI 48310, USA E-mail:
[email protected] Laurie Williams North Carolina State University, Department of Computer Science 1010 Main Campus Road, 407 EGRC, Raleigh, NC 27695, USA E-mail:
[email protected]
Cataloging-in-Publication Data applied for Die Deutsche Bibliothek - CIP-Einheitsaufnahme Extreme programming and agile methods - XP, agile universe 2002 : proceedings / Second XP Universe and First Agile Universe Conference, Chicago, IL, USA, August 4 - 7, 2002. Don Wells ; Laurie Williams (ed.). Berlin ; Heidelberg ; New York ; Barcelona ; Hong Kong ; London ; Milan ; Paris ; Tokyo : Springer, 2002 (Lecture notes in computer science ; Vol. 2418) ISBN 3-540-44024-0
CR Subject Classification (1998): D.1, D.2, D.3, F.3, K.4.3, K.6 ISSN 0302-9743 ISBN 3-540-44024-0 Springer-Verlag Berlin Heidelberg New York This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law. Springer-Verlag Berlin Heidelberg New York a member of BertelsmannSpringer Science+Business Media GmbH http://www.springer.de © Springer-Verlag Berlin Heidelberg 2002 Printed in Germany Typesetting: Camera-ready by author, data conversion by Olgun Computergrafik Printed on acid-free paper SPIN 10873780 06/3142 543210
Preface
The second XP Universe and first Agile Universe brought together many people interested in building software in a new way. Held in Chicago, August 4–7, 2002, it attracted software experts, educators, and developers. Unlike most conferences, the venue was very dynamic. Many activities were not even well defined in advance. Discussions were encouraged to be spontaneous. Even so, there were some written words available, and you are holding all of them now. We have collected as much material as possible into this small volume. It is just the tip of the iceberg, of course: a reminder to us of what we learned, the people we met, and the ideas we expressed. The conference papers, including research and experience papers, are reproduced in these proceedings. Forty-one (41) papers were submitted. Each submitted paper received three reviews by program committee members. The program committee consisted of 40 members. Papers submitted by program committee members were refereed separately, ensuring that reviewers could provide honest feedback not seen by the paper submitters. In many cases, the program committee shepherded authors to significantly improve their initial submissions prior to completing the versions contained in these proceedings. In the end, the program committee chose 25 papers for publication (a 61% acceptance rate). There is wonderful variety here. You will be interested in some new additions to the agile toolkit. Usage-Centered Design (UCD) defines a new role for agile teams. Advice is given on how to evolve even the methodology itself. The most controversial ideas are those about XP and distributed teams. This will make very interesting reading indeed! We have included some great references for agile teams. Two sections, one introducing agile methods and one containing experience reports, form a solid foundation of information for teams wishing to be agile or extreme.
These reports take the form of lessons learned, tips for smooth transitions, and even a metric that can be used to decide where you are. These sections will be a good resource for new ideas. We follow this general line with a special section devoted to testing. Four different topics related to testing are presented. Acceptance tests and HTML code are explored. Ideas for testing legacy code and third-party packages are presented, as well as a method for systematically generating JUnit tests. This section should be of interest to everyone, agile or not. Several studies were conducted and reported; these papers are collected into a section on empirical studies. Several issues related to XP and agile methods were explored through surveys and experiments. The true compatibility of the Capability Maturity Model Integration (CMMI) and agile methods is investigated. A survey of XP developers was conducted to determine how highly developers valued the XP practices. And an experiment was conducted
to find out more about teams doing test-first coding. All provide more than just anecdotal evidence to support their findings. There is a special section on pair programming. Distributed pair programming is explored. The support pair programming can provide in achieving the objectives of the People Capability Maturity Model (P-CMM) is discussed, and suggestions are made for handling conflicts when using the pair programming practice. All three of these are new topics in pair programming. There is a selection of papers that pertain to educators. The XP Universe conferences are proud to boast significant support of educators by hosting an educators' symposium during the conference. A selection of papers on teaching and learning agile methods is presented. The last three sections in our proceedings document the tutorials, workshops, and panels that were presented. These brief summaries are included for completeness and for the convenience of attendees. These sections serve as a memento of the activities of XP Universe and Agile Universe. Laurie and Don wish to thank everyone who made this conference possible and everyone who attended. We wish to thank not only the people who have made this memento we call the proceedings possible, but also anyone who picks this book up, reads it, and thinks about not what must be, but rather what could be.
August 2002
Don Wells and Laurie Williams
Organization
XP Agile Universe 2002 was organized by Object Mentor, Inc.
Executive Committee
XP Universe General Chair: Ron Jeffries
Agile Universe General Chair: Martin Fowler
Organizing Chair: Angelique Martin
Student Volunteers: Rick Mercer
Exhibits: Michael Feathers
Marketing and Communications: Randy Miller
Tutorials: Brian Button
Workshops: Frank Maurer
Panels: Ken Auer
Open Space: Ann Anderson and Chet Hendrickson
BOF Coordinator: Bill Wake
Program Committee Co-chairs:
Don Wells and Laurie Williams
Members:
Scott Ambler, Kent Beck, Mike Beedle, Barry Boehm, Alistair Cockburn, Jim Coplien, Ward Cunningham, Aldo Dagnino, Jeanine De Guzman, Jutta Eckstein, Hakan Erdogmus, Steve Fraser, Jim Highsmith, Watts Humphrey, Andy Hunt, Bil Kleb, Jon Kern, Tom Kubit, Manfred Lange, Tim Mackinnon, Michele Marchesi, Bob Martin, Todd Medlin, Randy Miller, Linda Rising, Ken Schwaber, Forrest Shull, Giancarlo Succi, Jeff Sutherland, Dave Thomas (Pragmatic Programmer), Dave Thomas (OTI), Jim Tomayko, Arie van Bennekum, Chris Wege, Frank Westphal, William Wood
Educators Symposium Committee Co-chairs:
James Caristi and David West
Members:
Joe Bergin, Ed Gehringer, Rick Mercer, J. Fernando Naveda
Sponsoring Institutions
Galaxy Class
Object Mentor, Inc., Vernon Hills, Illinois
ThoughtWorks, Inc., Chicago, Illinois
TogetherSoft, Raleigh, North Carolina
Star Class
Rational
Radsoft
Satellite Class
DSDM Consortium, Ashford, Kent, United Kingdom
RoleModel Software, Holly Springs, North Carolina
Small Worlds, New York, New York
Media Partners
Agile Alliance
C/C++ Users Journal
Cutter Consortium
SD Times
Table of Contents
Methods and Support Tools Designing Requirements: Incorporating Usage-Centered Design into an Agile SW Development Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 Jeff Patton
Supporting Distributed Extreme Programming . . . . . . . . . . . . . . . . . . . . . . . . 13 Frank Maurer Using Ant to Solve Problems Posed by Frequent Deployments . . . . . . . . . . . 23 Steve Shaw Supporting Adaptable Methodologies to Meet Evolving Project Needs . . . . 33 Scott Henninger, Aditya Ivaturi, Krishna Nuli, and Ashok Thirunavukkaras
Introducing Extreme Programming and Agile Methods Strategies for Introducing XP to New Client Sites . . . . . . . . . . . . . . . . . . . . . . 45 Jonathan Rasmusson Establishing an Agile Testing Team: Our Four Favorite “Mistakes” . . . . . . . 52 Kay Johansen and Anthony Perkins Turning the Knobs: A Coaching Pattern for XP through Agile Metrics . . . . 60 William Krebs
Experience Reports Agile Project Management Methods for ERP: How to Apply Agile Processes to Complex COTS Projects and Live to Tell about It . . . . . . . . . . 70 Glen B. Alleman Extreme Programming in a Research Environment . . . . . . . . . . . . . . . . . . . . . 89 William A. Wood and William L. Kleb Tailoring XP for Large System Mission Critical Software Development . . . . 100 Jason Bowers, John May, Erik Melander, Matthew Baarman, and Azeem Ayoob
Testing Acceptance Testing HTML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 Narti Kitiyakara
Probe Tests: A Strategy for Growing Automated Tests around Legacy Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122 Asim Jalis An Informal Formal Method for Systematic JUnit Test Case Generation . . 131 David Stotts, Mark Lindsey, and Angus Antley A Light in a Dark Place: Test-Driven Development with 3rd Party Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144 James Newkirk
Empirical Studies Agile Meets CMMI: Culture Clash or Common Cause? . . . . . . . . . . . . . . . . . 153 Richard Turner and Apurva Jain Circle of Life, Spiral of Death: Are XP Teams Following the Essential Practices? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166 Vinay Ramachandran and Anuja Shukla Tracking Test First Pair Programming – An Experiment . . . . . . . . . . . . . . . . 174 Matevz Rostaher and Marjan Hericko How to Get the Most out of Extreme Programming/Agile Methods . . . . . . . 185 Donald J. Reifer Empirical Findings in Agile Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197 Mikael Lindvall, Vic Basili, Barry Boehm, Patricia Costa, Kathleen Dangle, Forrest Shull, Roseanne Tesoriero, Laurie Williams, and Marvin Zelkowitz
Pair Programming Exploring the Efficacy of Distributed Pair Programming . . . . . . . . . . . . . . . . 208 Prashant Baheti, Edward Gehringer, and David Stotts Pair Programming: Addressing Key Process Areas of the People-CMM . . . 221 Gopal Srinivasa and Prasanth Ganesan When Pairs Disagree, 1-2-3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231 Roy W. Miller
Educator’s Symposium Triggers and Practice: How Extremes in Writing Relate to Creativity and Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237 Richard P. Gabriel Extreme Teaching – An Agile Approach to Education . . . . . . . . . . . . . . . . . . 238 Daniel Steinberg
Extreme Programming as a Teaching Process . . . . . . . . . . . . . . . . . . . . . . . . . . 239 J. Fernando Naveda, Kent Beck, Richard P. Gabriel, Jorge Diaz Herrera, Watts Humphrey, Michael McCracken, and Dave West From the Student’s Perspective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240 James Caristi, Frank Maurer, and Michael Rettig Perceptions of Agile Practices: A Student Survey . . . . . . . . . . . . . . . . . . . . . . . 241 Grigori Melnik and Frank Maurer
Tutorials XP in a Legacy Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251 Kuryan Thomas and Arlen Bankston XP for a Day . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253 James Grenning Accelerated Solution Centers – Implementing DSDM in the Real World . . . 254 Alan Airth Refactoring: Improving the Design of Existing Code . . . . . . . . . . . . . . . . . . . . 256 Martin Fowler The Agile Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257 Pramod Sadalage and Peter Schuh Change Wizardry – Tools for Geeks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259 Joshua Kerievsky and Diana Larsen Beyond the Customer: Agile Business Practices for XP . . . . . . . . . . . . . . . . . 261 Paul Hodgetts XP Release Planning and User Stories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263 Chet Hendrickson, Ann Anderson, and Ron Jeffries Steering the Big Ship: Succeeding in Changing an Organization’s Practices . . . . . . . . . . . . . . . . . . . . 264 Lowell Lindstrom Scrum and Agile 101 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266 Ken Schwaber How to Be a Coach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268 William Wake and Ron Jeffries Sharpening the Axe for Test Driven Development . . . . . . . . . . . . . . . . . . . . . . 269 Michael Hill Pair Programming: Experience the Difference . . . . . . . . . . . . . . . . . . . . . . . . . . 271 Laurie Williams and Robert Kessler
How to Start an XP Project: The Initial Phase . . . . . . . . . . . . . . . . . . . . . . . . 273 Holger Breitling and Martin Lippert Effective Java Testing Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275 John Goodsen Test Drive for Testers: What, When, and How Testers Do for XP Teams . . 277 Lisa Crispin Scaling Agile Processes: Agile Software Development in Large Projects . . . 279 Jutta Eckstein Applying XP Tools to J2EE for the Extreme Programming Universe . . . . . 281 Richard Hightower
Workshops Distributed Pair Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283 David Stotts and Laurie Williams Agile Acceptance Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284 Bret Pettichord and Brian Marick XP Fest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285 Nathaniel Talbott and Duff O’Melia Empirical Evaluation of Agile Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286 Grigori Melnik, Laurie Williams, and Adam Geras
Panels Are Testers eXtinct? How Can Testers Contribute to XP Teams? . . . . . . . . 287 Ken Auer, Ron Jeffries, Jeff Canna, Glen B. Alleman, Lisa Crispin, and Janet Gregory XP – Beyond Limitations? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288 Steven Fraser, Rachel Reinitz, Ken Auer, Barry Boehm, Ward Cunningham, and Rob Mee Extreme Fishbowl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289 Zhon Johansen, Ken Auer, Brian Button, Alistair Cockburn, James Grenning, Kay Johansen, and Duff O’Melia Agile Experiences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290 Dave Thomas, John Manzo, Narti Kitiyakara, Russell Stay, and Aldo Dagnino
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Designing Requirements: Incorporating Usage-Centered Design into an Agile SW Development Process Jeff Patton Tomax Technologies 224 South 200 West Salt Lake City, UT 84101, USA 1.801.924.6924 [email protected]
Abstract. Over the past years of developing software, I've increased my efforts to understand what makes the development process successful and what makes it fail. This paper describes how starting to develop using agile methodologies solved many, but not all, problems, and how subsequently discovering Constantine and Lockwood's Usage-Centered Design and incorporating it into an agile development process increased our likelihood of success. In addition to making us more likely to meet end-user expectations, U-CD helped our team do that sooner, guess right more often, and achieve our goal of releasing usable software earlier. U-CD represents a repeatable, collaborative approach to interaction design that can be incorporated into an agile software development process.
1
Introduction
This experience report discusses my discovery of Constantine & Lockwood's Usage-Centered Design [6] and its incorporation into the day-to-day work my team does to deliver high-quality software. The actual day-to-day application of U-CD may not necessarily match how their original text describes it. However, this is actually normal [13]. And by allowing ourselves to deviate from a strict implementation, we found more places to apply U-CD, if only in part. The application of these techniques led to a greater understanding of the users' goals and to what we believe to be simpler, more appropriate responses to them. Experimentation, study, and reflection led to the assertion I'll make here: interaction design, with Usage-Centered Design as one possible approach, is a valuable component of any software development process. It can be done with or without intent, but it is always done. It can be taught, then repeatably and dependably applied as part of an agile development process. Employing interaction design with intent and skill allows us to deliver truly usable software earlier. By better understanding who the consumers of our software are and what their goals are, we reduce the amount of trial and error that often occurs when deciding what the software should do. This ultimately allows us to deliver working software sooner.
D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 1–12, 2002. © Springer-Verlag Berlin Heidelberg 2002
Properly applied, interaction design techniques don't interfere with the values of agile software development. In fact, they increase communication between customer and developer and decrease time to market with the best possible solution.
2
Identifying the Problem
2.1
There Has to Be a Better Way
I've spent years developing software "traditionally." Basically, this consisted of a blend of waterfall methodology and complete chaos. I saw intelligent folks work very hard to identify requirements, create a thorough definition of scope and a functional design, approve that, and then finally build it. More often than not, the resulting software would miss its target. It was often late. There were problems with quality: the software released with bugs. But even with quality issues resolved, the resulting software was hard to understand and cumbersome to use. Important requirements were often missed in the design phase, resulting in features necessary to automate the business process being left out of the software. Features originally thought important during the design phase were often discovered to be unnecessary and went unused. Watching this cycle over and over left coworkers and customers paralyzed with fear. We analyzed and designed longer in the hope we'd get it right this time. Customers reviewed designs longer, or delayed reviewing them out of fear they'd miss something and be blamed for inevitable omissions in the delivered product. We started developing late. We finished even later. There had to be a better way.
3
Finding the Solution
3.1
Enter Extreme Programming
Extreme Programming [3] surfaced as an alternative to this madness along with other new ways of developing software - now branded as agile [1]. Surely close customer collaboration and iterative development would correct customer satisfaction issues. Surely test-driven programming [14], pair programming and aggressive refactoring [11] would improve software quality. I stumbled into the opportunity to spend a valuable year with Evant Solutions, a company committed to XP principles. We built high quality software at an aggressive rate. Ideally the XP customer is an expert end-user, and in Evant’s case they indeed were at one time. But now as product managers they had the responsibility to deliver commercially viable software to be competitive with other products in the same marketplace. They had to balance the needs of users we currently had with users we hoped to acquire. This responsibility was large and daunting.
The development team closely collaborated with the product managers and succeeded in delivering the software they described. Deliveries were on time, usually with the expected scope intact. However, we still found the resulting software missing targets. The resulting product seemed to have features the actual end user didn't need or care about today, while lacking features the end user did need today. Enormous amounts of time seemed to be spent on features that would go unused by actual end users. These same end users had to devise lengthy procedures to force their actual business processes to work with the software that was built. Were our product managers not collaborating enough with our actual end users? Was this merely a necessary side effect of balancing the needs of today's customers with future customers? We all seemed to be surprised when the actual user wasn't thrilled with what we'd delivered. Following XP principles in development seemed to solve delivery and quality issues. But the pattern I'd observed in a non-XP context, of not knowing exactly what to deliver, seemed to reappear. A team consisting of consultants, the coach, skilled developers, product managers, and business leaders encompassed decades of business and software development experience. Kent Beck and Rob Mee brought years of combined XP experience to the company. Under this very competent direction, we still encountered problems determining what was most important to develop. XP offered specific techniques to developers, such as refactoring and unit testing. XP offered specific techniques to project managers in the form of the planning game. But we were missing specific techniques for determining what to do. In fairness to the competent folks at Evant, not everyone considered this a problem, just a necessary challenge of software development.
3.2
Beauty Is Only Skin Deep
We'd always worked hard to make our software look good and be easy to use. Our UI specialist did a fabulous job with screen design, and the product was easy to understand. However, we still had the issue of actual business processes being hard to accomplish in the software and important parts of business processes being left out completely. Sure, it looked good, but apparently there was more to hitting the target than looks.
3.3
We're All Crazy
Years ago I'd read Alan Cooper's About Face [9]. It contained lots of good information on what not to do when designing the software's user interface. During the spring of 2001 I was able to hear Cooper speak in Berkeley. His focus was less on bad screen designs and more on software missing its target as a result of not understanding its user. He pointed out the necessity of creating a persona: a walking, talking fictitious user with well-developed fictitious needs and concerns. Reading Cooper's The Inmates are Running the Asylum [10], I found that as a technologist, I'd likely never be able to identify with my user. I, and the folks I
worked with, were the inmates, and it would take serious effort to think like the persona we could create. Knowing you have a problem is half the battle. I proceeded under the assumption that I could find the steps to recovery.
4
Finding the Solution, Again
4.1
It’s Not Chet’s Fault
While lurking in the Extreme Programming discussion group [16], I read Ron Jeffries' recommendation of Constantine & Lockwood's Software for Use [6] as a possible source of good information on user interface design. Although Extreme Programming Installed [12] may encourage blaming Chet, I'll assign Mr. Jeffries the responsibility for starting me down this path. Like Cooper's concerns, Constantine and Lockwood's justifications for effective user interface design and usability were preaching to the choir. What was different about this book was an actual documented method for arriving at a usable piece of software. However, the process described looked complicated, time consuming, potentially hard to put into practice, and not at all agile. What's more, it needed to happen up front. Adding a time-consuming process to the front end of software development sounded too much like the bad experience I had been running from.
4.2
Ah-Ha!
During the summer of 2001, I had the opportunity to learn Usage-Centered Design directly from Larry Constantine and Lucy Lockwood. The book is thick, and I'd wondered how we were going to compress this process into a weeklong class. As exercises between lectures, we discussed a business problem, brainstormed ideas onto 3 x 5 cards, and saw models emerge almost magically by shuffling cards around the table. We did this collaboratively in a group with lots of discussion. We learned an effective way to move from these arranged models of cards on the table to a wireframe user interface. We learned how to validate, or test, our user interface using the information we'd put together, still on those 3 x 5 cards. The business problem we took on in the exercises seemed daunting at first. But surprisingly, in a short amount of time we arrived at a good understanding of what the software that would solve that problem could look like. And what's more, the whole process was fun. Usage-Centered Design was agile. Employing U-CD before developing the software using Extreme Programming techniques would surely result in our delivering software that was high quality and effective at meeting the real business needs of the user. Simultaneously, I stumbled onto an assertion on page 122 of Cockburn's Agile Software Development [5]: when we look at the scope of concern for Usage-Centered Design and for XP, the two sets of practices could indeed inhabit the same project.
5
Defining the Solution
5.1
Agile Usage-Centered Design
Although Usage-Centered Design is thoroughly explained in Software for Use, the agile approach to it is best documented in Constantine's paper "Process Agility and Software Usability: Toward Lightweight Usage-Centered Design" [8]. I'll give an abbreviated overview of the process here. This is Constantine and Lockwood's process with a few minor variations, and it matches the way my team and I practice it today.

1. Sequester a diverse mix of people in a room to collaborate on this design. Include domain experts, business people, programmers, and test/QA staff. Include a facilitator who knows this process.
2. Preconception purge. Let loose. Everyone brain-dumps about the software we need to write. Complain about the product you're replacing. Explain the cool features you expect the new product to have. Get everyone's concerns out into the open. Write them down in plain sight on whiteboards or poster-sized paper hung on the wall.
3. Brainstorm user roles onto 3 x 5 cards. Who will be using this software? What are their goals? Prioritize the roles by shuffling the stack of cards. Note the most important roles. Place them in an arrangement on the table that makes sense, with similar roles closer to each other. This is a role model.
4. Now that we know who will use our software, brainstorm the tasks these roles will perform to accomplish their goals onto 3 x 5 cards. Shuffle the cards to prioritize them, first by importance, then by frequency. Note the most important and most frequent. Arrange the cards on the table: place tasks that are similar to each other, or dependent on each other, together; place tasks that have nothing to do with each other further apart. This is a task model.
5. You'll find that tasks in the arrangement on the table clump up. Grab a clump. This is an interaction context.
6. For each task in your interaction context, write a task-case directly on the card. The task-case takes the form of a conversational use case similar to that described by Rebecca Wirfs-Brock in Designing Object-Oriented Software [15]. Alistair Cockburn in Writing Effective Use Cases [4] might classify them as "system scope, sea-level goal, intention-based, single scenario, Wirfs-Brock use case conversation style." U-CD would encourage you to simplify and generalize these task-cases. Using a conversational form makes them easy to read. Limiting the scope and goal keeps them from being too broad or too detailed. Generalizing them keeps them short and allows user interface details to be deferred until implementation time.
7. Create an Abstract Prototype using the task-cases you've detailed. This process is best described in the paper "From Abstract to Realization in User Interface Designs" [7]. At the end of this process you'll know what components will be in the interaction context.
8. Using pencil and paper, create a wireframe drawing of the interaction context. Show the basic size and placement of screen components.
9. Test the interaction contexts by stepping through each task-case used in the context. Pretend to be the role that would perform the task. Validate that you can easily and effectively reach your goal using this interaction context.
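The card-shuffling steps above lend themselves to a small illustration. The sketch below is our own toy model, not part of Constantine and Lockwood's method: it represents task cards as records, orders them by importance and then frequency (step 4), and treats step 5's "clumping" as finding connected components among cards placed near each other on the table. The names (TaskCard, prioritize, clump) and the SellItem and RunSalesReport tasks are invented for illustration; only ReturnMerchandise comes from the paper.

```python
from dataclasses import dataclass

# Illustrative only: a toy model of the task cards from steps 4-5.
# Class and function names here are our own, not U-CD terminology.

@dataclass(frozen=True)
class TaskCard:
    name: str
    importance: int  # lower = more important (position in the shuffled stack)
    frequency: int   # lower = more frequent

def prioritize(cards):
    """Step 4: shuffle the stack into order, by importance then frequency."""
    return sorted(cards, key=lambda c: (c.importance, c.frequency))

def clump(cards, placed_near):
    """Step 5: tasks placed near each other on the table clump up into
    interaction contexts (connected components of the 'near' relation)."""
    neighbors = {c.name: set() for c in cards}
    for a, b in placed_near:
        neighbors[a].add(b)
        neighbors[b].add(a)
    seen, contexts = set(), []
    for card in cards:
        if card.name in seen:
            continue
        stack, context = [card.name], set()
        while stack:
            name = stack.pop()
            if name in seen:
                continue
            seen.add(name)
            context.add(name)
            stack.extend(neighbors[name])
        contexts.append(context)
    return contexts

# Example cards (ReturnMerchandise is the paper's example; the rest are made up).
cards = [
    TaskCard("ReturnMerchandise", importance=1, frequency=2),
    TaskCard("SellItem", importance=1, frequency=1),
    TaskCard("RunSalesReport", importance=2, frequency=3),
]
placed_near = [("SellItem", "ReturnMerchandise")]
```

With this data, `prioritize(cards)` puts SellItem first, and `clump` yields two interaction contexts: the sales-counter tasks together, and the reporting task on its own. On a real team the table does this work physically; the point is only that the role and task models are concrete, manipulable structures, not documents.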
6
Putting It into Practice
6.1
Starting in the Middle
Armed with a year's worth of Extreme Programming development experience, usability training, and lots of other bits of useful information from books, papers, and colleagues, I set out at a new employer to prove that U-CD + XP was indeed a potent combination. The rest of the paper describes how close we came and how much remains to discover. While it's exciting to think we could put a set of new practices into place, we never quite have a clean slate. In my situation we had legacy practices to deal with. When it came time to apply Usage-Centered Design, it was often a bit too late. There was no shortage of new software to write, but before our company had agreed to write the software, documents describing scope, features, and functionality had generally been written up and agreed to. In many cases, had we attempted to practice U-CD, our company would have been accused of rehashing the same material already discussed by marketing and/or project management. Looking at the use of the software often meant asking users to repeat conversations they'd already had while drawing up the agreement. In addition, the results of such a conversation might yield changes in scope. This notion was at best unpopular.
6.2
Some Opportunities and Some Success
There were, however, some greenfield opportunities. These were projects where requirements were not yet agreed to and where the customers and management were willing to approach things in a slightly different way. In those situations we practiced agile U-CD as described above, with great success.
6.3
What Worked
The preconception purge before the process seemed to be the chance to vent that everyone was looking for. Giving the group permission to have an unorganized conversation where anything could be said brought to light many concerns and fears we'd not have gotten to any other way. This free-form conversation supplied everyone involved with an immense amount of useful background. Working with 3 x 5 cards struck some participants as very low tech, but the results were very effective. The discussion took on the same properties a CRC card session [2] might take. But instead of classes, responsibilities, and collaborations, we
Designing Requirements: Incorporating Usage-Centered Design
7
talked about user roles and tasks. We saw lots of card waving and passing cards back and forth. People immediately understood what was important by looking at the position of the card on the table. People immediately knew what ideas were related by their position in relation to each other. An arrangement of cards on the table could communicate far more, faster than any paper document or diagram could. Mapping Task-cases to Abstract Prototypes was a very effective way to push through from knowing what we needed to do to how it might look on the screen. 6.4
6.4 What Was Bumpy
Folks had problems with user roles. In U-CD a role isn't a job title but, more accurately, a high-level goal. Clerk is a job title. CustomerSalesTransactionHandler is a role. The distinction becomes important when someone looks at a list of roles later and is unable to determine what each does, or when looking at a task-case like ReturnMerchandise and asking who does it. In this case, if you're using job titles, the Clerk, Assistant Manager, and Manager may all have responsibility to perform that task, but we'd have to know the business rules to be sure. However, we can reasonably assume a CustomerSalesTransactionHandler might have that responsibility. Choosing expressive role names is valuable, but it is a hard idea for domain experts to grab on to. A well-written role name captures the goal of a person in that role. Attention spans weren't long enough. By the time you reach the tail end of the process, when it will really bear fruit, people are exhausted and unable to do a good job building the UI. Reconvening the next day left us with a fair amount of ground to cover again to get everyone back on the same page. The process takes a while, and for those who don't do it often, it's time consuming and tiring. Folks were accustomed to one person going off to a cubicle to write functional specifications, not this long collaborative process. As anyone who practices pair programming can tell you, constant collaboration can be exhausting. The resulting artifacts "look funny." In this organization, functional design previously took the form of a list of "shalls" - the software "shall do this" sort of statements - along with assumptions, a very literal screen design, and sometimes a narrative on how it would be used. Roles and a role model weren't immediately understandable. Task-cases seemed too general - too abstract for some folks used to long narratives. Wireframe UI drawings weren't quite literal enough. These issues impacted acceptance of the functional design.
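The role-versus-job-title distinction above can be made concrete with a small sketch. The names (Clerk, CustomerSalesTransactionHandler, ReturnMerchandise) come from the paper's own example; the data structures and helper are invented purely for illustration and are not part of U-CD itself.

```python
# Illustrative sketch: a role names a goal, so task-cases attach to
# roles, not job titles. Job titles map onto roles via business rules
# that can change without touching the task model.
role_tasks = {
    "CustomerSalesTransactionHandler": ["ReturnMerchandise", "RingUpSale"],
}

job_roles = {
    "Clerk": ["CustomerSalesTransactionHandler"],
    "AssistantManager": ["CustomerSalesTransactionHandler"],
    "Manager": ["CustomerSalesTransactionHandler"],
}

def who_performs(task):
    """Job titles that can perform a task, derived through their roles."""
    return sorted(
        title
        for title, roles in job_roles.items()
        for role in roles
        if task in role_tasks.get(role, [])
    )

# All three titles qualify, but only because each holds the role.
print(who_performs("ReturnMerchandise"))
```

The point of the indirection: asking "who does ReturnMerchandise?" needs no business-rule archaeology once the role model exists.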
7 Reflecting on What We'd Discovered
7.1 Were We Gaining Anything?
It sure felt that way. Although close collaboration within a large group was tiring, when we finished, the amount of tacit knowledge in the group was irreplaceable. Everyone within the team understood who the users were and what their goals were.
Jeff Patton
Those in the team who hadn't been present for the U-CD sessions quickly assimilated the vocabulary of those who had. Our priorities became clear. We need only find the focal, or most important, user roles and their focal task-cases to find the best starting point for development. Was this better than a long functional design written by one expert? It's not easy to say that the results were definitely better, but it is easy to say that team members' understanding and ownership of the software were higher than before. By arriving at this functional design together, all knew how to accomplish this process, and we'd eliminated what had been a single point of failure. This seemed like a definite improvement.
7.2 Planning with Task-Cases
The resulting task-case cards represented a named feature and a way to use it. They met the XP definition of a story by being concise, testable, and void of implementation specifics. We were able to bring these into XP-style planning sessions and use them as a basis to attach estimates and drive the development process. In the context of other task-cases and the user roles, we were able to determine business value and, consequently, priority. We knew who our focal roles were and drove at implementing all the task-cases we knew would allow those roles to meet their goals. We knew that developing a few task-cases for all roles would be worthless, since no role would be able to meet its goal. We knew which task-cases were focal, so we knew how much effort to expend to get things right. A focal task-case demanded a lot of attention and time, while we could expend less time and effort on less important task-cases. Knowing this helped us estimate and prioritize. Because developers had participated in the process from the start, they already understood the priorities, and planning happened faster.
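The prioritization rule described here - focal roles' focal task-cases first - can be sketched as a simple sort. All card data and the ordering function below are invented for illustration; the paper describes the principle, not any tooling.

```python
# Hypothetical sketch: ordering task-case cards for an XP planning
# session so that focal roles' focal task-cases come first, as the
# text suggests. Card contents are made up.
task_cases = [
    {"name": "ReturnMerchandise", "role": "CustomerSalesTransactionHandler",
     "focal": True, "estimate_days": 3},
    {"name": "PrintAuditReport", "role": "StoreAuditor",
     "focal": False, "estimate_days": 2},
    {"name": "RingUpSale", "role": "CustomerSalesTransactionHandler",
     "focal": True, "estimate_days": 5},
]
focal_roles = {"CustomerSalesTransactionHandler"}

def plan_order(cards):
    # Sort key: focal role first, then focal task-cases, then cheapest.
    return sorted(cards, key=lambda c: (c["role"] not in focal_roles,
                                        not c["focal"],
                                        c["estimate_days"]))

for card in plan_order(task_cases):
    print(card["name"])
```

Delivering all of a focal role's task-cases before touching other roles is what makes the early releases usable at all, per the "a few task-cases for all roles would be worthless" observation.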
7.3 Test-Driven Design for User Interactions
Throughout the development process, whenever anyone on the team was unclear on the direction we were going with the software, we'd pick up the original task-cases and attempt to execute them on the software. They became our working acceptance tests. Knowing user roles helped answer other questions, like what the ability level of the user was and what that user's goal was. Often in a business process, the goal of the user doing the process is much different from that of a manager who needs visibility into what was done. They need to see different information at different times. Using user roles, circumstances like this became clearer. Finally, when formal acceptance and QA had to occur, task-cases could be "fleshed up" to contain specific references to the actual implemented user interface along with literal test data. Roles would serve as a collection point for acceptance tests. We'd focus on validating the software a role at a time, essentially wearing the hat of the user role and performing the work they'd need to perform with the software. Our confidence in the finished software was higher. The feeling seemed analogous to the feeling you get developing source code using automated unit testing and test-driven development. It's not really provable that code developed this way is better than code developed other ways, but after doing it I find my confidence in the code is higher. I also find I'm unwilling to work any other way, as that seems risky or foolish. As with test-driven development, there was no knowing whether our finished results were indeed better than what we could have come up with without U-CD, but confidence was higher. Proceeding on a project without knowing what user roles existed for the product and what tasks they needed to perform now feels as risky as writing code without unit tests.
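The "fleshed up" task-case idea above - steps plus literal test data, collected under a role - can be sketched as data plus a tiny runner. Everything here (the dictionary shape, the step wording, the stub UI) is an invented illustration of the practice, not the author's actual artifacts.

```python
# Sketch: a task-case fleshed up into executable steps becomes a
# manually scripted acceptance test, grouped under its user role.
acceptance_tests = {
    "CustomerSalesTransactionHandler": {
        "ReturnMerchandise": [
            ("scan item", "item appears on return list"),
            ("confirm refund", "refund receipt prints"),
        ],
    },
}

def run_role_tests(role, execute_step):
    """Wear the hat of one role and run all its task-case steps."""
    results = {}
    for task, steps in acceptance_tests[role].items():
        results[task] = all(execute_step(action) == expected
                            for action, expected in steps)
    return results

def scripted_ui(action):
    # Stand-in for clicking through the real UI by hand.
    responses = {"scan item": "item appears on return list",
                 "confirm refund": "refund receipt prints"}
    return responses[action]

print(run_role_tests("CustomerSalesTransactionHandler", scripted_ui))
```

Validating "a role at a time" then amounts to calling the runner once per role in the role model.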
8 User Interface, Usability and Interaction Design
8.1 Aren't They All the Same Thing?
Early on, when trying to find solutions to what really was missing in this whole process, I looked to effective user interface design as a solution. But after seeing strong user interface folks still turn out software that was poorly suited for real work, it was clear the decisions made upstream, regarding where there would be a user interface and what would be on it, were at fault. No amount of artful layout of components could correct this problem. The software needed to be usable for what the user needed to accomplish. This involved really understanding who the user was and what their goals were, then taking that understanding and determining how best an interaction with some piece of software could help. Usability to me meant making the software easily usable. But before we could make it usable, we needed to discover what responsibilities the software should have at all. The term "interaction design" seemed more fitting. I'll borrow Alan Cooper's definition from The Inmates are Running the Asylum [10]: "Almost all interaction design refers to the selection of behavior, function and information and their presentation to users." Before the first line of code can be written, someone needs to decide how a specific user will interact with it to achieve her goals. That decision is the up-front design that will always happen.
8.2 One Man's Design Is Another Man's Requirements
Larry Constantine made the unfortunate statement in an extreme programming newsgroup [16] that design must be up front. This was an unfortunate remark only because not doing big design up front is such a hot spot for XP advocates. After a long discussion back and forth, including Constantine and others, it finally became clear that much of what Constantine called design, especially the decisions that happen up front, is viewed by others as requirements. And indeed they're right. If I as a customer describe what I'd like the team to build for me, I've given requirements. But the method I used to arrive at that decision is indeed design. In casual conversation with Alistair Cockburn, he put it best: "It's a requirement to a person if that person *has to* do it. It's a design decision if that person gets to choose." XP customers make decisions; therefore they design. How well they do this - how well anyone in this role accomplishes this task - often determines the success or failure of a product. The role that Usage-Centered Design played for me was that of designing the requirements.
9 What Should I Do Tomorrow?
9.1 Interaction Design Incorporated into Day-to-Day Processes of a Mostly Agile Company
At Tomax Technologies, certain agile processes have taken off and work well. Daily stand-up meetings are common and efficient. Cockburn's "information radiators" abound [5]. Teams have stand-ups in front of their current iteration schedule generated by an XP-style planning game. Some projects religiously use unit testing, pairing, and refactoring. Other teams are still a bit suspicious of all these new-fangled ideas. Although we have product managers, they don't have the time to ride shotgun on a project the way an XP customer should. They rely on the team to make the detail decisions about the implementation of features in the product. Acceptance testing is up to the team. Development methodology is a decision made more at the team level than the corporate level. In this sort of environment, how do we incorporate some interaction design into the things we do every day? The following is a short list of interaction-design-centric guidelines we try to observe:
1. We always ask who. While we're looking at a piece of development, we make an effort to understand who will be using it. What is the user role involved? If we don't know, we back up, do a quick user role brainstorming session, arrange a few 3 x 5 cards on a table to understand the role model, and then continue on. When we understand who will be using the application, we make better decisions about what they should see and how sophisticated the interactions can be.
2. We validate user interactions with a task-case. To make sure our user interface is usable, we write a simple task-case giving us the step-by-step, intention-driven process a particular user role might follow to complete the task. Does the current design of the application do this efficiently? This may be analogous to a manually executed XP acceptance test.
3. We strive to understand focal user roles and focal task-cases.
Make sure everyone in the project understands who it is most important to satisfy and what specific activities need to run smoothest. Focus on those. Spend extra effort to make them right. Allow the less important roles and task-cases to slide. They need to be functional - but fluid and pretty may be a little less important. Time is most wisely spent elsewhere.
4. We attempt to be on the lookout for features that don't serve any role or facilitate any task. There's always a temptation to scoop up seemingly easy features. Beware statements like "It would be cool if the software could..." or "right here we could show..." Always ask: what user role needs this? What will they be doing when they need it? Does this user role care about this information? What information do they care about?
5. We attempt to elevate the writing of stories into interaction design. We help the folks who know the business understand user roles and task-cases. Before requirements are created, we discuss roles - who's important, who isn't - and task-cases - what each role does. Clearly understanding priority and dependence makes planning an iteration easier. This allows us to deliver a truly usable product sooner by appropriately accommodating all the necessary tasks of a focal role.
6. We revisit our requirements often. In implementing the software thus far, have we learned about a role we didn't know about earlier? Have we found that accomplishing a goal may take unforeseen tasks, or that some of our tasks are unnecessary? When we're not sure, we pull out the 3 x 5 cards and reassemble role models and task models to evaluate whether the design still makes sense.
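Guideline 4 - watching for features that serve no role or task - lends itself to a mechanical check once a role/task model exists. The helper and all names below are hypothetical; the guideline in the text is a conversational habit, not a tool.

```python
# Hypothetical sketch: flag candidate features whose claimed role/task
# pair doesn't exist in the role model. All data is invented.
role_tasks = {
    "CustomerSalesTransactionHandler": {"RingUpSale", "ReturnMerchandise"},
}

def orphan_features(features):
    """Features that no known role performs via any known task-case."""
    return [f["name"] for f in features
            if f["task"] not in role_tasks.get(f["role"], set())]

features = [
    {"name": "refund button", "role": "CustomerSalesTransactionHandler",
     "task": "ReturnMerchandise"},
    {"name": "cool sales graph", "role": "Clerk", "task": "AdmireGraph"},
]
print(orphan_features(features))
```

A feature that lands in the orphan list is exactly the "it would be cool if..." kind the guideline warns about: nobody in the role model needs it to meet a goal.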
10 At the End of the Day

10.1 Conclusions

We've arrived at a situation-specific agile methodology. We've taken advantage of development practices used in XP. We've overcome the lack of an onsite customer by allowing the practice of agile Usage-Centered Design to substitute for one. The simplicity and repeatability of U-CD allow the actual customer, business leaders, and developers to all participate in "designing" the requirements. The result is that during the process we all feel more confident that we understand what to do and why we're doing it. We still get things wrong sometimes, and good development practices do indeed allow us to change the design quickly. But when we do get it wrong, we now understand a little better why. It's often an undiscovered user role or goal. Using an interaction designer's sensibilities and U-CD as a process framework, we are all learning to ask better questions. The result is better-designed requirements.
Acknowledgements

Thanks to valued team members from Tomax Technologies and Evant Solutions for providing a laboratory to learn in. Thanks to Larry Constantine and Lucy Lockwood for being great teachers. Thanks to collaborators and advisors Stacy Patton and Kay Johansen. Special thanks for valuable feedback and advice go to Alistair Cockburn, for help in motivating and revising this paper.
References
1. Agile Alliance: http://www.agilealliance.com
2. Beck, K., Cunningham, W.: A Laboratory For Teaching Object Oriented Thinking (1989) http://c2.com/doc/oopsla89/paper.html
3. Beck, K.: Extreme Programming Explained, Addison-Wesley (1999)
4. Cockburn, A.: Writing Effective Use Cases, Addison-Wesley (2000)
5. Cockburn, A.: Agile Software Development, Addison-Wesley (2001)
6. Constantine, L., Lockwood, L.: Software For Use, Addison-Wesley (April 1999)
7. Constantine, L., Windl, H., Noble, J., Lockwood, L.: From Abstract to Realization in User Interface Designs: Abstract Prototypes Based on Canonical Abstract Components (2000) http://www.foruse.com/Files/Papers/canonical.pdf
8. Constantine, L.: Process Agility and Software Usability (2001) http://www.foruse.com/Files/Papers/agiledesign.pdf
9. Cooper, A.: About Face, Hungry Minds Inc. (1995)
10. Cooper, A.: The Inmates are Running the Asylum, Sams (1999)
11. Fowler, M.: Refactoring: Improving the Design of Existing Code, Addison-Wesley (1999)
12. Jeffries, R., Anderson, A., Hendrickson, C.: Extreme Programming Installed, Addison-Wesley (2000)
13. Mathiassen, L.: Reflective Systems Development, Dr. Techn. Thesis, Aalborg University (1998) pp. 14-15, http://www.cs.auc.dk/~larsm/rsd.html
14. Test Driven Programming: http://xp.c2.com/TestDrivenProgramming.html
15. Wirfs-Brock, R.: Designing Object-Oriented Software, Prentice Hall (1990)
16. Extreme Programming Yahoo Group: http://groups.yahoo.com/group/extremeprogramming/
Supporting Distributed Extreme Programming

Frank Maurer

University of Calgary, Department of Computer Science, Calgary, Alberta, Canada, T2N 1N4
maurer@cpsc.ucalgary.ca

Abstract. Extreme programming (XP) is arguably improving the productivity of small, co-located software development teams. In this paper, we describe an approach that tries to overcome the XP constraint of co-location by introducing a process-support environment, called MILOS for Agile Software Engineering (MILOS ASE), that helps software development teams to maintain XP practices in a distributed setting. MILOS ASE supports project coordination using the planning game, user stories, information routing, team communication, and pair programming.
1 Introduction
Extreme programming (XP) [3, 4, 10] is one of the most innovative software development approaches of recent years. The agile movement seems to be driven by disappointment with current software development practice: low productivity and low user satisfaction are seen as commonplace. Software development teams are often delivering huge amounts of documentation (for example requirements specifications, system architecture descriptions, software design documents, test plans) instead of delivering useful functionality to the user. Sometimes, projects are cancelled before the system is deployed - wasting all the effort that was already spent on analysis and design. XP, on the other hand, focuses development effort on activities that deliver high-quality functionality to the user as fast as possible. Documentation is often restricted to high-level use cases (user stories), source code and test code. XP is based on several basic assumptions and trade-offs. The two that directly influence our work are:
• Nobody can always predict the future correctly: Many standard software engineering practices are based on the 1970s observation that resolving a problem in a software system gets much more expensive the later it is found. Software engineering textbooks (e.g. [18]) claim that fixing a problem after release of the software is 60-100 times more expensive than in the definition phase. As a result, software engineering processes often try to minimize the number of problems found in later development phases by spending much effort in earlier stages: by writing very detailed requirement and design documents, the software development team is trying to make sure that the future system fits the user's needs (requirements are met) and that the system functions properly (adequate design). As a result, it usually takes quite some time (several months or years) to deliver productivity-increasing software functionality to the end users.

D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 13-22, 2002. © Springer-Verlag Berlin Heidelberg 2002

The drawbacks of this approach are twofold. First, it increases the risk that the system is obsolete when it is introduced because the business environment has changed. Second, it assumes that somebody is able to predict what functionality will be required in the future (at the time when the system is deployed). XP and other agile methods, on the other hand, try to put a useful system in place within a very short time frame (usually a few weeks) and then refine and adapt it to changing requirements. Hence, XP trades risk against future effort: it reduces the risk that the team spends effort on things that may never be useful against additional development effort in case the team needs to implement a feature in the future that could have been anticipated today.
• Direct communication is more effective than documentation: Following XP, a project briefly documents user stories describing the requirements and specifying development tasks for the team. The documentation is very short and serves more as a reminder to address an issue than as a detailed specification. Besides user stories, the primary documents to be created are source code: the implementation of the system as well as automated unit test drivers. In this sense, XP goes against standard software engineering practice by drastically limiting the amount of written documentation for a project. Nevertheless, anecdotal evidence shows that it works (e.g. [16]). The question is now: what makes it work? The answer lies in the combination of XP practices that focus on improving intra-team communication. First, XP uses small teams that share a workspace - improving informal communication (coffee breaks, overhearing discussions etc.). Second, a customer representative is co-located with the development team and, hence, resolving requirement issues happens fast. Third, pair programming makes sure that team members talk about their design and implementation in detail.
Fourth, shared code ownership leads to an overall understanding of the system design and implementation by many or all team members - if a new team member has a question, she can simply ask a colleague. Usually it is faster to explain a design face-to-face than to describe it clearly enough on paper (or in a CASE tool) that others can understand it. By reducing the amount of time spent on documentation of requirements and design, more effort can be spent on implementing additional software features for the customer - resulting in improved productivity of the team.¹ Despite the fact that XP has some major advantages compared with more traditional software development methodologies, it has, in its original form proposed by Beck [3], two severe limitations. First, it does not scale well to larger teams. Second, it requires the XP team to be co-located. Scalability of the process: Replacing documentation by communication seems to be a limiting factor for the scalability of the size of an XP team: if one design needs to be explained to n people and n gets larger, there will be a point when creating the documentation and letting them read it will take less time than explaining it n times in face-to-face communication.²
¹ Productivity is here used in the informal sense of the number of functions provided to the customer divided by the time needed to develop them.
² Even if one trainer explains the system requirements/design to many new developers at the same time (e.g. in a classroom setting), there will be a point where producing a document is more cost-effective than 1-to-1 or 1-to-many training sessions.
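The break-even point in the scalability argument above can be made concrete with a toy cost model. Only the shape of the inequality comes from the text; all the numbers below are invented for illustration.

```python
# Back-of-the-envelope model: writing a document costs `write` hours
# once plus `read` hours per reader; explaining face-to-face costs
# `explain` hours per listener. All figures are made up.
def documentation_pays_off(n, write=8.0, read=0.5, explain=1.5):
    """True when documenting for n people is cheaper than n explanations."""
    return write + n * read < n * explain

# Break-even: write < n * (explain - read), i.e. n > 8 / 1.0 = 8 people
# for these invented figures.
print(documentation_pays_off(5))   # small team: just explain it
print(documentation_pays_off(20))  # larger team: write it down
```

The model says nothing about which numbers are realistic; it only shows that some crossover n always exists whenever explaining costs more per head than reading.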
Co-located teams: XP requires all team members to be co-located and share the physical workspace. Co-location improves face-to-face communication and makes it easier to quickly ask questions. It enables collaboration and enhances coordination. Overcoming the co-location requirement while preserving the high productivity and good coordination of XP processes is the focus of this paper. In Section 2, we discuss some trends in the software industry that justify why thinking about distributed extreme programming is important. Section 3 gives an overview of our MILOS approach. Section 4 provides a usage scenario. We conclude with a summary and a look at future work.
2 Virtual Software Teams and Extreme Programming
While XP focuses on small, co-located teams, the 1990s saw the emergence of various kinds of virtual software development organizations: teams of developers that worked on the same software but were distributed all over the world, using telecommunication and the Internet for communication, collaboration and coordination. The same decade also saw a dramatic increase in freelance work [13], especially in the high-tech field. As a result, Thomas Malone and his co-workers described the emergence of an e-lance economy [14] where freelance workers collaborate virtually over electronic networks to perform their jobs. E-lancing works best when all job-related information and work results can be transferred over electronic networks. Software development is a prototypical example of this kind of work organization. This is demonstrated by new ventures like eLance (http://www.elance.com) or Asynchrony (http://www.asynchrony.com/welcome.jsp) that are building virtual marketplaces for e-lance work and whose focus is on software development and web design. The business advantages of virtual teams and e-lancing are primarily twofold:
• Increased flexibility in finding required resources when they are needed
• Reduced costs because of lower training expenses and of outsourcing to markets providing cheaper labor
Examples of virtual software development teams are plenty. They reach from outsourcing small development tasks to an e-lancer to large-scale software development in the telecom or military industry where teams are distributed over the whole country or even the whole world. Other examples are open source projects like Linux or the Apache Web Server. These projects delivered complex software of high quality, but the software development teams never or, at least, very rarely met in the real world. Open source projects share the XP focus on source code, but they are neither co-located nor are the teams developing Linux or Apache small.
This leads us to believe that the co-location requirement of XP is somewhat arbitrary and that DXP seems to be possible. The term distributed XP was actually coined in [12]. The authors analyzed XP practices and showed that only the Planning Game, Pair Programming, Continuous Integration, and On-Site Customer are influenced by the co-location requirement. They described some initial experiences using off-the-shelf, non-integrated tools (e.g. MS NetMeeting, CVS, e-mail) for communication, collaboration and coordination, and reported several problems. Besides providing an integrated coordination approach, MILOS ASE deals with two further issues: the handling of user stories and uniform access to a repository. While Kirchner et al. started from XP in their investigation of DXP, we started by analyzing how virtual software development teams currently work. We used open source projects as the base for the investigation. Open source projects are good examples of software development in a virtual environment. Analyzing their processes lets us determine:
• How virtual software teams communicate, collaborate and coordinate their work
• How they share information and knowledge
• What tool support they use
• Where shortcomings are
Besides violating most standard software engineering practices regarding documentation of requirements and design, most open source projects demonstrate some common approaches to how the development process is organized and how knowledge is transferred between the heads of team members.
2.1 Open Source Development Processes
Open source projects need to ensure that relevant information is available to all participating developers. Here we need to differentiate between which information is seen as relevant (content) and how it is made available to the developer community (mode). All well-known open source projects maintain one or more Web sites as a focal point for information exchange. The Web site usually describes the mission and goals of the project and provides access to project-related information. Source code and documentation are usually accessible as one compressed file or via an Internet-based configuration management system (most often CVS). Issue tracking systems, using a Web-based front end, are used to record bugs and to manage the workflow for fixing them. Usually, there are clear definitions of when to submit a new bug report and how it will be handled. Discussion groups or mailing lists are used to propose new features and extensions.³ Newsgroups and mailing lists can also be used for getting help in case of problems with the open source software. Most information is made available to the developer community in pull mode: developers are able to access the Web site or the source code tree whenever it fits into their schedule. The same holds for discussion groups. This leaves the developer in control of his/her own time. Push mode is e-mail based. It is used for distributing bug reports, for discussing new features and for on-line support. Although e-mail is more intrusive than newsgroups, it still is an asynchronous communication medium. Overall, open source development processes primarily use Internet technologies for supporting communication and collaboration. For example, the Apache Group states that it is "using the Internet and the Web to communicate, plan, and develop the server and its related documentation" [1]. A good example of collaboration support is provided by SourceForge (http://sourceforge.net/), a site that hosts many open-source projects.
³ Sometimes new feature requests are stored in the issue tracking system.
3 The MILOS ASE Approach
The overall goal of the MILOS approach is to support project collaboration and coordination and organizational learning for virtual software development teams. The support provided by MILOS should be minimally intrusive to reduce overhead: MILOS stands for "Minimally Invasive Long-term Organizational Support". In this paper, we focus on how MILOS ASE supports DXP. [9] describes the knowledge management aspects in more detail.
3.1 Requirements on Tool Support for Virtual XP Teams
Using DXP and open source processes as a baseline, the work process of virtual software teams can be improved in several ways. Project coordination: XP teams are usually much more closely coordinated than open source projects. Hence, project coordination support is strongly required for DXP. This should allow the XP team to assign tasks to developers, set deadlines and get an overview of the current state of the project. Team members should be able to access their to-do lists and easily retrieve relevant information for performing their tasks. Synchronous communication: Besides using e-mail for communication, synchronous communication like audio and video calls or text chat may be helpful. If two developers want to do pair programming, application sharing is needed. Active notifications and information routing: Instead of merely making information available for pull access, it would be useful to push important information to the users as soon as it becomes accessible. This push approach should include notifications when important events occur in a project. For example, a manager needs to be notified when a task gets delayed, or a developer needs to be notified when an update becomes available for another component that she is using. The change notification mechanism of MILOS is discussed in [7].
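The "active notifications" requirement above is essentially publish/subscribe. The sketch below is a generic toy model of that pattern, not MILOS's actual notification mechanism; the event name and message are invented.

```python
# Minimal publish/subscribe sketch of push-style notification.
from collections import defaultdict

class Notifier:
    def __init__(self):
        self.subscribers = defaultdict(list)   # event type -> callbacks

    def subscribe(self, event, callback):
        self.subscribers[event].append(callback)

    def publish(self, event, payload):
        for callback in self.subscribers[event]:
            callback(payload)

notifier = Notifier()
inbox = []
# A manager asks to be pushed a message whenever a task slips.
notifier.subscribe("task_delayed", inbox.append)
notifier.publish("task_delayed", "Develop workflow kernel: +3 days")
print(inbox)
```

The inversion matters: in pull mode the manager polls the project state; here the event finds the manager as soon as it occurs.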
3.2 Process Support in MILOS ASE
In this section, we give an overview of the support provided by MILOS to virtual XP teams. Background information can be found in [15]. In this paper, we cover only the DXP support aspect of MILOS. MILOS supports the execution of globally distributed software development projects in several ways. Project managers are able to define tasks and decompose them into smaller subtasks. They can schedule them by using the Web-based user interface of MILOS. In addition, the information flow between tasks can be specified: for each task, a user is able to define expected outputs (e.g. the expected output of the "Develop workflow kernel" task is a compressed library containing a set of Java source code files that is stored in the variable "workflow kernel"). These outputs can then be used as inputs for other tasks (e.g. the variable "workflow kernel" could be the input of the task "Develop workflow user interface"). In contrast to standard workflow engines, MILOS allows on-the-fly plan changes and redefinitions of the information flow, notifying team members affected by those changes and updating the workflow engine accordingly [15]. The MILOS framework nicely fits our requirements on DXP support. Nevertheless, we added several extensions for supporting distributed XP:
• User stories: A new product type that represents user stories was added. In addition, whenever a new user story is entered, MILOS ASE automatically adds a task for implementing this story (see example in the next section) to the task list.
• Release and iteration planning: MILOS ASE allows easily defining and changing releases, iterations, user stories, and tasks. In a distributed setting, MILOS provides awareness of what is going on in the project based on the 4 task levels from XP (release, iteration, user story, task).
• MS NetMeeting integration: By integrating MS NetMeeting into our tool, we are able to support distributed pair programming and synchronous communication.
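The input/output wiring between tasks described above can be sketched as a small data model. The variable and task names follow the paper's own example ("workflow kernel", "Develop workflow user interface"); the classes and helper function are invented for illustration and are not MILOS code.

```python
# Sketch of task dataflow: a task's named outputs feed other tasks'
# inputs, so after an on-the-fly plan change the system knows whom
# to notify.
class Task:
    def __init__(self, name, inputs=(), outputs=()):
        self.name = name
        self.inputs = list(inputs)     # variable names consumed
        self.outputs = list(outputs)   # variable names produced

def downstream(tasks, changed_variable):
    """Names of tasks consuming a variable that just changed -
    the team members to notify after a plan change."""
    return [t.name for t in tasks if changed_variable in t.inputs]

kernel = Task("Develop workflow kernel", outputs=["workflow kernel"])
ui = Task("Develop workflow user interface", inputs=["workflow kernel"])
print(downstream([kernel, ui], "workflow kernel"))
```

This is the property that separates the approach from a rigid workflow engine: because the wiring is data, it can be re-queried after every redefinition rather than baked in once.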
4
Using MILOS ASE for Distributed Extreme Programming
The following scenario illustrates the infrastructure provided by MILOS to support distributed extreme programming. Team members access the Internet with a Web browser and connect to the MILOS ASE server (http://sern.ucalgary.ca/~milos). First, they log in to the MILOS system to access their workspace. From their workspace they may retrieve the list of current projects, user stories, and currently available tasks, and access task estimation and pair programming facilities. 4.1
Creating User Stories
After creating a project and assigning a project manager to it, the customer is able to enter story cards into the MILOS system (see Figure 1). The top part of Figure 1 shows a menu that allows access to all components of MILOS (Tasks, Team, Experience Base, etc.). The left side shows a context-dependent navigation menu that enables the customer to create a new story card. When a programmer pair starts working on a user story, it contacts the customer, discusses the story, and updates the description of the story if needed. When a user creates a new story, the MILOS ASE system automatically creates a task to handle this story and places it under “Unassigned tasks”. These unassigned tasks can later be assigned to a specific iteration. Using the release and iteration planning user interface (see Figure 2), the developers may split a user story into several tasks as well as assign them to an iteration. Using the up/down links shown in the UI, a developer or customer may easily move user stories between iterations (and iterations between releases). After having decomposed user stories into concrete development tasks, the developer may describe each task in more detail. Tasks are part of a specific project and can be accepted by one team member. For each task, the manager may enter planned start and end dates. In addition, the users are able to define the inputs and outputs of processes. Furthermore, the users are able to specify the information flow between tasks by defining the output of one process to become the input of another. Specifying the information flow allows the MILOS system to provide access to input
Supporting Distributed Extreme Programming
19
information that was created as the output of another task: the output of a task, e.g. a source code file, is transferred to the MILOS server and stored in a version management system. From there, any successor task may access the current version as well as older versions. As this is done via HTTP requests, tunneling through a firewall usually is not a problem.
Fig. 1. Story card
The project manager may keep a close eye on the progress of each task by watching “percentage complete” values and steer the team in the right direction if the requirements for the next build will not be met. We believe that this information is useful in a distributed setting, as the informal information flow between team members is more difficult than in a co-located environment. As a result, a little more information needs to be “put on paper” than for co-located teams. 4.2
Pair Programming
MILOS keeps track of which team members are currently logged in. A developer is able to pair up with one of them using the application sharing and audio/video capabilities of MS NetMeeting. Initial tests done by two of the MILOS team members4 indicate that state-of-the-art networks are sufficiently fast to support distributed pair programming. They also said that a video link is not useful as long as the pair know each other well. They also recommend that both machines should use the same screen resolution (to avoid scrolling). 4
Darryl Gates & Sebastien Martel paired up for a couple of hours over the Internet while developing MILOS source code.
Fig. 2. Release and iteration planning user interface
5
Related Work
Related work comes mainly from two areas: software process support and (distributed) extreme programming. As we already discussed XP and DXP in the preceding sections, we focus here on related work in process support. Most process improvement approaches, e.g. the Capability Maturity Model, SPICE, and QIP, require describing the development processes more or less formally. Within the framework of software process modeling, several languages were developed that allow software development activities to be described formally [6]. Software process models represent knowledge about software development. They describe the activities to be carried out in software development as well as the products to be created and the resources and tools used. These models can be a basis for continuous organizational learning as well as the actual basis for the coordination and management of software engineering activities. Software process modeling and enactment is one of the main areas in software engineering research. Several frameworks have been developed (e.g. procedural [17], rule-based [11, 18], Petri net based [2], object-oriented [5]). Process modeling and enactment approaches are usually used to rigorously define heavy-weight processes. They are weak concerning light-weight approaches like XP and do not directly support key XP practices. Nor do they provide a good communication and collaboration infrastructure for virtual teams.
6
Conclusion and Future Work
In this paper, we described our approach for supporting virtual software teams in distributed extreme programming. The MILOS system supports communication, collaboration and coordination of DXP teams.
With MILOS ASE, we are aiming at improved efficiency of virtual teams. Whereas the introduction of new tools undoubtedly results in an increased workload at first, we argue that, in the long run, the proposed approach will improve the productivity of virtual software development teams. Our future work will focus on three aspects:
• Formal evaluation of the approach
• Extreme federations
• Knowledge management for DXP
Formal evaluation of the approach: We would like to set up controlled experiments to evaluate the feasibility and the benefits and problems of distributed extreme programming. In addition, we would like to compare the productivity and quality of XP teams and DXP teams to determine the influence of co-location on productivity.
Extreme federations: One of the problems of XP is scalability with respect to team size: XP works for small teams of five to ten people, but there is some doubt that it works with even a mid-sized team of thirty people. One way to scale it up could be to have loosely coupled federations of XP teams that work together on a single project. This poses several interesting research questions:
• How can we preserve XP productivity and quality in a multi-team environment?
• Do we need additional documentation and, if so, how much more? And what needs to be documented to enable the Extreme Federation to work smoothly?
• Do Extreme Federations need a component architecture to work? How fixed do the interfaces between components of individual XP teams need to be? How much flexibility and/or adaptability of requirements do Extreme Federations lose compared with “normal” XP teams?
Knowledge management for DXP: XP focuses on verbal communication for knowledge exchange. That makes it difficult to preserve information in a storable format.
As a result of keeping development knowledge primarily in the heads of the people, XP runs into trouble when the members of the development team change frequently or when development on the system stops for some time and is then resumed. Hence, an approach is needed that integrates knowledge management and DXP.
References
1. N.N.: About the Apache HTTP Server Project, http://httpd.apache.org/ABOUT_APACHE.html, 1999 (last visited March 2002)
2. Bandinelli, S., Fuggetta, A., and Grigolli, S. (1993). Process Modeling-in-the-large with SLANG. In IEEE Proceedings of the 2nd Int. Conf. on the Software Process, Berlin (Germany).
3. Kent Beck: Extreme Programming Explained: Embrace Change, Addison-Wesley Pub Co, 1999, ISBN: 0201616416
4. Kent Beck, Martin Fowler: Planning Extreme Programming, Addison-Wesley Pub Co, 2000, ISBN: 0201710919
5. Conradi, R., Hagaseth, M., Larsen, J. O., Nguyen, M., Munch, G., Westby, P., and Zhu, W. (1994). EPOS: Object-Oriented and Cooperative Process Modeling. In PROMOTER book: Anthony Finkelstein, Jeff Kramer and Bashar A. Nuseibeh (Eds.): Software Process Modeling and Technology, 1994. Advanced Software Development Series, Research Studies Press Ltd. (John Wiley).
6. Curtis, B., Kellner, M., and Over, J. (1992). Process modeling. Comm. of the ACM, 35(9): 75–90.
7. Barbara Dellen: Change Impact Analysis Support for Software Development Processes, Ph.D. thesis, University of Kaiserslautern, Germany, 2000.
8. P.K. Garg, M. Jazayeri: "Process-centered Software Engineering Environments". IEEE Computer Society Press, 1996.
9. Harald Holz, Arne Könnecker, Frank Maurer: Task-Specific Knowledge Management in a Process-Centred SEE, Proceedings of the Workshop on Learning Software Organizations LSO-2001, Springer, 2001.
10. Ron Jeffries, Ann Anderson, Chet Hendrickson: Extreme Programming Installed, Addison-Wesley Pub Co, 2000, ISBN: 0201708426
11. Kaiser, G. E., Feiler, P. H., and Popovich, S. S. (1988). Intelligent Assistance for Software Development and Maintenance, IEEE Software.
12. Michael Kircher, Prashant Jain, Angelo Corsaro, David Levine: Distributed eXtreme Programming, Proceedings XP-2001, Villasimius, Italy, http://www.xp2001.org/program.html (last visited July 2001)
13. Laubacher, Robert J., Malone, Thomas W.: Flexible Work Arrangements and 21st Century Worker's Guilds, Initiative on Inventing the Organizations of the 21st Century, Working Paper #004, Sloan School of Management, Massachusetts Institute of Technology, October 1997, http://ccs.mit.edu/21c/21CWP004.html
14. Malone, T. W., Laubacher, R. J.: The Dawn of the E-Lance Economy, Harvard Business Review, Sep-Oct 1998.
15. Maurer, F., Dellen, B., Bendeck, F., Goldmann, S., Holz, H., Kötting, B., Schaaf, M.: Merging Project Planning and Web-Enabled Dynamic Workflow Technologies. IEEE Internet Computing May/June 2000, pp. 65-74.
16. James Newkirk, Robert C. Martin: Extreme Programming in Practice, Addison-Wesley, 2001, ISBN: 0-201-70937-6
17. Osterweil, L. (1987). Software Processes are Software Too. In: Proc. of the Ninth Int. Conf. on Software Engineering, Monterey CA, pp. 2-13.
18. Peuschel, P., Schäfer, W., and Wolf, S. (1992). A Knowledge-based Software Development Environment Supporting Cooperative Work. In: Int. Journal on Software Engineering and Knowledge Engineering, 2(1).
19. Roger S. Pressman: Software Engineering: A Practitioner's Approach, Fourth Edition, 1996, ISBN: 0-07-052182-4
Using Ant to Solve Problems Posed by Frequent Deployments
Steve Shaw
Intelliware Development Inc., 1709 Bloor Street West, Suite 200, Toronto, Ontario, M6P 4E5, Canada
steve@intelliware.ca
Abstract. Deploying a system in an agile development environment presents its own set of challenges. If it is true that development cycles are short and that every release is as small as possible, then we are going to be releasing software much more frequently than with other methodologies. Related to this is the concept that the system is never finished, and deployments have to occur throughout the lifetime of the system. This paper examines some of the problems posed by this type of deployment environment and suggests how Ant can be used to solve them. An appendix describes concrete solutions to problems encountered on a real-life medium-sized project.
1
Introduction
This paper describes problems that were encountered during the development of a web-based system, but an attempt has been made to generalize wherever possible. Solutions are suggested, with the understanding that no solution is going to be appropriate for all, or even most, environments.
2
Ant Is My Very Best Friend
The Jakarta Project1 creates a variety of open-source Java tools. Among these is Ant, a build tool. It is easy to use, relatively easy to extend, and provides enough functionality out of the box to handle many build issues. Most of the solutions suggested, and all of the concrete solutions presented, rely on the use of Ant. Ant has several attractive features. All Ant scripts are written in XML, which is quickly becoming the lingua franca of distributed service development and as such is often familiar to developers. Ant is designed to be platform independent, so the little (but aggravating) things like differing file separators become unimportant. It is fully extensible with a little bit of Java knowledge. And it boasts an array of features that cover most of the tasks that a build and deploy process may need. Intelliware has been using Ant as a build management tool for over a year. It is the core component of the deployment solution that Intelliware is currently using. There 1
More information on Ant and other Jakarta projects can be found at jakarta.apache.org.
D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 23–32, 2002. © Springer-Verlag Berlin Heidelberg 2002
are Ant targets2 to handle just about every aspect of the build and deploy process. Use of Ant allows for evolutionary change to the deploy process: most of the scripts and targets used for the deploy process grew out of the build targets that were defined early in the project. These same scripts, with some slight modifications, are also used in the production environment to start services and set up resources. Ant has also proven useful in other situations where platform independence and flexibility are required: code integration, setting up batch jobs, and integration testing.
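As a rough illustration of the kind of build file described here — the project's actual scripts are not reproduced in this paper, and all target, property, and script names below are invented — a minimal Ant build file with compile, test, build, and deploy targets might look like this:

```xml
<?xml version="1.0"?>
<!-- Hypothetical sketch; target and property names are invented.
     The junit task requires Ant's optional tasks and junit.jar on the classpath. -->
<project name="example" default="build" basedir=".">
  <property file="build.properties"/>
  <property name="src.dir" value="src"/>
  <property name="build.dir" value="build"/>

  <target name="compile">
    <mkdir dir="${build.dir}/classes"/>
    <javac srcdir="${src.dir}" destdir="${build.dir}/classes"/>
  </target>

  <target name="test" depends="compile">
    <junit haltonfailure="true">
      <classpath path="${build.dir}/classes"/>
      <batchtest>
        <fileset dir="${build.dir}/classes" includes="**/*Test.class"/>
      </batchtest>
    </junit>
  </target>

  <target name="build" depends="test">
    <jar destfile="${build.dir}/app.jar" basedir="${build.dir}/classes"/>
  </target>

  <target name="deploy" depends="build">
    <!-- Platform-specific steps can branch on the os attribute of exec. -->
    <exec executable="sh" os="Linux">
      <arg line="bin/start-services.sh"/>
    </exec>
  </target>
</project>
```

The deploy target here only hints at the pattern; the sections that follow discuss the individual pieces (resource definition, remote distribution, service startup) in more detail.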
3
Issues
Many of the issues described in this paper apply to any kind of software deployment.3 They become more important in an agile environment because of the frequency of deployment. In traditional development environments, deployments occur less frequently, and difficulties can be handled by manual intervention. 3.1
Different Types of Deployment
A robust deployment strategy allows for different flavours of deployments. This is especially true for test-centric development methodologies: it is often desirable to deploy the system to development machines, as well as to the nightly build4 machine. Each type of deployment target poses different challenges. Development Machine. When running an entire system on a developer’s machine, it becomes necessary to run each component (or a dummied-out version of that component) on a single machine. Actually doing the deployment should not be a problem, since the development machines will likely have a full set of tools. In fact, the “deployment” in this case can be as simple as building the application directly on the machine. Nightly Build Machine. The nightly build machine (or machines) will likely contain the full set of development tools. Whether this machine is identical to the developers’ workstations will depend on project needs and target environments. Another variable is whether the nightly build targets a single machine, or whether various services are spread across different targets in order to more accurately depict the eventual production environment. It can be easier to run unit tests against processes running on a single machine, but more robust integration testing might call for the use of multiple machines. 2
An Ant target is a set of tasks that are executed together. An Ant task is a single unit of work, either built-in or user-defined. Built-in tasks include javac, to compile Java code, and junit, to run JUnit test suites. 3 Deployment is an overloaded term. It is used in this paper to refer to the process of distributing executables, setting up resources, and performing other tasks that are necessary to install a piece of software. 4 The “nightly build” refers to the regularly-scheduled build, often run at night to reduce the impact on users of the system. Also called the daily build. It is common for the daily/nightly build script to be run more than once a day as required by the development team.
Everything Else. Other types of deployments can be thought of as “real deployments”. In these cases, the application is usually not built from scratch, but is distributed in some compiled format – JAR files, for example. It is unlikely (and undesirable) that these targets have the full set of development tools available for use. Whenever possible, the components should be compiled and packaged in a single spot. Often the production environment calls for a multi-machine deployment. This type of deployment can be configured in such a way that changing one set of properties allows for different deployment targets. For example, the same deployment script could be used to distribute the application to three different sets of servers: one for end-user testing, one for pre-production staging, and a final set of production servers. Only the properties file driving the deployment needs to change. This file indicates the location of each component and updates resource directories accordingly. Suggestion: • When defining the deployment process, draw clear lines between the responsibilities of different agents. Where does the “build” end and the “deploy” begin? Or are they part of the same process? As the production environment becomes more complicated, these clear delineations of responsibility will make evolution of the system much easier. 3.2
Platform Dependencies
It is not unusual for the eventual target environment of a system to differ from the development environment. The most common difference is operating system, although it is possible that such things as database managers and other system resources will also be different. These platform differences have to be addressed during development, and are usually handled by using as many platform-agnostic tools and resources as possible. There are also implications during the deployment process: each problem mentioned here will have to be solved for each target platform. Suggestion: • Use Ant. It is specifically designed to be platform independent. If there isn’t an Ant task that does what is needed, write a custom task or use the exec task to kick off a system command (different commands can be specified for different operating systems). 3.3
Defining System Resources
One of the first problems with the deployment of any system is the definition of system resources. Examples of the resources that may need to be defined are databases, message queues, and resource directories such as JNDI. A joy of working on a small agile project is that individual developers often have the authority to change the resource definitions to meet their needs, without having to schedule resource drops. The other developers will simply pick up the changed resource definitions on their next integration. As long as a working system can be built from scratch, the integration is considered to be successful.
Transient Resources. Resources such as message queues5 and resource directories can be handled without difficulty. These are “transient” resources because they can usually be torn down and redefined without affecting the state of the system (assuming the system is not currently active6). It is easy to arrange for the deploy process to destroy the existing resource and recreate it to current specifications during each build. Databases. It is more difficult to handle databases, because the data they contain represent the current state of the system. Deploying a new database schema to development machines is simple enough – it’s usually sufficient to drop the tables, recreate them, and repopulate them with a set of reasonable initial data. However, a database on a testing or production platform cannot always be dropped and repopulated without regard to the data that it contains. More complex methods of updating a database schema while maintaining the integrity of the data must be utilized. This is not a trivial problem, as the database may have gone through several sets of changes since the last time a release was deployed to the target platform. Database changes that involve changing existing data are awkward. As Kent Beck[1] wrote about this and similar problems, “If preproduction weren’t so dangerous, you’d keep from going into production forever”. Suggestions: • Ant has a built-in task for handling JDBC calls, so database setup can be handled simply by coding SQL as CDATA7 in the Ant scripts. An alternative is to create a database command script as supported by a specific database manager, but this approach is less portable. (However, it would allow access to vendor-specific functions not usable through JDBC – which may or may not be a good thing.) • Write custom Ant tasks to define resources. An example is the bindname task developed at Intelliware to handle JNDI registration. The task implements different JNDI solutions based on Ant properties. 
• Use exec tasks for everything else. For example, an exec task can be used to define JMS queues. The task invokes the vendor-supplied batch or shell script used to define, delete and register JMS resources. Outstanding problem: • The issue of dealing with database changes while maintaining the integrity of data is not one that can be dealt with easily. Minimizing the number of post-release database changes is desirable, but often unrealistic given the evolutionary nature of agile development practices. Traditional deployments will often deal with this sort of issue by creating data migration scripts, which is not onerous if database changes are made once a year. When releasing something every six weeks, however, a little bit of automation might be called for. 5
It can be argued that message queues are not transient resources, especially if they have persistent messages. In that case, changes to queue definitions behave much as changes to database definitions – special effort must be made to ensure the integrity of the persistent messages is maintained. 6 Deploying a new release of a system that must be constantly available presents challenges outside the scope of this paper. 7 Character data included inline in the XML file.
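As a sketch of the first two suggestions above — the driver class, connection URL, table definition, and vendor script name are invented placeholders, not the project's actual values — database and queue setup might be scripted like this:

```xml
<!-- Hypothetical sketch: recreate the database schema inline via the sql task.
     onerror="continue" lets the DROP succeed even when the table does not exist yet. -->
<target name="setup-db">
  <sql driver="${db.driver}" url="${db.url}"
       userid="${db.user}" password="${db.password}" onerror="continue">
    <![CDATA[
    DROP TABLE orders;
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status VARCHAR(20));
    INSERT INTO orders VALUES (1, 'OPEN');
    ]]>
  </sql>
</target>

<!-- Hypothetical sketch: define a JMS queue by invoking a vendor-supplied script. -->
<target name="setup-queues">
  <exec executable="sh">
    <arg line="vendor/define_queue.sh ORDER.QUEUE"/>
  </exec>
</target>
```

The sql task also needs the JDBC driver on its classpath; that detail is omitted here for brevity.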
3.4
Remote Deployments and Authentication
In the case of a multi-machine deployment, the application has to somehow distribute the components of the application to the target machines. This is not usually a huge problem on a local network, as there are a number of solutions that will work across different operating systems.8 The distribution of components becomes more difficult once the application leaves the development floor. The mechanisms used in the lab are often not applicable in production, where the deployment process has firewalls, subnets and DMZs9 with which to contend. For example, using FTP or the Unix rcp (remote copy) commands to handle deployments on the local network might be acceptable, but both utilities are insecure and not appropriate for transferring sensitive data over a public network. Related to this is the problem of authentication. If the application has components on a publicly-accessible machine (as is necessary for a web application), it is imperative that only trusted peers are able to update the components on that machine. This limits the choice of tools to use when deploying to remote machines. Suggestion: • Ant is not of much direct use here. Creating a task is one approach to handling these issues – but it is difficult to ensure that authentication security is sufficient. It may be more secure to rely on OS-specific remote authentication procedures, which are bound to be more thoroughly tested than homegrown code. On Unix/Linux, use scp and ssh (secure versions of rcp and rsh) whenever remote authentication and security are important. Use the Ant exec task to kick off the Unix commands, and substitute the Windows equivalent when necessary. Outstanding problem: • Is there a Windows solution for remote deployment and authentication? There are versions of scp and ssh that are available for Windows, but it would be preferable to use a native solution. 3.5
Deployment Frequency and Timing
The increased frequency of deployments is the primary challenge in deployment planning for agile methodologies. Problems that are simply annoyances on other projects can become important issues on agile projects as more developer hours are spent manually handling them. As well, agile developers are used to automating everything – it seems wrong when manual intervention is called for. Another issue is that of the timing of deployments. Different types of deployment are going to occur at different intervals. The application may be built and “deployed” several times per day on each developer’s machine; a build and deployment should occur at least once per day on the nightly build machine. Other deployments have to be scheduled more carefully. For instance, changing the testers’ deployment environment without warning is bound to result in irreproducible bugs and angry testers. It is also undesirable to deploy to a production environment while the system is under heavy load, even if a backup is in place to handle user requests while the primary system is being upgraded. Suggestions: • When dealing with very frequent, low-impact deployments (e.g., on nightly build machines), have a scheduled automatic process kick off the build/deployment. • Other kinds of deployments are best initiated manually after developers, testers and users have agreed that it’s time for a new version of the system.
8 Samba, FTP, and a shared CVS server are three different ways to solve this particular problem.
9 Demilitarized zone, referring to the subnet(s) that exist between the safe local network and the unsafe Internet.
3.6
Stopping and Starting Services
Monolithic systems are largely a thing of the past. Today’s systems usually consist of a variety of components. The systems are flexible and easier to maintain, but are more complex. Part of this complexity is the issue of starting and stopping services that act as components of the system. These “services” could be anything that has to be shut down before the new code is dropped, and restarted afterwards. Examples include web and application servers, and parts of the system that run in their own process space. This is another example of a task that is relatively straightforward when the system is deployed onto a single target machine but becomes more difficult when the components are spread across several machines. This type of task will also vary from platform to platform. Suggestions: • Again, Ant’s exec task is useful for starting up any services that the system depends on. Alternatively, the java task might be applicable in some circumstances.10 • When starting or stopping services on remote machines, use the same kind of process that was set up for remote deployment. The same argument applies: secure system utilities are likely to be more secure than homemade solutions. 3.7
Deployment Monitoring and Testing
With an agile development process in place, production deployments may be performed every few weeks. It is important that the development team is satisfied that the deployed system is working properly. This problem can be solved with the same practice that solves many other problems: more testing. Hopefully the unit tests and integration tests that are run against the nightly build will catch any bugs before they move into production, but that cannot be guaranteed. It is likely that these tests are not suitable for running against a live production system. Before tests can be run against a deployed system, the deployment must have completed successfully. This implies that a monitoring system must be in place. When the deployed system resides on a single machine, it is a trivial matter to ensure 10
For example, starting a Java program. It’s not rocket science.
that the deployment has completed. It can be difficult to track the progress of a deployment across multiple machines, especially if individual processes are run asynchronously. The tests that are run against the deployed production system are bound to be different from unit or integration tests. It is important that the tests do not modify the behavior or state of the system. Of course, it is difficult to ascertain whether a test has worked if it does not affect the state of the system. Suggestions: • Track the success of the deployment carefully. If a deployment fails, then the development team should be made aware of it as soon as it is appropriate – whether through a log message, email or page. • Create a series of lightweight tests that act strictly as a sanity check. Such “smoke tests” do not do anything except invoke the system components in the least intrusive way possible – e.g., make sure that all the web pages of a site are up, that all the servers are responding to pings, and that all services are responding to “heartbeat” maintenance messages. This level of testing should be sufficient to ensure that the application was deployed properly, while the other levels of testing performed during development ensure that the system performs as intended.
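One way to script such a smoke test in Ant itself — the URLs and the failure property below are invented for illustration — is with the waitfor task and its http condition:

```xml
<!-- Hypothetical sketch: fail the deployment if key pages do not respond. -->
<target name="smoke-test">
  <waitfor maxwait="60" maxwaitunit="second" timeoutproperty="smoke.failed">
    <and>
      <http url="http://www1.example.com/index.html"/>
      <http url="http://www2.example.com/services/heartbeat"/>
    </and>
  </waitfor>
  <fail if="smoke.failed" message="Smoke test failed: a server is not responding."/>
</target>
```

Because the http condition only issues a request and checks the response code, it satisfies the requirement that the check not modify the state of the system.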
4
Conclusion
The major challenge presented by deploying in an agile development environment is that all of the traditional deployment problems are faced much more often. It’s often not worth the effort to automate processes that only occur once a year, but when deployment occurs on a monthly basis, it is well worth automating the process as much as possible. Ant has proven to be a valuable tool in implementing the build and deployment strategies at Intelliware. The combination of its ease of use, impressive range of features, and extensibility has contributed to the success of several projects.
5
Previous Work
Fowler and Foemmel [2] discuss continuous integration and the importance of automating it. Ant is mentioned as the tool of choice, in combination with JUnit and a reasonable amount of customization. The paper is more heavily focused on “why” rather than “how”, but its conclusions and recommendations are compatible with those of this paper. Hightower and Lesiecki [3] extensively describe the use of Ant and other open-source tools within agile projects. The book does not specifically address deployment issues, but does touch on them in passing. It also advocates the use of other open source tools, notably JMeter, JUnitPerf, and Cactus. Loughran [4] provides an invaluable resource for anyone wanting to use Ant in a real-world situation. The paper has a short section on deployment, but its true value is in the strength of its common-sense suggestions.
References
1. Beck, Kent. Extreme Programming Explained: Embrace Change. Addison-Wesley (2000) p.135
2. Fowler, Martin and Foemmel, Matthew. Continuous Integration. http://www.thoughtworks.com/library/Continuous Integration.pdf
3. Hightower, Richard and Lesiecki, Nicholas. Java Tools for Extreme Programming. Wiley Computer Publishing (2002)
4. Loughran, Steve. Ant in Anger. http://jakarta.apache.org/ant/ant_in_anger.html
Appendix: Anatomy of a Deployment Solution
This section outlines the solutions developed for a specific project at Intelliware. The deployment of this system encountered a variation on every problem outlined in this paper. The application consists of:
• a web application, including servlets and JSPs, distributed as a WAR.
• a web services application, including servlets, distributed as a WAR.
• a number of independent asynchronous processes, distributed as JAR files.
• support and common code, bundled as JAR files.
Supporting this application:
• a small database (approximately 40 tables).
• the various (transient) message queues used by the asynchronous processes.
Other considerations:
• development occurs on Windows workstations. Each machine is able to run the entire system locally.
• the nightly build runs on a Linux box.
• integration testing is spread across the build machine and several other Linux servers.
• the production target is Linux, using a variable number of servers depending on budget, load, and failover requirements. The servers are on a subnet with limited visibility to the development network.
Evolution: the Different Types of Deployment
The Beginning. When the project started, the team knew that they’d have to handle multi-platform deployment in the future, even though they didn’t have any hardware yet. They borrowed a single Linux box from another project and used it as their nightly build machine. That was the point in the project where the core Ant scripts were developed: scripts to define the database, define the JMS queues, and run the build. In the early days, all the services ran on a single machine. The “deployment” process was pretty easy – the script would simply start the services as necessary on the Linux box using shell scripts.
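The early single-machine “deployment” described above — script and service names below are invented, not the project's actual files — amounted to little more than a target that kicked off the startup shell scripts:

```xml
<!-- Hypothetical sketch: start the system's services on the local Linux box. -->
<target name="start-services">
  <exec executable="sh">
    <arg line="bin/start-webserver.sh"/>
  </exec>
  <exec executable="sh">
    <arg line="bin/start-async-processes.sh"/>
  </exec>
</target>
```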
Using Ant to Solve Problems Posed by Frequent Deployments (Steve Shaw)
Maturity. A full Linux build environment was set up. The deployment process only had to change slightly: the primary build machine would kick off the build process on the other servers. Each machine would check the source out of CVS and compile it, stop and start its own web servers, and define the JMS queues locally. The database server was on an entirely separate machine by this time, and the database definition and initial data load were performed by a database client on the build machine. The obvious drawbacks to this type of deployment – redundant compiles and testing, and the fact that each server received a complete code drop and resource build even though it needed only a subset of the functionality – were acceptable in the development environment. It was even desirable on the development servers, as it gave developers the chance to run the entire system on each server, making it possible to cordon off a server or two for isolated testing.

Production. Several known issues became unacceptable when the system moved to production. The team split the build process into distinct parts – compile, unit test, jar, deploy/distribute, and integration test. The deploy/distribute phase has three parts. First, on the build machine, the necessary files (JARs, WARs, and whatever else is needed) are copied into subdirectories. Each subdirectory represents one machine – one of the web servers, or the services machine, for example. Next, these directories are distributed to the remote machines using rcp (the Linux/Unix remote copy command). Finally, rsh (remote shell) is used to kick off a shell script that handles the deployment of files on each machine locally (e.g., putting the web apps in a web server directory). The local shell script is also responsible for starting the services on each machine.
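The three-part deploy/distribute phase can be sketched as follows – in Python for concreteness, although the team drove the real process from Ant targets and shell scripts. The machine map, paths, and script names here are hypothetical.

```python
import shutil
import subprocess
from pathlib import Path

# Hypothetical machine-to-artifact mapping; the real project drove this from Ant.
MACHINES = {
    "web1": ["webapp.war", "common.jar"],
    "web2": ["services.war", "common.jar"],
    "batch1": ["async-processes.jar", "common.jar"],
}

def stage(build_dir: Path, stage_dir: Path) -> None:
    """Phase 1: copy each machine's artifacts into its own subdirectory."""
    for machine, artifacts in MACHINES.items():
        target = stage_dir / machine
        target.mkdir(parents=True, exist_ok=True)
        for artifact in artifacts:
            shutil.copy2(build_dir / artifact, target / artifact)

def distribute(stage_dir: Path) -> None:
    """Phase 2: push each subdirectory to its machine with rcp, then
    (phase 3) use rsh to run the machine-local deploy script."""
    for machine in MACHINES:
        subprocess.run(
            ["rcp", "-r", str(stage_dir / machine), f"{machine}:/opt/deploy"],
            check=True,
        )
        subprocess.run(["rsh", machine, "/opt/deploy/local-deploy.sh"], check=True)
```

Keeping each phase in its own function mirrors the way the real scripts were modularized by function rather than by platform or type of deploy.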
Once this setup was in place for the build machines, the deploy/distribute script was adapted to take the JARs from the build machine and perform deployment to production, with a different machine specified for each function. There was an immediate problem – the production subnet was tightly secured, and the local network could only communicate with a gatekeeper machine on that subnet. Furthermore, rsh and rcp were deemed too insecure to use on the production machines. The deploy process was further modified. Now all of the files are copied to the gatekeeper machine, and a script run on that machine is responsible for deploying the components to the other machines on the production network. In addition, all file copying and process invocations are now handled by scp and ssh, which required a little more setup on each machine.

Today, three different kinds of deployment are done. Developers run a "local build and deploy" on their workstations when they want to ensure that changes they made do not affect the system. The local build compiles the code, runs unit tests, deploys the code locally, starts the support services, and runs the integration tests locally. (In practice, the local build is only used when changes have been made that cannot be fully tested by the unit tests: usually changes to non-code resources (e.g., JSPs) or code changes that cross several subsystem boundaries.) The second kind of deployment is the nightly build. The build machine checks the current source out of CVS, compiles it, and runs all the unit tests. If the unit tests run correctly, the code is jarred and distributed to the appropriate machines. Resources are built or defined on the machines as needed, the database is rebuilt, and services are stopped and started as necessary. Integration tests are run, and the build is deemed a success if all the integration tests run properly.

The final kind of deployment is used for moving the system to the testing, staging, and production environments. These deployments are started manually. JARs from a known "good" build are deployed to the appropriate machines. Resources are redefined as necessary. Services are started automatically, and no integration tests are run. Since the target machines are not directly accessible from the local network, all of the deployment and services management is handled through the "gatekeeper" machine as described above. The difficulty of merging existing database data with a new schema is still handled semi-manually – a pair of developers will examine the differences between the current schema and the new one, and use scripting tools to assist in creating the database migration script. All three types of deployment are handled by one set of scripts that evolved over the lifetime of the project. The scripts are generally modularized by function instead of by platform or type of deploy; this allows us to keep all of the "start service" logic in one script, for example.

Things That Aren't Being Done Well… Yet

Monitoring System. The deployment process is not being monitored. Because some of the processes are kicked off asynchronously by remote shell commands, it is difficult to know when a deployment is complete. Similarly, there are no automated tests that are suitable for running against the deployed system – someone has to test the system manually after a deploy.

Windows Deployment. No allowance is made for a remote or distributed deployment on Windows. It is unlikely the system will ever be deployed to that environment, so this capability has not been a priority.
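The gatekeeper-mediated production deployment described above might be sketched the same way (again in Python for concreteness; the gatekeeper host name, directory layout, and fan-out script are hypothetical):

```python
import subprocess
from pathlib import Path

# Hypothetical host name for the one machine visible from the local network.
GATEKEEPER = "gatekeeper.prod.example.com"

def deploy_via_gatekeeper(stage_dir: Path) -> None:
    """Copy the staged tree to the gatekeeper with scp, then use ssh to run
    the script that fans the components out inside the production subnet."""
    subprocess.run(
        ["scp", "-r", str(stage_dir), f"{GATEKEEPER}:/opt/deploy/incoming"],
        check=True,
    )
    subprocess.run(
        ["ssh", GATEKEEPER, "/opt/deploy/fan-out.sh /opt/deploy/incoming"],
        check=True,
    )
```

The two-hop structure is the essential point: the local network only ever talks to the gatekeeper, and the gatekeeper's own script handles the final distribution.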
Supporting Adaptable Methodologies to Meet Evolving Project Needs

Scott Henninger, Aditya Ivaturi, Krishna Nuli, and Ashok Thirunavukkaras
Department of Computer Science & Engineering
University of Nebraska-Lincoln
Lincoln, NE 68588-0115
scotth@cse.unl.edu

Abstract. While most agile methodologies assume that change is inevitable, current approaches have adopted the strategy of defining practices and activities that are general enough to be adapted to many project settings. These methodologies can address variance and adaptability within their processes, but are unable to adopt different methodologies to meet the evolving needs of projects as they progress through their lifecycles or change to meet new business or user conditions. For example, a project may begin with a Scrum-based process, but require some XP processes, or even heavyweight processes, later in the lifecycle. Agile methodologies should be able to react to these changes with appropriate practices and processes that fit project needs at any point in time. In this paper, we describe a methodology generator, a tool that can create hybrid approaches to software development, spanning from the simplest to the agile to the heavyweight, depending on project needs. A rule-based system is combined with an experience-based feedback mechanism to define the conditions under which a given methodology, process, or activity is applicable to project needs. Deviations from the defined process are freely allowed, but they are captured by the tool so that they can be analyzed for process improvements that can help software development organizations become more adaptive to changes in business and technology conditions.
1 Finding the Right Methodology for the Job
While often confused with high-speed software development in "Internet time" [7], agile methodologies, such as Extreme Programming [1, 2], Adaptive Software Development [16], Scrum [25, 24, 23], Crystal methods [8], Dynamic Systems Development Method (DSDM) [27] and others have focused on the need for software development methodologies that can adapt to changing requirements and fast-changing business needs. A critical, but neglected, issue these methodologies face is the need to balance the desire for innovation with knowledge of past experiences and best practices. This tension between past knowledge and creating new knowledge is particularly acute in the software industry, which involves the development of a highly variable product that dictates the need for continuous process adjustments. The rise of agile methodologies can partly be attributed to a backlash against "heavyweight" methodologies that can overemphasize process, documentation, and stability over software development. These values can be dysfunctional when requirements are volatile [13] or business conditions change. The alternative is to embrace change [1, 16, 17] and refocus on ensuring that the right product is built, not just that the product is built to specification [9].
D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 33–44, 2002. © Springer-Verlag Berlin Heidelberg 2002
While these views are often seen as
commonsense and already widely used in industry [22], the act of designing methodologies that adapt to volatile conditions demonstrates a need that is not being met by traditional methodologies. But the choice is not binary, as demonstrated by Cockburn's Crystal methods, which define the need for increasing levels of methodology and procedure as the number of developers increases and/or the significance of the system to human needs increases, in terms of both monetary and loss-of-life risks [8]. What is missing, though, is the means to understand which methodology is appropriate under what circumstances.
The problem of understanding which methodology should be used under which circumstances has barely reached the consciousness of software engineering researchers and practitioners. It seems clear that a strategy that mixes a continuum of heavyweight and lightweight processes [5] will be necessary to tackle the complexity and diversity inherent in efforts that involve different development tools and languages, vastly diverse application domains, and varying degrees of developer experience and competence, among other factors. Organizations need to capture and manage this knowledge [14] to create best practices that suit specific development needs.
Einstein was known to have said "My pencil is cleverer than I," meaning that writing instruments have a longer-term memory than human beings. While it is clear that individuals and interactions are more important than processes and tools [3], tools can clearly help people perform their tasks. In this paper, we present a human- and tool-based approach that captures the context in which a particular practice, process, or methodology is applicable or known to be effective. Thus, the focus shifts from "which (agile) methodology is best" to "which set of practices is most effective for a specific development setting." In our approach, tool-based support is used to guide and inform humans, not to restrict or impede progress.
A rule-based system is used to capture the context in which certain process segments and methodologies are appropriate for software projects. In addition, and more importantly, developers and managers are free to deviate from the assigned process when necessary. These deviations are recorded in a manner that can be used to analyze how the overall process can benefit from the experiences of each individual project, so that future projects can benefit from the collective intelligence of the organization. While the overall approach is simple, the complexity and diversity of software development efforts will result in complex and emergent behavior that mirrors the complexity needed to successfully complete software development efforts.
1.1 Knowledge Management and Improvisation
While agile methodologies have focused on cutting down on excess process, our focus has come from a knowledge management perspective that aims to find criteria for tailoring and combining processes to meet project needs [14, 15]. Software development is a knowledge-intensive activity that involves many different kinds of knowledge and, unlike manufacturing processes, varies significantly from project to project. The implication is that it is not enough to consider the use of existing knowledge. Because software development (and design in general) has so much variance between projects, we need to focus more on the knowledge creation process
[11, 21]. One way to look at this is as a process of improvisation, where knowledge is not constructed from a blank sheet of paper, but "involves and partly depends on the exploitation of prior routines and knowledge" [10], while allowing appropriate amounts of exploration that enhance competitiveness and adaptability [18]. In other words, knowledge creation depends on knowledge use, and knowledge use leads to knowledge creation.
This process is captured in Figure 1, where knowledge creation is not only encouraged but required, at two levels. The first is at the team level, where a continuous review process is used to plan next phases and assess the appropriateness of the process knowledge provided to the team. Deviations from the process are viewed as a normal part of development, one that leads to refinements and opportunities for learning [21]. Deviations are handled in a disciplined manner through both the people-centered review process and an escalation step in which significant changes to the process are reviewed in a cross-project or organization-level analysis. Process specialists at this level have the ability to view all deviations and discover emergent process needs that can be used to refine, modify, and extend the process. Note also that the process begins by drawing on software development resources that are tailored to the project during the software creation phase. This ensures that novices and experts alike are alerted to organizational best practices, while allowing teams to deviate when necessary. Thus, the freedom to innovate is allowed, while giving people the tools to do so in an educated fashion (as indicated by the term "improvisation").
Fig. 1. Tailoring and Adapting Software Methodologies to Meet Project Needs.
2 Agility Based on Contextual Criteria
McConnell describes a similar knowledge maturation phenomenon for software developers, involving an evolution from self-reliant pioneers, to a focus on rules, to understanding "software project dynamics well enough to break the rules when needed" [20]. Our overall objective is to facilitate this maturation through tools that can help spread collective knowledge structures. This is accomplished by combining a tailoring process with a flexible methodology, using feedback to learn from experience and refine processes to better meet organizational needs. These concepts are demonstrated through BORE (Building an Organizational Repository of Experiences), a prototype tool designed to further explore and refine the requirements for tools supporting experience-based, disciplined improvisational development practices.[1] It combines a work breakdown structure (WBS) with repository tools to create a framework for designing software process methodologies, and repository technology for capturing and applying knowledge artifacts. BORE uses a case-based architecture to represent attribute-based information in a structure similar to software design patterns [12]. Cases represent activities that are assigned to projects in a WBS. The Case Manager window on the left in Figure 2 shows a WBS for a set of activities assigned to the "Bogus Client Account Manager" project, which is in the process of performing a Scrum sprint. The icons to the left of the names in the WBS are color coded to indicate status: the darker (blue) activities have been completed, while the lighter (green) activities are actively being worked on. Icons with a '?' indicate options associated with the activity that further refine the WBS and/or provide decision points for transitions between processes and methodologies (see Section 2.1). The "Backlog List" activity for a Scrum iteration is shown in the right-hand window of Figure 2.
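As a rough illustration, the case/WBS structure and its status icons might be modeled along these lines. This is a Python sketch under assumed field names; BORE itself is a Java applet with a relational database back-end.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Status(Enum):
    NOT_STARTED = "open"   # open square icon
    ACTIVE = "green"       # lighter icon: being worked on
    COMPLETED = "blue"     # darker icon: done

@dataclass
class Activity:
    """A case: one activity assigned to a project in the WBS."""
    name: str
    description: str = ""      # organizational standard / checklist
    solution: str = ""         # project-specific documentation
    status: Status = Status.NOT_STARTED
    has_options: bool = False  # '?' icon: decision point refining the WBS
    children: List["Activity"] = field(default_factory=list)

    def leaves(self) -> List["Activity"]:
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

    def progress(self) -> float:
        """Fraction of leaf activities completed, for an overall feel of progress."""
        leaves = self.leaves()
        done = sum(1 for a in leaves if a.status is Status.COMPLETED)
        return done / len(leaves) if leaves else 0.0
```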
The Description field is used to describe the organizational standard for the activity, in this instance providing a checklist of what needs to be accomplished to complete the activity. This field delivers activity information directly to the software manager or developer, instead of forcing people to page through manuals or search Web pages to find information about the organization's development processes and practices [15]. The Solution field shows project-specific documentation, and other fields (not shown) hold cross-referential links and other information. Projects are created from Methodologies, which consist of 1) Process Models and Activities that are composed in a WBS with the same tools available to projects, and 2) a set of tailoring rules that determine when the process models and activities should be applied to projects. On project creation, the user chooses an initial methodology and an instance of that methodology is created. The instance is represented by cases that refer to the activities defined in the methodology, allowing easy cross-referencing to other projects using the same activity.
[1] BORE is implemented as a Java applet with a relational database back-end. It can be reached at http://cse-ferg41.unl.edu/bore.html. New users can log in as 'guest' (no password) or contact the author for a user id. Unfortunately, being a research prototype, there is little in the way of help pages or tutorials, but we hope to change this soon.
Fig. 2. Case Manager (left) and a Project Activity (right) Windows.
The defined methodology can be tailored to meet project-specific needs through methodology rules, which define the conditions under which projects are required to execute a specific activity or process (where a process is defined as a related set of activities). In other words, the methodology rules choose which activities a project must follow to conform to the process standard. Deviations are allowed, and can be used to update the activities in the methodology or process models to reflect emergent needs of the organization, changes in technology, or new techniques adopted from outside the organization. The following section explains this process in more detail through a scenario that demonstrates the process adaptations possible in BORE.
2.1 Adapting the Process to Project Needs
The BORE system is designed to seek a balance between the reality of "an acceptance of emergent order as a source of solutions in complex realms" [16] and drawing on the collective intelligence of an organization. This allows a combination of both imposed and emergent order that is rarely supported by agile or software process methodologies. In the following scenario, we show how BORE supports a combination of methodologies to suit a project's individual needs. In terms of agile methodologies, BORE can design a methodology in which 1) a specific agile process is used throughout development, 2) different agile processes are used to execute different development iterations, 3) agile processes are mixed within iterations, even adding heavyweight
processes when needed, and 4) organization-specific and team-specific adaptations are added to any of the previous three combinations. In the following scenario, the data and processes depicted are designed to demonstrate these features and are not meant as recommended processes.
Suppose a project team is developing a customized Account Manager for the BogusClient company. It may include features like production revenue reports, net income and profit calculation, audit reports, etc. Let us also assume that the developers are relatively new to the domain of financial software development, and that the clients are unsure of how they should replace a legacy system that has become obsolete for some new business ventures BogusClient has become involved in. In a project of this nature, changes in requirements can be expected, attributable to new business practices, customer fluctuations, and/or revised government regulations.
When the project is first created, it is provided with a minimal set of activities, shown in the Case Manager to the far left in Figure 3. The "Choose initial methodology" activity has options associated with it (note the '?' in the icon) that bring up a standard set of questions, shown in the Options tab in the middle window of Figure 3. In our scenario, project members have chosen options indicating that the requirements are fairly well known and stable. Given the choices shown in the "Answered Questions" pane, a development process based on Feature-Driven Development (FDD) is assigned to the project. This means that, according to organization
Fig. 3. Choosing Options During Project Initiation.
standards, FDD practices fit the needs of this project. The activities show that the project should perform initial modeling, create a feature list, and do some planning based on those features. These activities are further broken down, as shown in the WBS in the Case Manager window to the right in Figure 3. After that, an FDD iteration is started. Note that the project is currently in the middle of the first iteration, as indicated by the lighter, filled-in status boxes. The darker boxes mean the activities have been completed, and the open squares mean the activity has yet to begin. Although not all projects will follow the WBS sequentially as shown here, the status icons can give people an overall feel for project progress. Note that BORE has assigned activities to the project in an agent-like fashion. The software developer or manager does not have to fumble through dozens of three-ring binders or Web pages to find the information they need to perform the activity. It is delivered to the user in the activity. For example, the Description field of the Backlog List activity in Figure 2 is defined in the methodology and copied to the project when an instance of the methodology's activities is created for the project.
2.2 Capturing Knowledge through Experience
The team, or team members with proper permissions, can deviate at any time from the assigned set of activities by adding new activities or deleting assigned ones. This invokes a "deviation rationale" process that escalates the deviation to the analysis phase (as shown in Figure 1). The user is asked for an explicit rationale explaining the circumstances necessitating the change. Users are instructed to put the rationale in a form that can easily be turned into preconditions for rules – that is, the user should state "under these circumstances the following changes were necessary." The change is given provisional status (editing is allowed immediately after entering the deviation rationale) until the deviation is approved. The approval process can be as agile or heavyweight as desired. BORE has a policy interface (not shown) that allows a system administrator to assign permissions to roles as desired by the organization and project. In our scenario, the standard has a rule stating that when the options are answered in the manner shown in the middle window of Figure 3, the activities shown in the right-hand window are assigned to the project. This involves a methodology modeling process consisting of two steps. The first is to set up the methodology in a work breakdown structure consisting of activities and process models. In this case, an FDD-based process model with the displayed activities is added to the project. The process model specifies the activities, activity ordering, activity structure, activity options, and other aspects of the model that are instantiated when a rule fires. The second step in the methodology modeling process is to create the rules. Rules are created in an easy-to-use window (not shown) where preconditions and actions can be chosen [15]. Preconditions are question/answer pairs such as the ones shown in the Answered Questions pane of the activity window shown in the middle of Figure 3.
When all preconditions evaluate to true (the evaluation takes place whenever an option is chosen or another precondition event occurs), the rule fires all of its defined actions. Full backtracking is supported, allowing options to be changed at any time. The most common action is to add or delete activities in a project. Other actions include adding and deleting questions in the New Questions pane (allowing decision trees to be implemented) and sending alerts to people registered for actions taken by agents in the development process [19].
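The precondition/action cycle described here can be illustrated with a small sketch. This is a deliberately simplified Python illustration with hypothetical question strings; BORE's actual rule engine and its backtracking support are richer than this.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class Rule:
    # Preconditions are question/answer pairs; all must match for the rule to fire.
    preconditions: List[Tuple[str, str]]
    # Actions run when the rule fires, e.g. adding activities to the project.
    actions: List[Callable[["Project"], None]]
    fired: bool = False

@dataclass
class Project:
    answers: Dict[str, str] = field(default_factory=dict)
    activities: List[str] = field(default_factory=list)

def add_activity(name: str) -> Callable[["Project"], None]:
    """The most common action: add an activity to the project's WBS."""
    def action(project: "Project") -> None:
        if name not in project.activities:
            project.activities.append(name)
    return action

def evaluate(rules: List[Rule], project: Project) -> None:
    """Re-run whenever an option is answered: fire each rule whose
    preconditions all evaluate to true against the current answers."""
    for rule in rules:
        if not rule.fired and all(
            project.answers.get(q) == a for q, a in rule.preconditions
        ):
            rule.fired = True
            for action in rule.actions:
                action(project)
```

For example, a review that answers a hypothetical "requirements stable?" question with "no" could fire a rule whose action adds a Scrum iteration to the project's activity list.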
2.3 Iterative Development
Iteration is a requirement for all agile methodologies. BORE accomplishes this through the process models used by methodologies. This organization's methodology is iteration-based, with a review at the end of each iteration. The review is guided by the options in the review activity. As shown in Figure 4, suppose the review reveals that the requirements are not as stable as originally assumed. The options chosen (lower-left window of Figure 4) lead the team to a Scrum iteration (right-hand window of Figure 4), which is better suited to projects with volatile or ill-defined requirements. But FDD and Scrum are not necessarily compatible. In particular, the feature list of FDD may need to be translated into a Scrum backlog list. The methodology creators (perhaps drawing on lessons learned from other teams) have realized this, and the rule places a "Convert FDD Feature list to a Scrum Backlog list" activity in the project (in the middle of the WBS in the right-hand window of Figure 4). Although not shown here, the team can perform as many iterations of Scrum or other methodologies as needed, similar in spirit to the Spiral model [6].
2.4 Supporting Hybrid Methodologies
Suppose the team encounters new and unexpected developments, such as tax law changes imposed by the government, during the Scrum iteration. The development team has to incorporate those changes into the system during the development phase. One of the activities, not shown here, has an option that inquires about the adequacy of the system architecture given current and upcoming requirements. This serves as advice to the team to review the architecture. During the Review activity, it is discovered that the new changes are not supported by the current architecture. The team therefore chooses an option to indicate that major revisions to the system architecture are needed. Instead of a pure Scrum process, the standard then specifies that an XP refactoring process needs to be applied as part of the development activities in the iteration, and a set of refactoring activities, based on XP refactoring, is assigned to the project. This ought to be natural, as Scrum tends to focus on management practices and XP on engineering practices [26], and projects will need varying amounts of each during their lifecycles. It is important to note that options are asked when needed in the development process – not just at the beginning of the project, but throughout the entire system lifecycle.
2.5 Disciplined Process Improvement
Although space prevents us from going into further detail, it should be clear at this point that BORE can support hybrid approaches that mix both agile and plan-driven approaches [5]. Furthermore, it provides a formal mechanism for capturing the circumstances in which a given approach is applicable, in a form that people can control through a review and analysis process (Figure 2) that continuously improves the process and the knowledge supporting the process. Thus we have a kind of Adaptive Life Cycle [16] that ensures that not just the project, but the process itself, can adapt to dynamic needs experienced within the organization.
Fig. 4. Options Available After the Second Scrum Iteration.
While the options and the rule-based system are purposefully simplistic, the combination is capable of capturing complex phenomena. In effect, the options capture project requirements as they evolve through the iterations. Because the options are a formal medium, people can discuss, cajole, or rant about which options should be chosen and whether they are applicable to the project. They thus serve as a formal medium for communication and collaboration, one that people can build upon as more is learned about the development process, application domains, and technologies the organization uses. As organizations get larger, such tools will become essential for supporting the collaborative network that enables the organization to adapt and produce emergent results.
3 Analysis of Contributions
The main contribution of BORE and its associated methodology is to utilize process tailoring mechanisms to capture potentially complex software development phenomena. The BORE system allows people to draw from the collective intelligence of the organization while allowing necessary amounts of adaptation and
emergent behavior. Using the BORE methodology ensures the process is not defined as a single entity and therefore does not fall into the chasm created by the observation that “The more universal a process, the more flexible it needs to be” and its corollary, “The more rigorous a process, the narrower its applicability” [16]. The process consists of many pieces, some of which are applicable under certain circumstances. BORE provides a formalized medium in which software development processes can be analyzed for improvement opportunities. A local IT organization is interested in BORE for just this reason. They have a defined process where managers adopt one of about 40 predefined development processes. Management realizes that teams are tailoring or not following the defined processes, but have no way of tracking how and when the tailoring occurs. This makes it extremely difficult to tailor the processes to better meet project needs. Their particular concern is with requirements engineering, which they have identified as particularly costly, as developers do not have adequate means to track requirements creep, scoping problems, requirements mismatches and related issues. We hope to begin prototyping their requirements process this summer in an effort to both track conformance and capture areas needing improvement. Another issue that plagues methodology efforts is obtaining adequate buy-in from development personnel to make improvement initiatives successful. While top-level management buy-in remains essential, the BORE system facilitates a collective ownership of the process that may help reduce these problems. Instead of being a purely top-down process, project personnel have a stake in the process, as their actions will help determine the processes adopted by the organization. In Highsmith’s terms, BORE creates an ecosystem with independent agents that produce emergent results that evolve over time [16]. 
Whether this evolution will lead to chaos or to order over time is an empirical question, and may depend heavily on the extent to which common development efforts occur within an organization. The BORE system can help achieve this consistency through a positive feedback loop set up by pushing processes to the developers, instead of leaving the decisions to individuals. The path from pure rationalism to pure chaos is a gray scale, not a binary decision that software managers have to make, and BORE facilitates finding the right places within that continuum.
4 Conclusions and Future Directions
This research concentrates on "being agile by using adaptations of methods instead of finding agility in the methods themselves" [2] by adapting development methodologies to meet the needs of individual organizations and projects within those organizations. This can be a challenging task in the current rhetorical climate. As Boehm states, "…both agile and plan-driven approaches have a responsible center and overinterpreting radical fringes" [5]. The BORE system and its methodology overcome this barrier by allowing different parts of methodologies to be adapted as needed. Furthermore, the focus on basing decisions on best practices embodied within the organizational context, while using necessary improvisation to learn from experience, will help ensure that the process remains viable in the face of dynamic forces [15]. The term "improvise" is key here, as it connotes the fact that
2
A direct quote from an insightful reviewer.
Supporting Adaptable Methodologies to Meet Evolving Project Needs
43
innovation occurs within the known framework [10] – the process repository of best practices, in our case. The most important next steps include the difficult task of getting organizations to take part in efforts to evaluate emerging research in realistic settings. This involves both moving beyond the fragile prototype stage, and finding realistic contexts for evaluation. In this regard, we are beginning to stabilize the system and have a diverse range of development settings that will likely use the tool and associated methodology. Empirical evaluations are starting at local IT departments and year-long software development courses at the University. It is also being evaluated as an implementation technology for the MBASE methodology at the University of Southern California [4]. In addition, other organizations have shown interest and may engage in further pilot studies that will be the subject of subsequent research papers.
Acknowledgements
We gratefully acknowledge the efforts of a number of graduate students who have helped develop BORE, particularly Kurt Baumgarten, Kalpana Gujja, V. Rishi Kumar, and Sarath Polireddy. This research was funded by the National Science Foundation (CCR-9502461 and CCR-9988540).
Bibliography
1. Beck, K., "Embracing Change with Extreme Programming," IEEE Computer, 32(10), pp. 70-77, 1999.
2. Beck, K., Extreme Programming Explained. Boston: Addison-Wesley, 2000.
3. Beck, K., et al., "Manifesto for Agile Software Development," 2001, http://agilemanifesto.org/, accessed 4/1/02.
4. Boehm, B., "Anchoring the Software Process," IEEE Software, 13(4), pp. 73-82, 1996.
5. Boehm, B., "Get Ready for Agile Methods, With Care," Computer, 35(1), pp. 64-69, 2002.
6. Boehm, B. W., "A Spiral Model of Software Development and Enhancement," Computer, 21(5), pp. 61-72, 1988.
7. Braiterman, J., Verhage, S., Choo, R., "Business: Designing With Users in Internet Time," interactions, 7(5), pp. 23-27, 2000.
8. Cockburn, A., "Selecting a Project’s Methodology," IEEE Software, 17(4), pp. 64-71, 2000.
9. Curtis, B., Krasner, H., Iscoe, N., "A Field Study of the Software Design Process for Large Systems," Communications of the ACM, 31(11), pp. 1268-1287, 1988.
10. Dybå, T., "Improvisation in Small Software Organizations," IEEE Software, 17(5), pp. 82-87, 2000.
11. Fischer, G., Ostwald, J., "Knowledge Management: Problems, Promises, Realities, and Challenges," IEEE Intelligent Systems, 16(1), pp. 60-72, 2001.
12. Gamma, E., Helm, R., Johnson, R., Vlissides, J., Design Patterns: Elements of Reusable Object-Oriented Software. Reading, MA: Addison-Wesley, 1995.
13. Gould, J. D., Lewis, C. H., "Designing for Usability - Key Principles and What Designers Think," Communications of the ACM, 28, pp. 300-311, 1985.
14. Henninger, S., "Case-Based Knowledge Management Tools for Software Development," Journal of Automated Software Engineering, 4(3), pp. 319-340, 1997.
15. Henninger, S., "Turning Development Standards Into Repositories of Experiences," Software Process Improvement and Practice, 6(3), pp. 141-155, 2001.
16. Highsmith, J. A., Adaptive Software Development: A Collaborative Approach to Managing Complex Systems. New York: Dorset House, 2000.
17. Highsmith, J. A., Cockburn, A., "Agile Software Development: The Business of Innovation," IEEE Computer, 34(9), pp. 120-122, 2001.
18. March, J. G., "Exploration and Exploitation in Organizational Learning," Organization Science, 2(1), pp. 71-87, 1991.
19. Maurer, F., Dellen, B., Bendeck, F., Goldmann, S., Holz, H., Kotting, B., Schaaf, M., "Merging Project Planning and Web-Enabled Dynamic Workflow Technologies," IEEE Internet Computing, May-June, pp. 65-74, 2000.
20. McConnell, S., "Raising Your Software Consciousness," IEEE Software, 18(6), pp. 7-9, 2001.
21. Nonaka, I., Takeuchi, H., The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. New York: Oxford Univ. Press, 1995.
22. Paulk, M. C., "Extreme Programming From a CMM Perspective," IEEE Software, 18(6), pp. 19-26, 2001.
23. Rising, L., Janoff, N. S., "The Scrum Software Development Process for Small Teams," IEEE Software, 17(4), pp. 26-32, 2000.
24. Schwaber, K., "SCRUM Development Process," OOPSLA’95 Workshop on Business Object Design and Implementation, 1995.
25. Schwaber, K., Beedle, M., Agile Software Development with Scrum. Prentice Hall, 2001.
26. Sliwa, C., "XP, Scrum Join Forces," Computerworld, 2002, http://www.computerworld.com/itresources/rcstory/0,4167,KEY11_STO69183,00.html.
27. Stapleton, J., Dynamic Systems Development Method: The Method in Practice. Harlow, Essex: Addison Wesley Longman Ltd, 1997.
Strategies for Introducing XP to New Client Sites
Jonathan Rasmusson
ThoughtWorks Canada Corporation, 805-10th Ave SW, 3rd floor, Calgary, Alberta, Canada T2R 0B4
jrasmusson@thoughtworks.com
Abstract. In this article, the author shares how ThoughtWorks introduced XP into an organization and successfully completed a bleeding-edge technology project with client staff who had no previous experience using an Agile development approach. This article illustrates not only how XP helped make the project a success, but also provides other valuable lessons learned regarding the introduction of XP at client sites.
1
Background
Our client, TransCanada PipeLines Limited (TCPL), is a leading North American energy company that specializes in natural gas transmission and power generation. TCPL builds many custom applications using Sun Microsystems’ Java J2EE technology. TCPL engaged ThoughtWorks to prove that a series of new technologies and standards (web services) were ready for widespread enterprise adoption.
The project’s mandate was twofold. First, determine whether the group of web service technologies and standards was mature enough to warrant building applications within the enterprise. Second, re-write a legacy application to demonstrate the ability of the technology to integrate with other applications. The legacy application was an event notification system written in Forte and Sybase stored procedures. Users monitored existing applications for specific business events, and then used the system to notify registered users once an event had occurred.
When ThoughtWorks recommended XP as a development approach, TCPL naturally had some concerns. For instance, XP’s lack of investment in up-front design and documentation was contrary to more traditional forms of application development. It was also unclear who would be responsible for the system’s overall architecture. Fortunately, converting skeptics was not a task assigned to the project team. Rather, our job was to demonstrate that XP worked by example. We had six months to prove out the technology, train the team in XP, learn the business, and produce a production-ready application.
The project team consisted of approximately twelve people: a project manager and iteration manager, a build master, a business analyst, four junior developers (no prior XP experience), and four intermediate/senior developers (prior XP experience). Upon delivery, the code base had 21,000 lines of application code and 16,000 lines of test code.
D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 45–51, 2002. © Springer-Verlag Berlin Heidelberg 2002
2
Overview
This paper is targeted at those who are interested in introducing XP to client sites. It is written from the point of view of a senior developer (the author) who had experience with XP prior to the project. The project applied many software development best practices – not all of them exclusive to XP. The Practices section covers those practices I would fight most strongly for on any future project and offers suggestions on how best to apply them on greenfield projects. Hindsight is 20/20, and given a second chance there are some things we would do differently. The Room for Improvement section looks at those areas and describes what we would change given the opportunity.
3
The Practices
3.1
Unit Testing
Unit tests have become such a fundamental part of the way we write software that it is hard to imagine life without them. If there were only one practice I could recommend to developers, XP or not, it would be writing unit tests. The unit tests were invaluable. They allowed us to refactor [3]. They provided immediate feedback when we broke something. They highlighted the sometimes subtle and difficult-to-detect differences between our production and development environments. In a sea of change, the unit tests were our anchor.
Everyone on the team bought into the concept of unit tests and saw their value. Effective unit testing, however, took time to learn and was best taught in pairs. By pairing up those with JUnit [8] experience and those new to unit testing, we quickly got everyone into the habit of writing tests.
One area to watch out for when writing unit tests is duplication. The XP practice of ‘test everything that could possibly break’ needs to be applied carefully. Overzealous application of this practice can result in redundant testing of proven functionality. This can be detected when you are working in different parts of the system and you see very similar looking unit tests. Another sign is if you or your partner are cutting and pasting parts of unit test code from one test class to another.
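To make the duplication point concrete, here is a small sketch in the spirit of the tests described. It uses only the JDK rather than JUnit so it stays self-contained, and the EventFilter class and all names are invented for illustration, not code from the project. The shared subscribedTo helper shows one way to remove the cut-and-paste fixture setup that breeds near-identical tests.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical class standing in for a small slice of a notification system.
class EventFilter {
    private final List<String> subscribed = new ArrayList<>();
    void subscribe(String event) { subscribed.add(event); }
    boolean matches(String event) { return subscribed.contains(event); }
}

public class EventFilterTest {
    // Shared fixture helper: factoring this out once removes the
    // copy-and-paste setup that tends to spread across test classes.
    static EventFilter subscribedTo(String... events) {
        EventFilter filter = new EventFilter();
        for (String event : events) filter.subscribe(event);
        return filter;
    }

    static void check(boolean condition, String name) {
        if (!condition) throw new AssertionError(name + " failed");
        System.out.println(name + " passed");
    }

    public static void main(String[] args) {
        check(subscribedTo("gas-flow").matches("gas-flow"), "matchesSubscribedEvent");
        check(!subscribedTo("gas-flow").matches("outage"), "ignoresUnsubscribedEvent");
    }
}
```

When two test classes start needing the same helper, that is usually the moment to promote it into a shared fixture rather than paste it again.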
3.2
Refactoring
If simplicity is the destination, refactoring [3] is the vehicle for getting us there. When taken in isolation, each individual refactoring is quite simple. What is not so obvious is how to refactor effectively. People new to refactoring are sometimes unclear about when to refactor, and to what extent. Getting people to refactor effectively on the project was best handled by pairing people new to refactoring with those possessing extensive refactoring experience.
One of the cornerstones of XP was that the code must be in its cleanest and simplest state at all times [2]. This means the team must refactor all the time, to the fullest extent. When we did not follow this rule, the code became more cumbersome to work with, and we felt our velocity drop.
3.3
Simplest Thing Possible
Everyone on the team agreed that we wanted to always do the simplest thing possible when writing code. What became challenging was defining simplicity. One area where this became particularly relevant was pattern usage. Some on the team were quite comfortable with multiple layers of indirection and the application of design patterns. Popular pattern literature offers advice like “Program to an interface, not an implementation” [4]. However, for others on the team these solutions often seemed overly complex for the problem at hand. Furthermore, the XP literature reminds us that we should always “do the simplest thing that could possibly work,” because any extra work we do anticipating future requirements is often unwarranted and misapplied. Patterns and XP appear to clash on the issue of simplicity.
The best explanation I have seen regarding this conundrum was presented by Joshua Kerievsky [5]. Kerievsky points out that the XP simplicity practice has definite merit. Why over-design a system in a vain attempt to anticipate future requirement changes? Instead, start simple and let the code tell you whether the use of a pattern is warranted. Design patterns should be targets for our refactorings. Refactor into the pattern rather than starting there. This simple advice is powerful and yields a good balance between XP’s quest for simplicity and the expressive power of patterns.
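As a hedged sketch of “refactor into the pattern” (all names are invented, not the project’s code): the Notifier interface below would not be written up front; it would be extracted only once a second delivery channel actually appeared and the duplication justified the indirection.

```java
// After the refactoring: the interface earned its place because a second
// channel showed up, not because we anticipated one.
interface Notifier {
    String send(String user, String event);
}

class EmailNotifier implements Notifier {
    public String send(String user, String event) {
        return "email to " + user + ": " + event;
    }
}

class PagerNotifier implements Notifier {
    public String send(String user, String event) {
        return "page to " + user + ": " + event;
    }
}

public class NotifierDemo {
    public static void main(String[] args) {
        // Callers now "program to an interface, not an implementation".
        Notifier[] channels = { new EmailNotifier(), new PagerNotifier() };
        for (Notifier channel : channels)
            System.out.println(channel.send("kelly", "pressure-drop"));
    }
}
```

Had only EmailNotifier ever existed, the simplest design would have been the concrete class alone, with no interface at all.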
3.4
Test First Design
Most developers on the team were unfamiliar with the concept of writing test code before the implementation. Pairing in these situations helped tremendously. While I can offer no proof that the code written test first was of higher quality than the code written with tests after, my hunch points to the former. As an estimate, approximately one third of the code was implemented test first. On our project, test first tended to occur only when one or both members of a pair were keen on its application. When people worked alone, or with others unfamiliar with the practice, they slipped back into their more comfortable non-test-first world.
Effectively applying test first on greenfield projects takes time, because of the number of skills that are needed before the practice can be applied. Developers must first be comfortable with writing unit tests and with simple design. This gives them the confidence to take those first tentative steps into the world of ‘write the code as if it already existed’ and worry about the implementation later.
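The test-first rhythm can be sketched in one file; Throttle and its clamp method are invented for illustration, not taken from the project.

```java
// A sketch of the test-first rhythm, not project code.
//
// Step 1 (red): the test in main below was written first, calling clamp
// as if it already existed; it failed to compile.
// Step 2 (green): the simplest clamp that passes was then written.
class Throttle {
    static int clamp(int value, int lo, int hi) {
        return Math.max(lo, Math.min(hi, value));
    }
}

public class ThrottleTest {
    static void check(boolean condition, String name) {
        if (!condition) throw new AssertionError(name + " failed");
        System.out.println(name + " passed");
    }

    public static void main(String[] args) {
        check(Throttle.clamp(150, 0, 100) == 100, "clampsHigh");
        check(Throttle.clamp(-5, 0, 100) == 0, "clampsLow");
        check(Throttle.clamp(42, 0, 100) == 42, "leavesInRangeAlone");
    }
}
```

The point is the ordering, not the example: the test expresses the intended interface before any implementation exists.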
3.5
Continuous Integration
Where JUnit [8] aided us in testing, another open source project called Ant [6] helped us continuously integrate our code into the master build. Ant build scripts are analogous to makefiles in UNIX. Our Ant build scripts could check out our latest code, compile it, release it to our target environment, and run all unit tests through a single command. Because Ant is written in Java, we were able to run the same build process on our various development environments.
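A minimal sketch of what such a build script might look like; the target names, directory layout, and the deploy.dir property are invented here, not taken from the project's actual script.

```xml
<!-- Hypothetical Ant build: checkout, compile, deploy, and test
     chained so that "ant all" runs the whole cycle. -->
<project name="notifier" default="all" basedir=".">
  <target name="checkout">
    <cvs command="update -d" dest="src"/>
  </target>
  <target name="compile" depends="checkout">
    <mkdir dir="build/classes"/>
    <javac srcdir="src" destdir="build/classes"/>
  </target>
  <target name="deploy" depends="compile">
    <!-- deploy.dir is a placeholder property for the target environment -->
    <copy todir="${deploy.dir}">
      <fileset dir="build/classes"/>
    </copy>
  </target>
  <target name="test" depends="deploy">
    <junit haltonfailure="true">
      <batchtest>
        <fileset dir="build/classes" includes="**/*Test.class"/>
      </batchtest>
    </junit>
  </target>
  <target name="all" depends="test"/>
</project>
```

A real script would also need classpath and JUnit formatter configuration, but the chained depends attributes are what give the single-command build described above.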
The other tool we used was an open source project started at ThoughtWorks called CruiseControl [7]. CruiseControl is a Java program that periodically scanned our CVS repository for newly checked-in code. When new code was checked in, CruiseControl would run our build scripts and promptly report if anything was wrong.
If we did it again, I would put more emphasis on finding ways of making the tests run more quickly, without compromising the ability to test the system exactly as it would run in production. At the end of the project, our system took 24 minutes to build. As a result, developers started running only a subset of the tests before checking code in. One possible solution for speeding up our tests is the use of MockObjects [9] to fake out web container and database calls. Another option would be to direct our tests against fast, lightweight, in-memory databases instead of their full-blown production counterparts that continuously write to disk.
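The MockObjects idea mentioned above can be sketched as follows (all names invented): once the slow database sits behind an interface, tests can substitute an in-memory stand-in that never touches disk.

```java
import java.util.HashMap;
import java.util.Map;

// The seam: production code depends on this interface, not on JDBC.
interface EventStore {
    void save(String id, String payload);
    String load(String id);
}

// A production implementation would wrap JDBC calls; it is omitted here.

// In-memory stand-in used by the unit tests - fast, no disk I/O.
class InMemoryEventStore implements EventStore {
    private final Map<String, String> rows = new HashMap<>();
    public void save(String id, String payload) { rows.put(id, payload); }
    public String load(String id) { return rows.get(id); }
}

public class EventStoreTest {
    public static void main(String[] args) {
        EventStore store = new InMemoryEventStore();
        store.save("42", "valve-open");
        if (!"valve-open".equals(store.load("42")))
            throw new AssertionError("round trip failed");
        System.out.println("round trip passed");
    }
}
```

The trade-off is that such tests prove the logic around the store, not the real database integration, so a slower full-stack suite is still needed somewhere in the build.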
3.6
Pair Programming
Pairing is the most powerful XP practice and made the largest contribution to our overall project success. Unit testing, refactoring, test first, and simplicity are vital. Pairing, however, is the means by which we got the team applying these practices to their fullest extent. Pairing was the most effective way to communicate XP practices to new team members. Not unexpectedly, people embraced XP at different speeds. Not everyone immediately started writing test-first code. It took time to learn effective unit testing. Refactoring was not second nature. Gentle pairing, however, brought a level of consistency to the team regarding XP’s practices.
One piece of advice for senior developers when pairing with junior developers is to work at the junior’s speed. A junior developer commented that he initially found pairing difficult because the senior person would work too quickly. The junior felt more like a passenger traveling at mach speed while the senior developer was doing all the work. In situations like these, it is important for both partners to feel comfortable in order to contribute effectively.
3.7
Standup Meetings
Daily standup meetings, lasting no more than 15 minutes, were a critical component of the success of the project. The simple act of collecting everyone into a room and sharing knowledge was a very efficient means of enabling mass communication between team members and stakeholders. When interviewing team members on the factors they considered essential to the project’s success, standups ranked very high. They were the town hall meetings that everyone wanted to attend to get an update on all the daily project events.
3.8
The Environment
Like any ecosystem in nature, there are certain environmental conditions that help the inhabitants live and thrive. The optimal ecosystem on this project was when the team was co-located, working within earshot of each other. Big open spaces with large tables where developers could sit side by side and pair program were most conducive to a productive work environment. This was similar to the “caves and common” room layout described by Auer [1]. In this scenario there is a “common” area for pairs and groups of developers to congregate, and a “caves” area for people when they require privacy. This strikes a nice balance between the high-bandwidth pairing environment and the more secluded private space we all sometimes need.
3.9
It’s the Little Things That Count
Like all things in life, it is often the little things that end up making the biggest differences. Here are some little things that had big impacts on the project. Little pats on the back were greatly appreciated. Management realized that by making the developers feel genuinely appreciated, developers in turn continuously went that extra mile. Tasty snacks for those long iteration-planning meetings and occasional mid-afternoon coffee runs helped keep the project fun and the morale high. The QA department demonstrated their appreciation to developers who had written great unit tests by rewarding them with stickers (much like the ones we would have received in grade school). These soon became sought-after items among other developers. While these small gestures may seem trivial by themselves, cumulatively they contributed in their own little way to the success of the project. I encourage teams to look for ways of making their projects fun and to give each other occasional pats on the back for a job well done.
3.10
Release Management
We found TCPL very receptive to how XP handles release management. Our release cycles were based on two-week iterations, with a formal release in six months. Two weeks was sufficient to predictably add new functionality to the system and give management a sense of how the project was progressing. We also found the two-week period kept developers focused on delivering quickly, not lulled into the sense of complacency that can occur with longer iteration periods. Story planning worked best when someone with authority, who could speak on behalf of the customer, was present. The client also appreciated the fact that stories could be re-prioritized without having a huge impact on the project. As requirements changed, XP allowed the development plan to change accordingly with minimal impact. As discussed in Get a Real Business Customer, we did encounter challenges with the prioritization of stories and the identification of business problems.
4
Room for Improvement
4.1
Communication with External Groups
In the later stages of development, we needed the services and expertise of external groups. Specifically, we required database and system administrators to configure and set up our production environments. Naturally, we set up stories within our iterations regarding the interaction with these external groups to track progress. These stories became the most difficult to complete and took the longest time. One mistake we made was assuming that others worked within our time frames – two-week iteration periods.
When working with external groups, it is helpful if you can send a scout ahead (preferably a project manager to talk to the external groups’ leaders) to ensure everyone is on the same page and to minimize any roadblocks ahead of time. It may also have been beneficial to invite external group members to a few of our standup meetings. This would have gently shown them how we worked, and shared with them our objectives and goals. More communication with these groups would have made our lives easier. If you have a dependency on an external group, start working with them sooner rather than later.
4.2
Get a Real Business Customer
The direction of the project was periodically challenged. It was not always clear what business problem we were trying to solve. On one hand, our mandate was to replace a legacy application. On the other hand, the new technology we were working with had the potential to help other groups within TCPL solve their business problems. Because the business backers of the project periodically changed, so did the requirements. As a result, story planning sometimes became tricky. Without one single business customer continuously present within the project, the analyst and project manager had to periodically juggle stories on behalf of multiple clients. Story planning would have been much easier if there had been a single business customer continuously present on the project driving requirements.
Looking back, I believe we did a good job of gathering and implementing requirements from multiple drivers. How might things have turned out if we had had a single strong business partner from the beginning? Would certain investigative technology spikes have been forgone? Might we have done things even more simply? Were we to do it again, I would fight for a single business customer.
4.3
Resist the Urge – Do the Right Thing
One is always tempted to take shortcuts when coding. “Our code is so well tested we can afford to skip that unit test.” “We don’t really need to refactor that code because this is the last iteration and there will be no more work after that.” “It’s OK that 99% of the tests run.” You get the idea. Ignore those little voices in the back of your mind telling you that everything is all right and that you are home free. Whenever we gave in and took the easy way out, we ended up paying for it later. When you have a successful project, remember what got you there and keep doing it. Do not let your guard down.
5
Summary
XP can be introduced to client sites with no previous XP experience. The key is to have enough people familiar with XP’s practices who can help teach others their effective application.
The XP practices are best passed to new team members through extensive pairing. Some XP practices (unit testing and refactoring) will be more readily adopted than others (test first). Pairing is the key to getting everyone applying the practices at a high level of disciplined consistency. As with any project, it helps tremendously if you have good people: good not only in their ability to work with technology and develop software, but, more importantly, good communicators who can express their ideas and work well with others. Teams made up of this sort of good people will always find a way to make projects successful.
Acknowledgements
This article would not have been possible without the help and support of many people. I would like to thank Vivek Sharma, Dean Larsen, Jason Yip, Eric Liu, Paul Julius, David Fletcher, Olga Babkova, Janet Gregory, Brad Marlborough and Tannis Rasmusson for their insightful comments on drafts of this paper.
About the Author
Jonathan Rasmusson can be reached at ThoughtWorks, 805-10th Ave SW, 3rd floor, Calgary, Alberta, Canada T2R 0B4;
[email protected]
References
1. Auer, K., Miller, R.: Extreme Programming Applied: Playing to Win. Addison-Wesley, Boston, MA, 2002.
2. Beck, K.: Extreme Programming Explained. Addison-Wesley, 2000.
3. Fowler, M., et al.: Refactoring: Improving the Design of Existing Code. Addison-Wesley, Reading, Mass., 1999.
4. Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1995.
5. Kerievsky, J.: “Patterns and XP.” In: Extreme Programming Examined. Addison-Wesley, Reading, Mass., 2001.
6. Ant. Build tool for Java. http://jakarta.apache.org/ant/index.html
7. CruiseControl. Continuous integration tool. http://cruisecontrol.sourceforge.net/
8. JUnit. Testing framework for Java. http://www.junit.org/index.htm
9. MockObjects. A unit testing technique. http://www.c2.com/cgi/wiki?MockObject
Establishing an Agile Testing Team: Our Four Favorite “Mistakes”
Kay Johansen and Anthony Perkins
Lehi, Utah 84043, USA
kay@xmission.com, anthonyperkins@email.com
Abstract. The authors have spent the past year building a test team in the highspeed, high-change environment of an Internet development company. As we’d hoped, the Agile values helped us tackle the difficulties of testing in such an environment. We also discovered that the Agile values helped us with the politics of working with other teams and obtaining support in the organization.
1
Introduction
Over the past year, the authors have learned the truth of Tom DeMarco’s statement that "the major problems of our work are not so much technological as sociological in nature." [1] This paper describes our experience building a software testing team using the Agile values. We'd used Extreme Programming on a previous development team [2], and were curious to find out if the Agile values could be extended to testing. When an opportunity came to join a large Internet development company and develop their testing team at their Utah site, we took it. We were drawn to the challenge of adding testing to the development process without bottlenecking their productivity or being rejected by their rather developer-driven culture, all the while maintaining management support. We thought we were up to the challenge. We were armed with the Agile values which we hypothesized would work in testing as well as they had in development, and with our development background which would help us create the kind of testing team that could fit into an overall Agile development process—specifically, we thought we'd support rapid incremental development by significant automation of acceptance tests. We were wrong about the latter, but right about the Agile values. Not only did they help guide our decisions in the one area we had control over—the actions and attitudes of the testing team—they also affected the way we tried to influence factors out of our direct control—the actions and attitudes of others. We cared about the actions and attitudes of others because we knew we weren't going to be in a position of power. We would be part of an interrelated community of teams, each team supporting and depending on others to accomplish objectives. Even if we succeeded in creating an effective testing team, we couldn't assume that everyone would recognize the value of our testing effort or understand and agree with our Agile approach. 
D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 52–59, 2002. © Springer-Verlag Berlin Heidelberg 2002
The risk of being rejected by the organization led us to try some
things that seem counter-intuitive from a traditional testing perspective, but which (in hindsight) do seem to be supported by the Agile values. The counter-intuitiveness of these actions is why we call them "mistakes". Since they appear wrong at first glance, we wrote this paper to defend them. But first, we’ll give the defense of our actions some context by describing our more straightforward application of the Agile values to the part that was under our control—the establishment of a new software testing team.
2
Using the Agile Values to Build a Testing Team
We spent a lot of time initially thinking about how to add testing while maintaining or enhancing the company's productivity. We took time to assess the organization's current philosophy and methods before forming our own goals. We noticed that the product development group was essentially agile: self-organizing teams of motivated individuals were releasing software to customers frequently. They didn’t have a documented process they followed, or much internal documentation at all, but they were dedicated, skilled and worked together closely. However, they were starting to feel the effects of being a larger, more ambitious organization, in the form of schedule overruns on their new development projects. Recognizing their apparently chaotic process as essentially an agile approach to real-world constraints helped us focus our testing goals. Instead of trying to "measure and control" a process which was fundamentally not controllable, we found it more useful to learn from the Agile software development community and adopt the Agile values.1 2.1
Individuals and Interactions over Processes and Tools
We were hired to establish better testing processes, so naturally the first thing we were asked to do was to document what those processes would be. The company was working with an outside consulting firm to design a total development process to be used henceforth on all projects, and they wanted our input on the testing sections. Recalling James Bach’s warning, “If you don’t know how to work together, whatever your defined processes, I can all but guarantee that your processes are not being followed” [3], we believed our first priority should be team building, not process. We stacked the deck in our favor by hiring known team players, people we’d worked with before and whose skill sets we knew. We removed cubicle walls to create an open workspace, hoping to encourage good “convection currents of information” and give the team every opportunity to collaborate and invent, following the philosophy of Alistair Cockburn [4]. We stalled on every request from the company to formally document our test methodology, feeling that such an action would limit our team’s creativity. We set up a WikiWikiWeb2 to allow everyone on the team to contribute to and modify our own methods and processes.
1 The Agile values are part of the Agile Manifesto. For more information, see http://www.agilealliance.com.
2 The WikiWikiWeb is a collaborative online discussion format invented by Ward Cunningham. The original WikiWikiWeb is at http://c2.com/cgi/wiki.
Without a process to tell them specifically what to do, the team became used to noticing and solving problems. For example, some test engineers developed a simple and powerful test framework in Perl to automate both command-line and GUI tests. Others created user interface standards guidelines. Many different contributions led to our system of task allocation using three-by-five cards and a Wiki-based task tracking system. Our emphasis on interaction and teamwork helped us gain the respect of other teams. In a recent survey we sent to fellow managers, all replies to the question “What has Testing done right?” mentioned our level of teamwork. No one in the organization below the director level even talks about the company-wide process designed by the consultants.
2.2
Working Software over Comprehensive Documentation
While it might appear difficult for a testing team to accept the lack of specifications or documentation that prevails at many Internet software companies, the Agile values reminded us that documentation is a poor way of controlling software delivery. We put the energy we could have spent requesting specifications, requirements documents, design documents, and schedules into obtaining the code itself. We wholeheartedly pursued establishing a regular build for testing, our experience concurring with Jim McCarthy’s statement that “the regular build is the single most reliable indicator that a team is functional and a product is being developed” [5]. We worked with the configuration management developer daily, learning about server administration so we could build and maintain our own test environment, and also so we could tell exactly what changes developers were making and when. The developer helped us set up a clean test environment that was capable of accepting new code daily.
We took every opportunity to influence development to practice continuous integration and shorten the develop-test cycle. When developers wanted to wait until they were “finished” before delivering to testing, we asked them if perhaps there was an order in which they could deliver the features. We involved product and project managers and diagrammed the effect that waiting for a complete product would have on the schedule. We called attention to the repeated occasions on which the schedule had been missed due to a particular component that was always developed without testing involvement. Sometimes our influence worked, sometimes it didn’t, which is better than might be expected, and we think it’s been worth the effort.
2.3
Customer Collaboration over Contract Negotiation
When two parties have their own goals and interests to protect, contract negotiation is the process they must use to come to agreement. When both parties' goals and interests are the same, and they know and trust each other, they don't have to worry about keeping the equations balanced, freeing them to make the most progress possible on their shared goals. To get on the same team as the "customer" (Product Manager in our case), we aligned our goals with his. We shifted our goal of bug-free products more toward
Establishing an Agile Testing Team: Our Four Favorite “Mistakes”
55
profitability for the company, and we asked him to share the goal of quality instead of delegating that goal entirely to our team. We targeted open and timely flow of information as the best way Testing could contribute to collaboration and goal alignment between product management and the rest of the product team. Before the Testing team existed, the whole project team was caught up in a blame game because it wasn’t realized until very late that there were problems preventing delivery. When we provided early and regular bug prioritization meetings, everyone remained apprised of the problems and risks, and worked together to find solutions.
2.4 Responding to Change over Following a Plan
If you measure our results today against our original goals, you’d probably say that our implementation of testing was a failure. Our original plan featured test automation as a way to shorten the develop-test-delivery cycle; we did not in fact shorten the cycle, nor have we automated the majority of our testing. We planned to expand our team size and our operations gradually until we were testing every product; this also hasn’t happened. But as Jim Highsmith says, “In a complex environment, following a plan produces the product you intended, just not the product you need.”[6] We believe our implementation of testing has been appropriate to the circumstances, even though it didn’t go the way we intended. Our perception that it would be fruitless to try to control the way events unfold kept us optimistic and thriving during some dramatic changes in our environment. Products were canceled; others changed direction several times. Development managers came and went. A new usability/human factors team was thrown into the mix. We lost our manager and after a period of confusion, we were eventually transferred to a completely different management chain. Half our team was lost in company-wide layoffs. Each change gave us the opportunity to adjust and improve our process. Layoffs brought the remaining testers closer together as we consolidated our separate, product-centered test teams into one team. The task of deploying updates to customers shifted from development to the system administrators, which resulted in our being asked to test the update process more formally. When a product’s user interface was completely redesigned, invalidating all our automated user interface tests, we took the opportunity to write a better test framework. We realized that some pieces of our strategy were discardable and others were not. 
Two rounds of layoffs drove the point home: everything else could be flexible, but if we were to succeed in this dangerous environment, there were some things we had to accomplish:
• Add value
• Be perceived by management as adding value
• Gain trust and support from the grass roots of the organization
• Get others to take responsibility for quality
The following section describes our strategy for accomplishing these goals.
56
Kay Johansen and Anthony Perkins
3 Using the Agile Values to Work with Other Teams
Introducing change is difficult. It seems that organizations have an almost immunological response to personnel who act in a new or foreign way. Fortunately for us, we weren’t in a position of power, which immediately ruled out a heavy-handed, "ram it down their throats" approach, or even a self-righteous, evangelistic one. Indeed, we were a new team, without strong management support, understaffed, trying to do something that was unfamiliar to the company (and possibly to us). Paradoxically, our very weakness allowed us to make several "mistakes" that were essential to our success. In hindsight, these “mistakes” are probably agile practices in disguise.
3.1 We Didn’t Protect the Customer
It's Testing’s purpose in life to protect the customer, isn't it? Perhaps in a safer environment you can get away with this approach. We believed our position in the company wasn't strong enough for us to be effective defending the customer by ourselves. We knew from experience that sometimes releases move forward like juggernauts, and a few quality devotees trying to stop the release by throwing themselves under the wheels probably wouldn't even attract attention, much less accomplish anything for the customer. We were afraid that if we undertook the responsibility of protecting the customer, others wouldn't have to. One of the first things we did was to make friends with the product manager. We deemed it best to get on good terms with him right away, so that later, when the pressure increased, he would continue to value our input even when we refused to make things easier for him by taking the responsibility to fail a release. We chose to be a “credible, high-integrity reporter of information” (Lesson 159 from Lessons Learned in Software Testing [7]) instead of the defenders of quality. We rarely spoke out in defense of a bug (we rarely had to) but when we did, we were careful to phrase our words as providing more information, rather than as making a recommendation. By never joining battle, over time we became unassailable.
3.2 We Didn’t Hire Testing Experience
All other things being equal, the more testing experience we could have gotten on our team, the better. Good testers do think differently from developers [8], and at this company the testing thought process was what was missing. So we should have hired some great, experienced testers and made that thought process part of the overall development process. Not so fast! We were, after all, novices, trying something new that could quite easily be rejected. Developers ourselves, we put ourselves in the developers' shoes, and imagined a team of people suddenly appearing and criticizing everything we did, without understanding our work or caring about what we thought was important. Would we trust that their decisions about our product would be good? Would we go out of our way to help them? In the beginning, we looked for people to hire who would have been attracted to the original startup, even though the office now belonged to a company of two thousand people instead of twenty. It was more important for our testers to be capable of discussing the latest build of Apache or the security advantages of FreeBSD over Linux with developers than to have the most or the best experience in testing. We found that the developers accepted people with a genuine interest in their product, and would go to great lengths to help the testers in any way they could.
3.3 We Didn’t Test Everything
We’re testers! Our job is to test everything! How could we justify picking and choosing which products we tested? Perhaps in a safe environment, we would have tried to test every product, assured that management would give us plenty of staff and lengthen the development schedules to accommodate us. But back in the real world, we weren’t established in the organization, we were unclear about our management support, schedules were aggressive, and we were fighting for every resource we had. The most important thing we could do for the products was to quickly demonstrate our value to the company, so that we could live to test another day. We understood the importance of early wins when introducing something new. As described in John Kotter’s Leading Change, we needed to “win in the short term while making sure [we] were in an even stronger position to win in the future” [9]. We were afraid that in trying to test as widely as possible, we could only deliver a mediocre (or worse) performance on everything. As we focused our effort more narrowly, we demonstrated the value of our work more clearly. We regularly communicated our intentions to management, forcing them to either agree with our prioritization or to make hard tradeoff decisions of their own. At one point we pulled every tester off other products to commit completely to a high-profile product. This wasn't a pleasant or popular decision, but we had communicated the decision so clearly in advance that when bugs escaped on the untested products, the reaction was "How many more resources do you need?" instead of "How could you have let that happen?"
3.4 We Gave in Easily
Our actual title in the company was Quality Assurance. Shouldn't Quality Assurance be the voice of reason against poorly documented requirements, changing requirements, scope creep, unrealistic schedules, insufficiently reviewed code, and so on? Isn't that QA's job? We thought, probably not. We believed that if everyone on the project team simply remained civil to each other, that would go the longest way toward getting a product out that was of value. With this mindset, we could pleasantly surprise everyone by cheerfully accepting things that are often resisted by QA teams. The requirements aren't documented? No problem, we can deal with it. We just added seventeen new features? Great, that's better for the customer! You want this code to go out without testing? We wish we could have been there for you, but we don't want to hold you up. We trust you! If a fight wasn't of critical importance to the customer or the organization, we stayed out of it. If we made a request and met resistance, unless it was for something absolutely necessary, we dropped it. Because we were perceived as very conciliatory and accommodating, people gave us more support when we actually had an issue we wouldn't give in on, such as requiring a clean test environment.
The make-up of the testing team greatly assisted this. The zeal that the testers showed for wanting to test everything was felt by the other teams. When a version of the product was released without sufficient testing it was truly a sacrifice for the testers. Everyone knew it and a commitment to allow for better testing on the next release was easier to acquire.
4 Conclusion
We’ve described how the Agile values helped us build an effective testing team:
• Valuing individuals and interactions helped us build a self-organizing team.
• Valuing working software caused us to develop a system for obtaining daily builds.
• Valuing customer collaboration brought us closer to the product manager.
• Valuing responsiveness to change made us pay attention to what we were learning and adjust our priorities.
Our first strategy on the project was wrong (Lesson 285 from Lessons Learned in Software Testing [7]). But because we were able to adjust our strategy as we went along, we became better at working with other teams. We shifted our priorities from the technical (test automation, complete coverage) to the social (securing support for our team, getting others involved in testing). To do this, we used some practices that seemed a little counter-intuitive to us at first, practices that might even seem like mistakes. In retrospect, our actions are in fact supported by both the Agile and the context-driven testing schools of thought, as summarized in Table 1.

Table 1. Correlation between our experience, agile values, and context-driven testing

Our “mistake”                   | Agile value                                           | Context-driven testing lesson
We didn’t protect the customer  | Customer collaboration over contract negotiation      | 12 – Never be the gatekeeper
We didn’t hire QA experience    | Individuals and interactions over processes and tools | 150 – Understand how programmers think
We didn’t test everything       | Ruthless prioritization                               | 10 – Beware of testing “completely”
We gave in easily               | Responding to change over following a plan            | 176 – Adapt your processes to the practices that are actually in use
Although not specifically mentioned in the Agile Manifesto, we contend that "Ruthless Prioritization" is an important part of Agile software development. Of course, prioritization is key to any type of project management, but the "ruthless" part seems to be a signature theme of the various Agile methods. The opportunity to pursue agile methods from the testing department expanded on our prior experience in development. We discovered more of an emphasis on sociological factors in the testing literature as compared to development, and would like to point developers and testers struggling with introducing agile methods in this direction.
Acknowledgements The authors wish to thank Bill McLoughlin for his strong belief that people are more important than process, Alistair Cockburn for his “words of encouragement” on our move to the testing department, and Linda Rising for her help and advice as we submitted this paper.
References
1. DeMarco, T., Lister, T.: Peopleware: Productive Projects and Teams. 2nd edn. Dorset House, New York (1999)
2. Johansen, K., Stauffer, R., Turner, D.: Learning By Doing: Why XP Doesn’t Sell. In: XP Universe 2001 Conference Proceedings. Addison Wesley Longman, Reading, Massachusetts (2001)
3. Bach, J.: What Software Reality is Really About. In: IEEE Computer. IEEE Computer Society, Los Alamitos, California (December 1999) 148-149
4. Cockburn, A.: Agile Software Development. Pearson Education, Boston (2002)
5. McCarthy, J.: Dynamics of Software Development. Microsoft Press, Redmond, Washington (1995)
6. Highsmith, J.: Adaptive Software Development: A Collaborative Approach to Managing Complex Systems. Dorset House, New York (2000)
7. Kaner, C., Bach, J., Pettichord, B.: Lessons Learned in Software Testing: A Context-Driven Approach. John Wiley and Sons, Inc., New York (2002)
8. Pettichord, B.: Testers and Developers Think Differently. In: STQE Magazine. Software Quality Engineering, Orange Park, Florida (January 2000)
9. Kotter, J.: Leading Change. Harvard Business School Press, Boston (1996)
Turning the Knobs: A Coaching Pattern for XP through Agile Metrics

William Krebs

IBM Corporation, 3039 Cornwallis Blvd, Research Triangle Park, North Carolina, USA 27709
E-mail: [email protected]
Abstract. I want to turn the knobs to 10, but my job position doesn’t allow me to dictate that my team do so. Even if it did, forcing XP may serve only to cause resentment and backlash. Though I’ve been learning XP for over a year, it’s still new to the rest of our team, and we’re used to our old habits. By giving team members control of how extreme to be through a ’teaching survey’, the team has started at a comfortable level of XP and has opened the door to future extremes. We’ve used the survey to define, coach, and track our XP process and have increased our use of XP by 10% in three months.
1 Our Approach: Light Metrics for an Agile Process
It didn’t seem appropriate to track a lightweight process with heavy metrics. An informal survey served as a good balance (it is listed in the appendix). The questions were worded with detailed examples to promote consistent meaning between different people’s numeric responses. But a key purpose of the survey was not just to collect data, but also to teach XP. When we started, most people were unfamiliar with XP’s techniques. After reading the examples, they learned the practices and the reasons behind them. The humor in some of the examples kept people reading but also made them recall their own real-life situations that may have been similar. People already have habits. Some of the good habits looked like XP practices. The examples in the survey connect their subconscious habits with XP terms. By including successful (and unsuccessful!) habits in the examples, people became comfortable with XP’s terminology. After the survey, people commented that they had done these things in the past without having a name for them or seeing the connection between practices.
1.1 A Teaching Survey – A ‘Coaching Pattern’
The survey includes these elements:
1. Introduction text. It warns people that the 12 practices are interdependent and that the whole is greater than the sum of the parts. It lets people know that the goal is for the team to define its process, not for me to force one.
2. A list of the twelve practices with one question per practice. There’s an additional question for XP in general.
3. Each question gives a concise description of a practice.
4. Both a ‘current’ and ‘desired’ level.
5. Scores with concrete (and perhaps humorous) examples.

D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 60–69, 2002. © Springer-Verlag Berlin Heidelberg 2002
The shocking thing about the survey is that we’ve added a column for the ‘desired level’ of XP. Normally you want the level to be ‘10’ because the practices feed off each other. We did this because I wanted to sell XP by letting the team get proficient at their comfort level first, and strive for more extreme levels later. I believe that a moderately high use of all the practices is better than none at all because it serves as a comfortable temporary step on the way to more extreme levels. By giving people some control over the pace of our adoption of extreme practices, they feel more comfortable and feel less resistance. Instead of revolution, it’s rapid evolution. The price for this approach is that we have to watch out for problems that may appear if our enthusiasm for some practices is out of balance with others. Besides yielding practical metrics such as the number of automated unit tests, the survey can help highlight areas that are falling behind as well. As a coach I had to be prepared to listen if the team said ‘no’. I gambled that they would at least want to try XP, and that once they tried it they would be sold. So far it’s been a successful bet. Whether it worked or not, as a secondary benefit, we would have a baseline description of the process we actually follow and could tune it from that starting point. Knowing what our process is and checking to see how well we follow it is a useful side benefit, especially if you need to consider the Capability Maturity Model. If we were to diverge from XP, I still wanted to follow a defined process, even if it was a custom process created by the team. Fortunately, this approach to introducing XP has been well received by our team, and we are on track to learn more about and focus on XP.
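The highlighting works by comparing each practice’s average ‘current’ score with its average ‘desired’ score. A minimal Python sketch of that gap analysis (the practice names and scores below are made up for illustration, not our survey data):

```python
# Hypothetical sketch of the survey's gap analysis; the practice
# names and (current, desired) score pairs below are made up.

def gap_report(responses):
    """responses: {practice: [(current, desired), ...]} -> rows sorted
    so the practices lagging furthest behind their goal come first."""
    rows = []
    for practice, scores in responses.items():
        current = sum(c for c, _ in scores) / len(scores)
        desired = sum(d for _, d in scores) / len(scores)
        rows.append((desired - current, practice, current, desired))
    return sorted(rows, reverse=True)

responses = {
    "Pair Programming":   [(6, 7), (5, 8), (7, 7)],
    "Test Driven Design": [(4, 9), (5, 8), (5, 9)],
    "Coding Standards":   [(6, 9), (5, 9), (7, 10)],
}

for gap, practice, cur, des in gap_report(responses):
    print(f"{practice}: current {cur:.1f}, desired {des:.1f}, gap {gap:.1f}")
```

Sorting on the desired-minus-current gap surfaces the practices where enthusiasm has not yet turned into action (Test Driven Design in this made-up data).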
Collective ownership
Current: ____ Desired: ____
People can change each other’s code. We don’t have to wait for the specialist.
10 We regularly change code in any area. You can’t tell because our code looks the same.
8 We regularly change code in any area.
6 We’ve changed each other’s code, but usually assign stuff in specialty areas.
4 We can fix it if we have to.
2 We’ll have to wait for them to get back from vacation.
0 I lock all my files and keep ’em locked.

Fig. 1. Example Survey Question
2 Our Results
In three months, our average rating has increased from 6.0 to 7.0. Our goal has also been set at a higher level, moving from 8.2 to 8.6. People’s increased appetite for XP is important because we are taking a risk by not being at an extreme level on every practice. We sell XP by letting people gain confidence through experience at their comfortable level and then moving up. We tolerate the lower levels temporarily because we expect people will want more and that this will lead to higher levels over time. So far this has been true, so the strategy is working. We haven’t scored all ‘10’s yet, so time will tell. Taking the survey before and after a set of major iterations let us see whether we were successful in doing more of what we said we wanted. We delivered on areas we selected for special focus, including ‘System of Names’ and ‘Coding Standards’. People’s desire, or ‘goal’, scores also increased. The concern that appears is that the vital but more difficult area of automated unit testing remains behind. The democratic approach of luring people into XP will run into trouble with the more difficult practices. But the survey helps highlight deficiencies and remind people of their goal.

Fig. 2. Progress Chart. This radar chart plots the March, Goal, and May scores (scale 0-10) on twelve axes: Pairing, Sustainable Pace, Test Driven Design, System of Names, Small Releases, Collective Ownership, The Planning Story, Coding Standards, Customer Access, Continuous Integration, Refactoring, and Simple Design. It shows how our desired and actual XP levels changed over time.
2.1 Who We Looked at
We collected data only from our small team (10 developers). This was our target audience for the feedback and improvements. This does not represent a large sample size, but it does represent our small team. Besides being a self-appointed XP coach, my role is to lead a sub-team. That’s why I need to exercise indirect influence on the rest of the team.
2.2 Sum of the Parts
The survey asked how people felt about XP in general as well as about each practice. It was interesting to find that the average of their desired scores for each practice was higher than their score for XP in general. Since our team was new to XP, I believe this indicates enthusiasm for practical techniques and less understanding of the XP ‘buzzword’. That bodes well for our adoption of XP because as people use more of the practices they’ll develop a higher opinion of XP.
2.3 Moderate Programming??
We intentionally allowed for a goal of less than ten, especially as people start out with XP. This allows people to feel ownership of how they want to work. It avoids alarming people with the thought of a sudden large change in how they are expected to work. It also allows room to respect the opinions and ideas of the more experienced team members. However, our hypothesis is that as people get a taste of XP they’ll like it and want more, so the ‘moderate’ levels are only a stepping stone on the way to full adoption. The catch is to make sure we avoid any pitfalls associated with imbalanced use of the practices.

Fig. 3. This chart shows our results in terms of William Wake’s XP Radar Chart (Wake, 2001), with axes for Programming, Planning, Customer, Pair, and Team plotted on a scale from 4 to 8.
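The five axis values for a chart like Fig. 3 come from collapsing the twelve practice scores onto Wake’s axes. A sketch in Python using the May averages from Table 1; note that the practice-to-axis grouping below is our illustrative assumption, not necessarily Wake’s exact mapping:

```python
# Collapse twelve practice scores onto the five radar-chart axes.
# The practice-to-axis grouping here is an illustrative assumption.
AXES = {
    "Programming": ["Test Driven Design", "Refactoring",
                    "Simple Design", "Coding Standards"],
    "Planning":    ["The Planning Game", "Small Releases", "Sustainable Pace"],
    "Customer":    ["Customer Access"],
    "Pair":        ["Pair Programming"],
    "Team":        ["Continuous Integration", "Collective Ownership",
                    "System of Names"],
}

def axis_scores(practice_scores):
    """practice_scores: {practice: score 0-10} -> {axis: mean score}."""
    return {
        axis: sum(practice_scores[p] for p in practices) / len(practices)
        for axis, practices in AXES.items()
    }

# May survey averages from Table 1.
may = {
    "Pair Programming": 5.9, "Test Driven Design": 5.9, "Small Releases": 7.6,
    "The Planning Game": 7.4, "Customer Access": 7.0, "Refactoring": 5.9,
    "Simple Design": 6.6, "Continuous Integration": 7.4, "Coding Standards": 7.1,
    "Collective Ownership": 6.8, "System of Names": 7.0, "Sustainable Pace": 9.0,
}

for axis, score in axis_scores(may).items():
    print(f"{axis}: {score:.1f}")
```

Plotting these five numbers on a polar chart then makes imbalances between the axes visible at a glance.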
We used this format to help see if we were out of balance between practices. We learned that our team leans towards the Planning and Customer realms, but all areas were represented, without the uneven shapes that indicate defective XP implementations. Bill Wake’s article shows example charts of variations of ‘Not XP’.
2.4 Want More
Our team wanted to use the following practices to a greater extent: Test Driven Design, Coding Standards, and System of Names. We had been weak on both coding standards and system of names, so we gave them special attention. Our follow-up survey revealed that we had indeed improved in both areas. I looked for a wide range between individuals in our perception of our current level, and especially our desired level. I also looked for the difference between where the team felt it was operating today and where it wanted to be in the future. The other interesting benchmark was the difference between the desired degree of ‘XP’ and the average desired level of the individual practices.
The survey surprised me by identifying some unexpected areas that needed attention. For example, when I think of XP I first think of Pair Programming. But the survey results revealed that we feel we do well at pairing but want much more test-first design and unit test automation. Coding Standards is another example. By itself, it could not get a critical mass of attention. But when seen in the context of XP through this survey, folks understood the need for coding standards in order to support the other appealing practices. The numeric results for our team are interesting, but they are really an example. The important numbers are the ones for YOUR team!

Table 1. This table compares desired and achieved results over time

Practice               | March | Desired March | May | Desired May | Observations
Pair Programming       | 5.9   | 7.4           | 5.9 | 7.8         | Close to what we want
Test Driven Design     | 4.6   | 8.6           | 5.9 | 8.4         | Much more wanted!
Small Releases         | 5.0   | 7.3           | 7.6 | 8.1         | Want shorter iterations (current is 3 to 4 a year)
The Planning Game      | 7.5   | 7.8           | 7.4 | 8.6         | Right on target
Customer Access        | 5.8   | 8.4           | 7.0 | 8.7         | Wanted more and got it
Refactoring            | 5.4   | 7.4           | 5.9 | 8.6         | Want even more
Simple Design          | 5.4   | 7.1           | 6.6 | 8.5         | Did more, want more
Continuous Integration | 7.0   | 8.1           | 7.4 | 8.3         | Similar
Coding Standards       | 6.0   | 9.3           | 7.1 | 9.1         | Wanted more (a surprise); got more
Collective Ownership   | 6.3   | 7.9           | 6.8 | 8.0         | Slight improvement
System of Names        | 5.5   | 9.4           | 7.0 | 9.3         | Improvement
Sustainable Pace       | 8.1   | 9.4           | 9.0 | 9.8         | Less guilt when leaving
Average                | 6.0   | 8.2           | 7.0 | 8.6         | Want more of the practices
XP                     | 5.5   | 6.9           | 6.3 | 7.8         | XP term didn’t match average
The average score across the practices was 6.0, vs. a desired level of 8.2. However, when asked about XP, the current score was 5.5 and the desired level was 6.9. That indicates it was easier for me to sell XP by describing the practices first than by starting with XP as a framework. People can see the value in each practice and can move from there to consider the benefits of combining them.
It was also important to consider and discuss the range of answers in our team meeting. We didn’t want the averages to hide one person’s strong feelings. I looked for differences between ‘current’ and ‘desired’, and also differences between people for the same question, and tried to understand them in the context of the work I observe in our workplace. The standard deviation was usually 0.9 to 1.1, but for more controversial topics it was 2.0 or 2.1. Something simple like current coding standards showed mostly scores of 8, a couple of 6 responses, but one 4. Digging further, I found that this person was not aware of some of the standards we already had. It was rewarding to see that for practices we wanted to emphasize we did make progress, and for many practices our appetite increased as we gained experience.
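This per-question spread check is a one-liner with the standard library. A sketch, using an illustrative response list shaped like the coding-standards example above (mostly 8s, a couple of 6s, one 4), not the team’s raw data:

```python
import statistics

def summarize(scores):
    """scores: one question's 0-10 responses -> (mean, sample std dev)."""
    return statistics.mean(scores), statistics.stdev(scores)

# Illustrative responses, not the team's raw data.
coding_standards_current = [8, 8, 8, 8, 6, 6, 4]
mean, spread = summarize(coding_standards_current)
print(f"mean {mean:.1f}, std dev {spread:.1f}")

# A spread near 2.0 (vs. the usual 0.9-1.1) flags a question worth
# raising in the team meeting rather than averaging away.
```

Looking at the spread alongside the mean is what keeps one dissenting voice, such as the lone 4 above, from disappearing into the average.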
3 Conclusion
Our experiment in teaching XP through a survey delivered the results we hoped for. People learned what XP was about, had a voice in our rate of adoption, and considered the possible pitfalls of partial adoption. Our team has grown from not knowing much about XP to having a positive opinion of both XP and each of the practices. Best of all, we have an appetite for more.
Acknowledgements
The Team: Beth Schreiber, Radhika Aggarwal, Mohamad Salahshoor, David Styles, William Krebs, Barnaby Court, Bimal Shah, Nell Palistrant, Balan Subramanian, Thomas W. Young, Douglas Wong, John Goulah.
The Shepherd: Christian Wege
References
1. Wake, William: XP Radar Chart (2001). On-line at http://www.xp123.com/xplor/xp0012b/index.shtml. This is an important article to read to see how to check for ‘unbalanced’ levels of adoption between XP practices.
2. Multiple authors on Ward Cunningham’s Wiki: Discussion of XP and CMM (2001). On-line at http://c2.com/cgi/wiki?XpAndTheCmm
3. Williams, Laurie: Multiple papers on pair programming. On-line at http://www.pairprogramming.com. This site has links to Laurie Williams’ key articles on pair programming.
4. Jeffries, Ron (ed.): Xprogramming.com (2002). On-line at http://www.xprogramming.com. This site has a good introduction to XP in general and also covers the relationships between the practices.
5. Kerievsky, Joshua: Continuous Learning. Paper at XP2001. http://industriallogic.com/xp/ContinuousLearning.pdf. This paper gives specific ideas on how to help the team learn and continue learning. The study group idea in particular would fit well with the grass-roots survey approach.
6. Beck, Kent: eXtreme Programming eXplained. Addison-Wesley (1999). The original. This book helps explain the interdependencies between practices, and has a great ‘web’ chart showing the relationships. Don’t forget to read his bibliography.
7. Wake, William: Extreme Programming Explored. Addison-Wesley (2002). This work is very readable yet does a great job of explaining the XP practices, and includes helpful down-to-earth examples.
8. Jeffries, Ron, Anderson, Ann, Hendrickson, Chet: Extreme Programming Installed. Addison-Wesley (2001). This book is most useful, and also includes illustrations by the authors.
9. Succi, Giancarlo, Marchesi, Michele: Extreme Programming Examined. Addison-Wesley (2001). This book is good because it contains a collection of papers from many authors. It is a good source for Laurie Williams’ papers on pair programming.
10. Hightower, Richard, Lesiecki, Nicholas: Java Tools for eXtreme Programming. Wiley Computer Publishing (2002). This book has great explanations of many open-source Java tools that are key in supporting XP testing practices. If you don’t use XP, at least read this book for the tools!
Appendix: An XP Experience Scorecard

Some of us have been around a while and have seen both mistakes and successes. Some of us are new and are comparing what we’ve learned in school with our first experiences in the business world. Some people study and improve their process; some people just do what comes naturally and works.
No matter what category you fall in, you may do things in a way that follows common practices to various degrees. Let's see how close your methods are to 'Extreme Programming'. I'm not here to make you do anything. It's okay to mold your own process, just do so from an informed basis. Don't just follow your old habits out of instinct. I just want to learn what we do and make us think about how to improve it. Rate your answer to each question from 1 to 10 (10 is highest).
Pairing
Current: ____ Desired: ____
Two people work together at one computer. They trade turns typing or reviewing and thinking about the big picture.
10 We wouldn’t want to write any critical code without pairs taking turns thinking or typing. We rotate pairs.
8 We often work in pairs. It gives us some ergonomic relief.
6 We often have whiteboard chalk talks, chat messages, or office visits. Some people pair program at the keyboard, but some prefer not to try it.
4 We try to pair but can’t due to flextime and meetings. Some people are just too slow or fast for me to have patience to sit with. Our furniture makes it hard anyway.
2 I find it distracting when people interrupt me. My office mate asks me not to have so many visitors.
0 I wear earphones so people won’t disturb me. Actually, I prefer to work at home with the phone off the hook and my chat program set to do not disturb.

Small Releases
Current: ____ Desired: ____
We deliver more frequent, smaller iterations to customers.
10 Every week or two the customer can take the system as a working unit.
8 We have one month iterations. They can pick new function for the next iteration at that time.
6 Every few months we have another iteration for the customer.
4 We ship beta drivers and fix packs about 4 times a year, with bigger releases in 8-12 month cycles.
0 We have a grand vision. Next year’s release 1.0 will tide you over until the real function comes out in release 2.0 in the 18+ month timeframe.
Turning the Knobs: A Coaching Pattern for XP through Agile Metrics Practice Continuous Integration Current: ____ Desired: ____
Test Driven Development Current: ____ Desired: ____
The Planning Story Current: ____ Desired: ____
Customer Access     Current: ____  Desired: ____
Refactoring         Current: ____  Desired: ____

Comments

When working on a big set of code, I sync up or check in as follows:
 10  Several times per day.
  8  Once per day.
  6  Several times a week.
  4  Once a week.
  2  A few weeks can go by. I only check it in when it's ready.
  0  I usually have problems because a lot of changes have occurred between the time I check it out and the time I try to sync up. I have to resync several times because stuff is back-leveled or left out. The build seems to break often.

Do we have test cases and automated drivers for each product class?
 10  The automated tests are the design. The customer runs acceptance tests.
  8  After we design and prototype the code we write some test cases.
  6  We are careful to unit test our code after it's done, before it goes to the test team.
  4  We've heard of tools like JUnit, but haven't really tried them.
  2  Our formal system test phase at the end of our cycle takes much longer than planned because there are so many bugs and fixes going in.
  0  We don't really have any formal testing. Customers often let us know if there are any problems, though.

We move items in and out of the plan based on updated customer needs while keeping the dates steady.
 10  Before every short iteration the customer chooses the most valuable features based on the developers' sizings. Each morning we review the outstanding user stories in a 5-minute stand-up meeting and volunteers pair up to implement them. Since we know change happens, we have a competitive advantage because we've optimized our process to accept and exploit change.
  8  We trade function in and out of our milestone drivers from time to time after customers change their priorities and development is completed ahead of time or falls behind for some items. It's the customer's product, after all. They should get what they want. Change happens.
  6  Plans should not change. We meet our dates, even when planned a year in advance. We create a lot of 'artifacts' like design specs. We try to keep them up to date.
  4  We are careful to follow the 'waterfall' process. We don't start coding until the designs are complete and reviewed. We don't start test until all the code is delivered. Sometimes we've changed or missed our dates because the customer changed a requirement.
  0  We lose customers because we tell them they'll have to wait for the next release. After all, we're busy finishing what they asked for last year.

We have access to our customer and get feedback from her.
 10  We don't consider a line item done until the customer has run their acceptance test.
  8  We frequently interact with customers to show them prototypes and see how they want them changed.
  6  We get requirements from customers.
  4  Requirements come from somewhere, but I don't think it's customers.
  2  We ship functions but are never sure if we made what they wanted. They've probably changed their minds by now anyway.
  0  We know what's right. They'll use our stuff whether they like it or not. It'll be good for them.

We rewrite or redesign code that smells bad or will help position us for new requirements.
 10  We often use it as a tool to steer or flex the design to meet changing requirements. We also do it to make the code more streamlined and easy to change in the future.
  5  We've done some cleanup from time to time.
  0  We have old baggage code that has had a lot of ungraceful changes. We're afraid to touch it. We turn down requirements because the code 'works as designed' and can't be changed.
William Krebs
Practice
Simple Design Current: ____ Desired: ____
Coding Standards Current: ____ Desired: ____
Collective ownership Current: ____ Desired: ____
Metaphor, or System of Names Current: ____ Desired: ____
Sustainable Pace Current: ____ Desired: ____
Comments

We keep the design simple now so we can change it as needed in the future.
 10  YAGNI (You ain't gonna need it). We often refactor needed changes in later.
  8  It's clean – it does just what's needed in the simplest way.
  6  Our design is mostly straightforward, with a few lumpy spots.
  4  We've proudly engineered a full-feature, full-function product that does everything people will need.
  2  We have big chunks of code that are not finished or end up being thrown away.
  0  I thought I'd write this framework in case we need it in the future. You never know.

Do you have and follow enough standards so you can read and change each other's code? Have 'em? Detail? Not too much, not too little.
 10  We have standards, follow them, and train new people in them. By the way, they are industry standards as well.
  8  We have most standards written down, and most people usually follow them.
  6  We do the same stuff for some things, but some things are treated differently. Our braces are in the same place, but our error handling is often different.
  4  We have many standards and each follow a different one.
  2  We don't have standards.
  0  How dare you tell me what to do.

People can change each other's code. We don't have to wait for the specialist.
 10  We regularly change code in any area. You can't tell because our code looks the same.
  8  We regularly change code in any area.
  6  We've changed each other's code, but usually assign stuff in specialty areas.
  4  We can fix it if we have to.
  2  We'll have to wait for them to get back from vacation.
  0  I lock all my files and keep 'em locked.

Do you have naming conventions for your objects?
 10  I can tell what everyone's code does from the name, without looking at the comments. We often think of the same name for things. When discussing design we phrase the discussion on a common metaphor for the system.
  8  I like the names in the system. They make sense to me.
  6  I can follow the names, with a little help from the comments.
  4  The names are misleading; you really have to read the code. I'm not sure what to name new classes.
  2  You have to understand the history; the names are one thing, but the methods do different stuff because they evolved differently.
  0  Why did they call it that? I can't read their abbreviations.

Do people work at a rate that's effective for them over the long run?
 10  We work at a steady, comfortable pace. We sprint only when needed, which isn't often.
  8  Sometimes I'm too busy, sometimes too bored.
  6  Whenever there is a deadline we go into death-march mode.
  4  We've been ordering dinners for several months now. It seems it's always like that.
  2  More than once I've had to cancel my vacation or classes.
  0  I don't have time to fill out this survey.
Additional survey questions beyond the 12 XP practices

Current: ____  Desired: ____

Do you stop periodically to consider ways to improve?
 10  People often come up with new ideas, and they are implemented. We share the new techniques as Lessons Learned with other groups.
  8  We think of what went wrong, what went right, and make changes for our team.
  6  We come up with ideas but they seem to die on the vine and never get into the culture.
Turning the Knobs: A Coaching Pattern for XP through Agile Metrics

Practice
Overall XP Score Current: ____ Desired: ____
Comments
  4  We repeat the same mistakes.
  0  We never have 'em. I have gripes and they never seem to get addressed. Obviously no one cares.

To what extent do you feel you implement XP practices?
 10  I coach XP, have written books, or presented at conferences.
  8  We're a good example of an XP team.
  6  We often fold XP practices into our daily routine.
     We've read about XP, but since lives or huge amounts of money depend on our software and we have a very large team, we use formal methods with lots of documentation and reviews.
  4  I don't know what XP is, but the scoring principles on this survey sound intriguing. Maybe we can try them sometime.
  2  I'm not sure what XP is, but it sounds bad.
  0  I don't believe software development should be tied down by a 'process' – or – We've always done business the same way. I see no need to change now.
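The paired Current/Desired blanks above lend themselves to simple tabulation. As an illustrative sketch (the helper name `biggest_gaps` and the scores are invented for this example, not taken from the paper), a coach could rank the practices by the gap between desired and current settings to decide which knob to turn first:

```python
def biggest_gaps(scores):
    """Return practices sorted by (desired - current) gap, largest first."""
    gaps = {p: d - c for p, (c, d) in scores.items()}
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

# Invented example scores: practice -> (current, desired), each on the 0-10 scale
team = {
    "Testing": (4, 9),
    "Continuous Integration": (6, 10),
    "Refactoring": (5, 8),
    "Sustainable Pace": (8, 8),
}

for practice, gap in biggest_gaps(team):
    print(f"{practice:24s} gap {gap:+d}")
```

A team that re-runs the survey each iteration can watch the gaps shrink as coaching takes effect.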
Author Biography

Bill has developed software for 20 years everywhere from Albuquerque to Stockholm, Sweden. This has included tours with IBM's Microelectronics and Software Divisions. He's learned both from heavy failures and light successes. He was infected with XP over a year ago and attended XP Universe 2001, which just made it worse. When he's not refactoring JUnit test cases or pair programming, he is trying to make his synthesizer sound like his French horn, playing Go, watching Star Trek, or camping with his son's Boy Scout troop.
Agile Project Management Methods for ERP: How to Apply Agile Processes to Complex COTS Projects and Live to Tell about It

Glen B. Alleman
Niwot Ridge Consulting
Niwot, Colorado 80503
E-mail: [email protected]
Abstract. The selection, procurement, and deployment of an Enterprise Resource Planning (ERP) system is fraught with risk in exchange for significant business and financial rewards [26]. In many cases the packaged ERP product does not provide the entire solution for the business process. These gaps can be closed with third-party products or by customizing existing products. Management of this customization, as well as the selection of the core ERP system, has traditionally been addressed through high–ceremony, science–based project management methods [13]. Well–publicized failures using this approach create the need for new methods for managing ERP projects [11]. This compendium paper describes an alternative to the traditional high–ceremony IT project management methods. Although many of the methods described are not new, assembling them in a single location and focusing them on a single issue provides the tools to make decisions in the presence of uncertainty, focus on the critical success factors, and address the managerial and human side of project management. Agility allows the project management methods, as well as the system itself, to be adaptively tailored to the business needs.
1 Introduction
Using accepted standards for doing business significantly reduces the coordination efforts between business partners as well as internal information and workflow processes [46]. ERP provides the means to coordinate and manage this information, by integrating enterprise information and business processes. Managing an ERP project is not the same as managing a large scale IT project. IT projects emphasize requirements elicitation, detailed planning, execution of identified tasks, followed by end–to–end delivery of business functionality. Even though this project methodology faces difficulty when scaled to larger projects, applying it to ERP projects creates further difficulties. The ERP environment faces constant change and reassessment of organizational processes and technology [67]. The project management method used with ERP deployments must provide adaptability and agility to support these evolutionary processes and technologies [33]. The use of agile methods in the ERP domain provides:
• Increased participation by the stakeholders.
• Incremental and iterative delivery of business value.
• Maximum return on assets using a real options decision process.

D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 70–88, 2002.
© Springer-Verlag Berlin Heidelberg 2002
1.1 What's the Problem Here?
The major problem with software development (and deployment) is managerial, not technical.
The notion that Commercial Off The Shelf (COTS) products solve business problems out of the box has pervaded the literature [13]. The application of scientific management principles to these projects is understandable. Yet the use of predictive strategies in this environment is inappropriate as well as ineffective, since they do not address the emergent and sometimes chaotic behaviors of the marketplace, the stakeholders, and the vendor offerings. This paper describes a method of augmenting structured project methods with agility to produce a new approach to managing ERP projects. This agile approach requires analytical tools for making the irrevocable decisions in the face of the uncertainty found in the ERP domain. It provides methods for dealing with the interpersonal, stakeholder, and business process issues that arise in the rapidly changing ERP environment. Agile methods provide the means to deliver not just pretend progress but real progress, measured as business value to all the participants – buyer, seller, and service provider.

1.2 What Is an ERP Project?
The term Enterprise Resource Planning, coined in the early 1990s, refers to a software application suite that integrates information and business processes so that data entered once can be shared throughout an organization. While ERP has its origins in manufacturing and production planning systems, it has expanded to back–office functions including the management of orders, financials, assets, product data, customer relations, and human resources. Thinking about an ERP project as a large–scale IT deployment leads to several unacceptable propositions [13]:
• Spend $2 million, $20 million, or even $200 million up front for a new technology with a 50% to 70% probability of a partial or complete write-off of the investment.
• If unwilling to write off the investment, double the original investment to complete the project successfully.

1.3 ERP Project Management and Normal Science
Modern project management is heavily influenced by the belief that a project management process can be improved by scientific methods [16, 26]. These beliefs create the myth that:
• Clear–cut investment opportunities with an explicit purpose, beginning, duration, and end can be identified early in the project.
• Low opportunity costs exist for each business or technical decision, in most instances with a reversible decision process.
• Feasible, suitable, and acceptable project attributes can be identified.
• Accurate predictions of project duration and resource demands are possible once the requirements have been defined.
• Worst–case consequences can be determined in advance.
• The failure of the project was due to lack of skills rather than inappropriate feasibility, suitability, or acceptability of the solution.

This is a normal–science view of project management. In the ERP domain it can be replaced with a post–modern view 1, in which there are:
• Highly uncertain facts about the project attributes.
• Constant disputes about values and expectations.
• High decision stakes with irreversible consequences.
• Urgently needed decisions in the presence of insufficient information.
• Outcomes that affect broad communities of interest.

Agile methods do not make the normal–science model irrelevant; such a model is simply applicable only when uncertainty and decision stakes are low [37]. A fundamental attribute of post–normal science is the reliance on heuristics [32, 51]. Using heuristics to guide development with agile methods allows the management of ERP projects to be placed in a post–normal science context.

1.4 ERP Projects Are New Ventures
The agile methods used to manage an ERP project can be taken from the venture capitalist's approach rather than the IT manager's approach [3, 7, 8]. These methods include:
• Staged investments – capital must be conserved.
• Managed risk – all participants must share the risk.
• It's the people, stupid – the composition of the participants is "the" critical success factor.

1.5 ERP Is Also Enterprise Transformation
Three major processes make ERP projects significantly different from traditional IT projects.
• Process reengineering – replacing business processes that have evolved historically within the organization with new and innovative processes embodied in the ERP system. If the business needs aren't met in some way by the ERP system, there is a temptation to customize it. If this is done, an instant legacy system is created, with similar maintenance and support problems as the previous system.
• Packaged delivery of IT capability – staging the delivery of system components and their business value to maximize these resource investments through the continuous delivery of business value.
• Shift toward business process modularity – modularizing the architecture of the organization as well as the software. There is technical architecture, data architecture, application architecture, and enterprise architecture. The deployment of ERP impacts all four of these architectures.

1 Classical science and conventional problem solving were labeled "normal science" by Kuhn [53]. Post–normal science acknowledges that there is high system uncertainty and increasing decision stakes, and it extends the peer review community to include the participants and stakeholders, who ensure the quality and validity of the conclusions [37].
1.6 What Is Architecture and Why Do We Care?
One approach to agile deployment of ERP systems is to begin with system architecture. Several benefits result:
• Business processes are streamlined – through the discovery and elimination of redundancy in the business processes and work artifacts.
• System information complexity is reduced – by identifying and eliminating redundancy in data, software, and work artifacts.
• Enterprise–wide integration is enabled through data sharing and consolidation – by identifying the points at which to deploy standards for shared data, processes, and work artifacts.
• Rapid evolution to new technologies is enabled – by isolating the data from the processes that create and access this data.

Architecture is a set of rules that defines a unified and coherent structure consisting of constituent parts and connections that establish how these parts fit and work together [69]. Many of the attributes of building architecture are applicable here. Form, function, best use of resources and materials, human interaction with these resources, reuse of design, longevity of the design decisions, and robustness of the resulting entities are all attributes of well designed buildings and well designed software systems [1, 2]. While architecture does not specify the details of any implementation, it does establish guidelines to be observed in making implementation choices. These conditions are particularly important since ERP architectures embody extensible features that allow additional capabilities to be added to previously specified parts [56]. In the COTS domain, architecture provides the guidance that directs the development team's creativity.
2 How to Implement an ERP System
IT projects traditionally use formal management processes for the acquisition or development, deployment, and operation of the system, processes that emphasize planning in depth. This approach organizes work into phases separated by decision points. Supporters of this approach emphasize that changes made early in the project can be less expensive than changes made late in the project. In the past this approach has been called waterfall2. The waterfall approach contains several erroneous assumptions that negatively impact ERP projects:
2 The term waterfall has been used many times as a strawman by the agile community. In fact very few pure waterfall projects exist today. This is not to say there are no abuses of the concept of waterfall – sequential development based on the simple algorithm REPEAT [Design, Code, Test] UNTIL Money = 0. In practice, development and deployment processes based on incremental and iterative methodologies are the norm. The literature contains numerous references and guidelines for this iterative project management approach dating back to the 1980s [65].
• Planning – it is not humanly possible to produce a plan such that its implementation is merely a matter of executing a defined set of tasks.
  • Plans for complex projects rarely turn out to be good enough for this to occur.
  • Unanticipated problems are the norm rather than the exception.
• Change – it is not possible to protect against late changes.
  • All businesses face late-changing competitive environments.
  • The window of business opportunity opens and closes at the whim of the market, not at the direction of the project manager.
• Stability – management usually wants a plan to which it can commit. By making this commitment, they give up the ability to take advantage of fortuitous developments in the business and technology environment [72].
  • In a financial setting this is the option value of the decision.
  • Deferring decisions to take advantage of new information and new opportunities is rarely taken into account on IT projects [74].

2.1 The Road to Hell Is Paved with Good Pretensions
The erroneous assumptions in §1.3 create a dysfunctional relationship within the project that undermines its effectiveness. This dysfunctional relationship is created when:
• The client pretends it is possible to define milestones and deliverables far in advance. The client then creates a project plan that formalizes these milestones.
• The vendor pretends that it can meet these milestones in order to get the business.

Both parties maintain the illusion of good project management by pretending they know how to meet these milestones, when in fact they are headed for failure.

2.2 Planning in the Presence of Uncertainty
Plans are unimportant; planning is essential – D. D. Eisenhower
The rules of thumb for applying agile processes are built around the increasing levels of uncertainty experienced by the project [31]:
• A clear future – a single consistent view of the outcome.
• Alternative futures – a small set of outcomes, one of which will occur.
• A range of futures – many possible outcomes.
• True ambiguity – no specified range of outcomes.

The higher the degree of uncertainty, the more effectively agile methods can replace high–ceremony methods [10, 70]. In the presence of uncertainty, the difficulty of planning does not remove the need for planning – it simply changes its purpose:
• Plan in order to gain understanding.
• Plan for unanticipated events – this is called risk mitigation.
• Don't take planning too seriously – the original plan is simply a guide to the future; it is not the future.
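One way to "plan in order to gain understanding" under a range of futures is to replace point estimates with sampled ones and look at the spread of outcomes. The following is a minimal sketch of that idea; the task names and duration ranges are invented illustration data, not figures from the paper:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# (optimistic, most likely, pessimistic) durations in weeks -- invented data
tasks = {
    "select package":   (4, 6, 12),
    "gap analysis":     (2, 4, 10),
    "pilot deployment": (6, 8, 20),
}

def one_run():
    # random.triangular(low, high, mode) draws one plausible duration per task
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks.values())

runs = sorted(one_run() for _ in range(10_000))
p50, p90 = runs[5_000], runs[9_000]
print(f"median {p50:.1f} weeks, 90th percentile {p90:.1f} weeks")
```

The gap between the median and the 90th percentile is the understanding the plan buys: it quantifies how much schedule risk remains rather than pretending a single date is knowable.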
2.3 Avoiding Dysfunctional Relationships
Using the three key aspects of the venture capital methodology reviewed in §1.4, ERP projects can be managed as if they were business ventures [13]. Using a post–normal methodology, ERP management includes:
• Staging – deploying all the ERP features at once to gain the benefits of the integration and infrastructure is not a good venture capital decision.
  • Different projects have different cash flow requirements and therefore different deployment requirements.
  • Capital investment moves to locations with acceptable or low cash flow requirements.
  • The risk / reward proposition must be reasonable for the capital investment requirements.
• Incentive alignment and risk sharing – among the parties, cooperative problem solving is a critical success factor.
  • Vendor and system integrator payments should be linked to the accomplishment of real tasks, not milestone dates.
  • Senior managers' compensation should be based on successfully delivering components of the project in an incremental, iterative manner with measurable business value.
  • There must be no conditional support. Everyone should have some skin in the game. It's going to get ugly no matter what happens, so conditional support is the kiss of death for an ERP project.
• People are the key to success – any successful venture is based on having the right people. The right team with a mediocre idea is better than the wrong team with a good idea.
3 Agile Methods and ERP Systems

Agility is the ability to create and respond to change… agile organizations view change as an opportunity, not a threat [43].
3.1 Agile Method Background
In the 1980s the development of many large software applications was factory–centric: large volumes of code were generated by equally large volumes of programmers [15]. The consequences of this horde approach have been well documented [24, 25]. As early as 1956 the concept of software process had entered the lexicon [18]. The discussion of software process improvement has a long history, with varied results even to this day [14, 22, 65, 64]. In recent years, the landscape has changed dramatically for both the suppliers and consumers of software. Time–to–market pressures, rapidly changing requirements, the Internet, and powerful programming languages have placed new forces on traditional software development organizations [10]. These forces have been felt in the COTS integration domain as well [57, 78]. One source of modern process improvement was initiated by Royce [65]. From this, iterative methods improved on the original waterfall process. The mid–1980s
produced several new processes, including the spiral model of Boehm, which evolved from a risk management point of view [3]. Process programming emerged from formal modeling techniques in the late 1980s [58, 59]. Software process improvements continue to occupy an important place in research as well as in the commercial marketplace [3, 19]. The concept of agility has been discussed in detail in the hardware domain [41]. Similar research and discussion is only starting to take place for the software domain. This leaves a gap in the academic approach to the subject. The gap has been filled by anecdotal accounts of agile processes being applied in a variety of development domains, but an extensive survey of the taxonomy and processes has not been conducted [9, 10, 27, 28, 43].

3.2 Pre–paradigm Issues with Agility
The gap in agile process theory represents the normal evolution of any intellectual venture. The current agile processes could be considered to be in a pre–paradigm state3. This is a state in which the inconsistencies in the current paradigm (high–ceremony methods) are resisted until a new paradigm emerges [53]. Some questions are appropriate for these emerging agile methods:
• Can these methods be evaluated using the scientific principles found in the high–ceremony methods?
• Can the management of ERP systems acquisition and deployment be reduced to a set of scientific principles?
• How does the paradigm of agility compare with the more traditional methods described in §3.1?
• How are gaps in the current high–ceremony methods filled by agile methods?

3.3 Agile Project Management Principles
It is common to speak of agile methods in the context of the lightweight activities used to manage the development or acquisition of software. These activities include requirements, design, coding, documentation, and testing processes using the minimal set of activities and artifacts needed to reach the end goal – a working software system. Applying the concept of agility to the management of a software project is a natural evolutionary step from high–ceremony processes. However, several questions need to be answered by the agile process before proceeding:
• How can these minimalist approaches be applied in a COTS integration environment while still maintaining the necessary integrity of the delivered product – cost control, functional capabilities, resource management, and timely delivery?
• Which project management process simplifications are appropriate for the ERP domain and which are not?
• Are all lightweight and agile project management process steps applicable to the ERP problem domain? If not, which steps are applicable [48]?

3 A paradigm is "… essentially a collection of beliefs shared by scientists, a set of agreements about how problems are to be understood." A pre–paradigm state is characterized by an abundance of initiatives, the development of standards, and the increasing use of methods and structure [42, 60].
3.4 An Agile ERP Delivery Process
Agile methods emphasize rapid and flexible adaptation to changes in the process, product, business, and deployment environment [9, 10]. This is a generic definition of agile, and not very useful without some specific context. Before establishing this context, though, any agile process must include three major attributes. It must be:
• Incremental, iterative, and evolutionary – allowing adaptation to both internal and external events.
• Modular and lean – allowing components of the process to come and go depending on specific needs of the participants and stakeholders.
• Time based – built on work cycles, which contain feedback loops, checkpoints, and guidance on using this information in the next cycle.

3.5 An Options Approach to Decision Making
In the 1980s Barry Boehm established a framework for an economics–oriented approach to software development focused on cost estimation [20, 21]. These concepts have been extended in many directions, including the economic tradeoffs made during COTS product deployment [35]. The selection, deployment, and operation of a complex software system are subject to a high degree of uncertainty. The reasons for this uncertainty are numerous: general macroeconomic influences, changing stakeholder requirements, and changing demands from customers and consumers for specific capabilities [38]. Classical financial analysis techniques, such as discounted cash flow (DCF), calculation of net present value (NPV), or internal rate of return (IRR), are not capable of dealing with this uncertainty. DCF treats assets as passively held, not actively managed, as they would be in an ERP project. ERP projects have the flexibility to make changes to investments when new information is obtained. Treating this flexibility as an option allows decisions to be made in the presence of uncertainty. The fundamental advantage of this real options framework (versus financial options) over the traditional DCF legacy IT project framework is that the resulting valuations incorporate the value created by making smart choices over time in the presence of changing information and risk assessments4. Many of the choices in the selection and deployment of an ERP system are made without the theoretical or conceptual foundations described in the previous paragraphs [72]. An important distinction between software development decision–making and COTS decision–making is that COTS decisions are often irrevocable: individual software modules cannot be refactored or redacted, since the source code is not available.
Performing the economic tasks above without some quantitative tools to guide the decision maker leads to poor choices at best and chaos at worst. Chasing the next optimization, gadget, or latest vendor recommendation has become all too common.
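A hedged numerical sketch makes the DCF-versus-options distinction above concrete. The figures and helper names below are invented for illustration, not taken from the paper, and discounting is deliberately ignored to keep the contrast visible:

```python
def npv_commit_now(cost, payoff_up, payoff_down, p_up):
    """Expected value of investing today, before the uncertainty resolves."""
    return p_up * payoff_up + (1 - p_up) * payoff_down - cost

def value_of_deferral(cost, payoff_up, payoff_down, p_up):
    """Wait one period, then invest only in the favorable state."""
    return p_up * max(payoff_up - cost, 0) + (1 - p_up) * max(payoff_down - cost, 0)

# Invented figures: deploying a module costs 10; it pays 18 if the new
# business process sticks, 4 if it is reworked away; each state is
# equally likely.
now = npv_commit_now(10, 18, 4, 0.5)        # 0.5*18 + 0.5*4 - 10 = 1.0
defer = value_of_deferral(10, 18, 4, 0.5)   # 0.5*(18-10) + 0.5*0 = 4.0
print(f"commit now: {now:.1f}, with option to defer: {defer:.1f}")
```

The difference between the two values is the option value of waiting for information: the passive DCF view commits to the downside state, while the options view truncates it, which is exactly the asymmetry exploited by staged ERP deployment.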
4 There are other methods to aid in decision–making besides options pricing – utility theory and dynamic discounted cash flow are examples. Each of these approaches makes assumptions as to its applicability, advantages, and disadvantages [75].
3.6 A Quick Options Tutorial
An options–based decision process can be used in agile ERP deployment to great advantage [10, 35, 36, 45]. An option is a contract that confers on its holder the right, without obligation, to acquire or dispose of a risky asset at a set price within a given period of time. The holder may exercise the option by buying or selling the underlying asset before its expiration date if the net payoff from the transaction is positive. If the holder does not exercise the option by the expiration date, the option expires worthless. The value of the option is the amount one would pay to buy the contract if it were traded in an open market. An option that gives the right to acquire an asset is a call option; an option that gives the right to dispose of an asset is a put option. The value of an option is linked to its asymmetric nature – the holder has the right, but not the obligation, to exercise the option. The exercise takes place only if and when it is beneficial to do so [29, 30, 55].

3.7 Important Assumptions about Real Options
The use of an options–based decision process in software development has been popularized in the eXtreme Programming methodology [17]. For the options–based decision process to be properly applied, several conditions must exist:
• An option has value only if there is uncertainty in the outcome, the resulting value, or the impact on future decisions. In software, defining the dimensions of this uncertainty is difficult [61].
• The decision process and the consequences of the decision must be irreversible.
• Irreversibility implies that the optionable asset is scarce and difficult to replicate in a timely manner. In the case of software decision processes, the scarce item is knowledge about the underlying technical and business processes in the form of core competencies [51].

Project success is related to the maturity of an organization, its capabilities in dealing with projects and uncertainty, and its ability to learn from the past [62, 68]. In the absence of these conditions, an options–based decision-making process may have little to offer. There are several theoretical difficulties as well with the options concepts presented in [17]:
• In software development the underlying asset is not actually traded.
• In other cases the asset exists only as a result of exercising the option and is not tradable independently from the decision process.
• In richly traded markets there is information about uncertainty and values. In the low–volume world of IT projects, obtaining valid data about future values, treating this data consistently, and dealing with the unqualified effects of staff, business processes, and changing markets results in a very different valuation process5.

5 This argument is presented in Strassmann's The Squandered Computer [73]. In this work he dismisses the use of real options in constructing values in incomplete markets – markets in which prices are not observable. Strassmann is correct in that arbitrage–based pricing techniques are not theoretically appropriate for IT projects. Gathering valid data is the source of the problem here. However, when markets are incomplete, the data required for pricing can be obtained from experts in the problem domain. An estimate of the likelihood of change is produced in the normal course of software engineering management [23].
Agile Project Management Methods for ERP
Using the natural uncertainty of the ERP domain, core competencies can be used to produce complete market information and thereby apply the options–based decision process to advantage [6, 34, 71].

3.8 Measuring Value in COTS Project Management
Much of the agile literature discusses value creation. Several questions arise in the context of options theory:
• In what dimensions and units is the value measured?
• How are the contingent future payoffs valued?
• What is the role of risk–aversion in valuing contingent payoffs?
• How can tradeoffs in multi–dimensional value spaces be evaluated?
• How can the value of an option be determined in the presence of uncertainty and incomplete knowledge?
• How can core competency be used as the source of the option’s value, as discussed in §3.7?
These questions are just beginning to be addressed in the agile literature. Some answers can be found in utility theory and multi–objective decision–making [42]. Even in the absence of answers to these questions, agile methods can be valuable in the management of ERP systems deployment, since asking the questions focuses the attention of all participants on the uncertainty and irreversibility of the decision process [70].
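One of the questions above – how contingent future payoffs are valued – can be grounded with a toy calculation. The sketch below is my illustration, not the paper’s: it prices a one–period option to defer an irreversible deployment using expert–estimated outcomes, as footnote 5 suggests for incomplete markets. The function name and all numbers are hypothetical placeholders.

```python
# Illustrative sketch: the value of deferring an irreversible deployment one
# period, with expert-estimated payoffs standing in for market prices.
# All figures are hypothetical.

def deferral_option_value(v_up, v_down, p_up, cost, discount):
    """Compare committing now against waiting one period.

    v_up, v_down -- expert-estimated project values in the good/bad outcomes
    p_up         -- expert-estimated probability of the good outcome
    cost         -- irreversible deployment cost
    discount     -- one-period discount factor
    """
    # Committing now: expected value minus cost, downside included.
    commit_now = p_up * v_up + (1 - p_up) * v_down - cost
    # Deferring: in each outcome, deploy only if it pays off.
    defer = discount * (p_up * max(v_up - cost, 0.0)
                        + (1 - p_up) * max(v_down - cost, 0.0))
    return commit_now, defer

now, wait = deferral_option_value(v_up=150.0, v_down=40.0, p_up=0.5,
                                  cost=90.0, discount=0.95)
print(f"commit now: {now:+.1f}, defer: {wait:+.1f}")  # commit now: +5.0, defer: +28.5
```

The gap between the two numbers is the value of the option to wait; it exists only because the downside branch can be avoided – the irreversibility condition discussed above.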
4 Applying Agile Methods to ERP
Agility implies a systematic vision of the outcome – an intelligent action or ingenium that makes it possible to connect separate entities and their outcomes in a rapid and suitable manner. Ingenium: … the way which we build while going [77].
Agile methods possess values and principles that can be considered heuristics that guide the process application through the mechanism of ingenium.

4.1 Agile Project Values
The set of underlying values for an agile project includes6:
• Communication – communication of information within and outside an agile project is constant. These communication processes are essential social activities for the project participants.
• Simplicity – defines the approach of addressing the critical success factors of the project in terms of the simplest possible solution. See Fig. 3 for the ERP CSFs.

6 These values are taken from the Agile Modeling and eXtreme Programming resources. They are not unique, since they can be traced to the earliest project and program management sources; see [66] for a good review of adaptive and agile processes in the aerospace business. But they do represent a cohesive set of values and principles articulated by the agile community.
• Feedback – “optimism is an occupational hazard of software development, feedback is the cure” [17].
• Courage – important decisions and changes in the direction of the project must be made with courage7. This means having the courage not to engage in non–value–added activities or artifacts.
• Humility – the best project managers acknowledge they don’t know everything and must engage the stakeholders to close the gaps.

4.2 Applying Agile Principles
Using these agile values, the following principles create the foundation for managing ERP projects in an agile manner8.
• Assume Simplicity – as the project evolves, it is assumed that the simplest solution is best9. Overbuilding the system or any artifact of the project must be avoided. The project manager should have the courage not to perform a task or produce an artifact that is not needed for the immediate benefit of the stakeholders.
• Embrace Change – since requirements evolve over time, the stakeholders’ understanding of these requirements evolves as well. Project stakeholders themselves may change as the project makes progress. Project stakeholders may change their point of view, which in turn may change the goals and success criteria of the project. These changes are a natural part of an ERP project.
• Enabling The Next Effort – the project can still be considered a failure even when the team delivers a working system to the users. Part of fulfilling the needs of the stakeholders is to ensure the system is robust enough to be extended over time. Using Alistair Cockburn’s concept, “when you are playing the software development game your secondary goal is to setup to play the next game” [27]. The next phase may be the development of a major release of the system, or it may simply be the operation and support of the current system.
• Incremental Change – the pressure to get it right the first time can overwhelm the project. Instead of futilely trying to develop an all–encompassing project, develop a small portion of the system, or a high–level model of a larger portion of the system. Evolve this portion over time, and discard portions that are no longer needed, in an incremental manner.
• Maximize Stakeholder Value – the project stakeholders are investing resources – time, money, facilities, etc. – to create a system to meet their needs. Stakeholders expect their investment will be applied in the best way.
• Manage With A Purpose – by creating artifacts that have stakeholder value. Identify who needs the artifact. Identify a purpose for creating the artifact.
• Multiple Project Views – considering the complexity of any modern information technology system construction or acquisition process, there is need for a wide range of presentation formats in order to communicate effectively with the stakeholders, participants, and service providers.

7 This term is used in [17]. I do not consider it the same courage found in soldiers, firefighters, and police officers.
8 These principles are attributed to Scott Ambler and are adapted with permission to the ERP acquisition and deployment domain [4, 5].
9 This may not always be the case for ERP, but it is a good starting point.
• Rapid Feedback – the time between an action and the feedback on that action must be minimized. Work closely with the stakeholders to understand and analyze the requirements and to develop an actionable plan that provides numerous opportunities for feedback.
• Working Software Is The Primary Goal – not the production of extraneous documentation, software, or management artifacts. Any activity that does not directly contribute to the goal of producing working software should be examined to determine its value.
• Travel Light – every artifact must be maintained over its life cycle, so the effort needed to maintain these artifacts must be balanced with their value.
These principles need a context in which to be applied. More importantly, they need specific actionable outcomes within that context.
5 An Agile Application Example
Much of the agile literature provides recommendations and guidelines independent of a business domain. What is needed is a domain specific set of principles and practices that can serve as a checklist for getting started [44].

5.1 ERP Functional Domains
There are numerous ERP business domains and functions within those domains. Narrowing the domain from this long list will help focus the case study context. The business domains in which ERP plays a critical role include:

Product Line Management – Program Management, Product Data Management, Quality Management, Asset Management
Supply Chain Management – Networking, Planning, Coordination, Execution
Customer Relationship Management – Customer Engagement, Business Transactions, Order Fulfillment, Customer Service
Financials – Financial Operations, Accounting, Corporate Services
Human Resources – Administration, Payroll, Organizational Management and Development, Time Management, Legal Reporting, Strategies
Procurement – Indirect Materials Procurement, Direct Materials Procurement, Electronic Tendering, Integrated Analytics
Fig. 1.
These functions are too broad for a useful example of agile deployment. One functional area that impacts many business processes is Product Data Management (PDM). PDM systems manage product entities from design engineering through release to manufacturing. In the ERP taxonomy in Fig. 1, PDM is a good starting point for an example.
5.2 PDM Domain Relationships with ERP
Engineering processes drive product development, so engineering tools are at the heart of the PDM system’s interaction with the engineering user community. Engineering processes are good candidates for agile deployment, since the process improvement aspects of engineering processes can usually only be discovered by putting them into practice, experimenting with various tools and user interactions, and evolving these business processes to deal with unknown and possibly unknowable demands from the marketplace.

5.3 Agile PDM Domain Practices
Using the agile principles stated above, Fig. 2 describes specific actions that implement these principles for a PDM deployment within an ERP project.

Assume Simplicity
• COTS products define the requirements more than the users do. Don’t make changes to the system if it can be avoided. Start with the out–of–the–box system and discover gaps. Fill the gaps with other COTS products whenever possible.
• Separation of concerns is a critical success factor for both products and processes. Base these separations on the business architecture of the system, and then apply the technical architecture.
• Decoupled work processes create architectural simplicity. Search for opportunities to decouple work processes along technical architecture boundaries.
• Stateless management of connections between application domains isolates components.
• Minimize product structure attribute creation early in the deployment cycle.
• Provide a means to add product attributes and relations to the object model later in the deployment cycle.
• Start with the simplest business process; verify the system can be deployed against these processes. Progress to more difficult processes, but always search for the simplest solutions.

Embrace Change
• Object architectures enable change, but ruthlessly maintain proper object attributes. Modularity, information hiding, and other object attributes can pay large dividends over time.
• Model–based thought processes focus requirements.

Enable the next effort
• Continuous delivery using selected products. Focus on vertical versus horizontal delivery (making the disk move on the first instance and every instance after that).
• Isolate components to provide a replaceable architecture.

Incremental Change
• Architecture–driven planning in depth is the primary role of the project manager and the architecture staff.
• Use a battle–planning paradigm for the daily project activities – it’s chaos at the low level and big–picture strategy at the high level.
• Focus on values for today, while keeping the generation of value in the future in mind.
• Continuously evaluate the future opportunity costs.
• Plan globally, implement locally, guided by architecture.
• Rapid planning in depth is not an oxymoron. COTS integration is an experience–based discipline. Skills are important, but are secondary since most decisions are irrevocable.
Fig. 2.
Maximize Value
• Put tools in the hands of the users. Discover what we have to do for the people who have to do the work.
• Provide these tools in a rapid, efficient, and beneficial manner, with the minimum of resources and disruptions to the ongoing operation.
• The stakeholders define the dimensions of value: ask them what they want, when they want it, and how much they’re willing to pay.

Manage with a Purpose
• Architecture–centered management places the proper boundaries on creativity.
• Always define the outcome of an action: who benefits? How can this benefit be recognized? What does this benefit cost?
• Never confuse effort with results.

Multiple Views
• Objects (static and dynamic) are nice, but they don’t show the business process.
• Data flow is nice, but it doesn’t show the underlying business object architecture.
• Control flow is useful for business process improvement, but be careful about redundant data and persisted entities.
• Event and data sources and sinks can be used for isolating business process boundaries.
• Business processes can be used to define the highest–level boundaries.
• Interface exchange artifacts are critical for maintaining separation of concerns.
• Inversion of control – the identification and management of the interface control points is a critical success factor.

Rapid Feedback
• Continuous engagement with the stakeholders.
• A war–room mentality in which the participants are fighting the system, not each other.

Working Software
• Continuous delivery of functionality.
• COTS products change this concept, but system integration efforts are just as difficult and important.
• Continuous delivery using standard products with the minimum of customization.
• Use the vendor’s tools to get something working fast.
• Avoid customizing a COTS product if at all possible.

Travel Light
• Analyze and model once, publish many. Use technology to reduce the white space in the process and organization.
• Move fast and light. Use experience–based behaviors and high–level specifications to guide architecture. Low–level specifications add NO sustaining value in an ERP system; working code is the value to the stakeholders.
• Working software is the final specification. Use specifications to capture knowledge that will be needed independent from the working software. This can be interface specifications for 3rd parties, justifications for decisions, and other tribal knowledge conveyance materials.
Fig. 2. (continued)
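The “isolate components”, “replaceable architecture”, and “inversion of control” entries in Fig. 2 can be sketched in code. This is my illustration, not the paper’s, and every class and method name is a hypothetical placeholder: business logic is written against a small interface control point, and each COTS product sits behind an adapter so one vendor’s product can replace another’s.

```python
# Sketch of a replaceable architecture: the business process depends only on
# an abstract interface; each (hypothetical) COTS product is wrapped in an
# adapter behind that interface.
from abc import ABC, abstractmethod

class ProductDataStore(ABC):
    """Interface control point between business processes and the PDM tool."""
    @abstractmethod
    def release_to_manufacturing(self, part_number: str) -> bool: ...

class VendorAAdapter(ProductDataStore):
    def release_to_manufacturing(self, part_number: str) -> bool:
        # Would call vendor A's API here; stubbed for the sketch.
        return True

class VendorBAdapter(ProductDataStore):
    def release_to_manufacturing(self, part_number: str) -> bool:
        # A different COTS product behind the same interface.
        return True

def release_workflow(store: ProductDataStore, part_number: str) -> str:
    # Business process written only against the interface, never the vendor.
    ok = store.release_to_manufacturing(part_number)
    return "released" if ok else "held"

print(release_workflow(VendorAAdapter(), "PN-1001"))  # released
```

Because `release_workflow` never names a vendor, answering the question “can I replace your product with another?” reduces to writing one new adapter.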
5.4 Agile ERP Heuristics
Agility is about being adaptive. Heuristics are a way of learning from the past and adapting to the future. There is a large body of heuristic–oriented guidelines for programming languages and other low level development activities. The following are broadly applicable heuristics in the ERP integration problem domain [66]:
• Choose components so they can be implemented independently of the internal behavior of others. Ask the vendor: can I replace your product with another?
• The number of defects remaining undiscovered after a test suite is proportional to the number of defects found during the test. The constant of proportionality depends on the thoroughness of the test, but is rarely less than 0.5 in the traditional test–last environment.
• Very low rates of delivered defects can be achieved only by very low rates of defect insertion throughout the development process. This is the primary contribution of development processes like Extreme Programming [17].
• The system must be grown, not built. The use of evolutionary development and deployment is a critical indicator of an agile organization. Without this, the process is not agile.
• The cost of removing a defect from the system grows exponentially with the number of cycles since the defect was inserted. Constant and complete test processes are an indicator of agile organizations [22].
• Personnel skills dominate all other factors in productivity and quality [49].
• The cost of fixing a defect does not rise with time. It may be cheaper to discover a requirements defect in final use testing than in any other way. Continuous releases are critical in agile organizations. Put the system in the hands of the stakeholders as often as possible.
• Architecture–based processes provide a touchstone when things go wrong – and they will go wrong. The project manager should always ask: how does this proposed change fit into the architecture, change the architecture, or affect the architecture in some way?

5.5 The Critical Success Factors for Agile ERP
The question of which specific agile processes are to be applied in the ERP domain can be addressed by focusing on the Critical Success Factors related to ERP [63]. One such list includes [70]:

• Top Management Support
• Management of Expectations
• Use of Vendor’s Tools
• Project Management
• Use of Consultants
• Business Process Reengineering
• Dedicated Resources
• Change Management
• Education on Processes
• Interdepartmental Cooperation
• Project Champions
• Vendor/Customer Relationships
• Careful Package Selection
• Steering Committee
• Minimal Customization
• Defining the Architecture
• Project Team Competence
• Clear Goals and Objectives
• Interdepartmental Communication
• Ongoing Vendor Support
Fig. 3.
Applying agile principles and practices in support of these CSFs will guide the project manager toward agile methods for addressing the complexities of ERP deployment.
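Two of the quantitative heuristics in §5.4 lend themselves to a toy calculation. The sketch below is mine, not from [66]: the 0.5 lower bound on the proportionality constant is stated by the heuristic itself, while the base cost and growth rate are hypothetical placeholders.

```python
# Illustrative arithmetic for two heuristics from Sec. 5.4. Only the 0.5
# proportionality lower bound comes from the text; the growth rate and base
# cost are assumed for illustration.

def remaining_defects(found_in_test, proportionality=0.5):
    # Defects still undiscovered after a test suite, per the heuristic that
    # "remaining is proportional to found" with a constant rarely below 0.5.
    return proportionality * found_in_test

def cost_to_fix(base_cost, cycles_since_insertion, growth=2.0):
    # Exponential growth of removal cost with the number of cycles since
    # the defect was inserted (the growth rate here is an assumption).
    return base_cost * growth ** cycles_since_insertion

print(remaining_defects(40))   # 20.0 defects still latent
print(cost_to_fix(1.0, 5))     # 32.0x the immediate-fix cost
```

Even with assumed constants, the shapes of these curves motivate the heuristics’ advice: test constantly, and release continuously so defects are found few cycles after insertion.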
6 Call to Action

It is both interesting and significant that the first six out of sixteen technology factors associated with software disasters are specific failures in the domains of project management, and three of the other technology deficiencies can be indirectly assigned to poor project management practices [50].
The management of an ERP deployment involves requirements gathering, vendor selection, product acquisition, system integration and software development, and
finally system deployment and operation. It involves risk management, stakeholder politics, financial support, and other intangible roles and activities that impact project success. By applying the values and principles of agile methods along with risk management, clearly defined and articulated critical success factors, architecture–driven design, and people management, the agile team can deliver real value to the stakeholders in the presence of uncertainty, while maximizing the return on assets and minimizing risk.
References

The following resources have been used to assemble the compendium of ideas presented in this paper. Since many of the ideas presented here are not mine, I give full acknowledgement to the original sources and authors of the materials presented here.

1. Alexander, Christopher, The Timeless Way of Building, Oxford University Press, 1979.
2. Alexander, Christopher, Notes on the Synthesis of Form, Harvard University Press, 1964.
3. Agresti, New Paradigms for Software Development, IEEE Computer Society Press, 1986.
4. Ambler, Scott, www.agilemodeling.com.
5. Ambler, Scott, Process Patterns and More Process Patterns, Cambridge University Press, 1998 and 1999.
6. Amram, Martha, and John Henderson, “Managing Business Risk by IT Investment: The Real Options View,” CIO Magazine, March 1999.
7. Amram, Martha, Nalin Kulatilaka, and John Henderson, “Taking an Option on IT,” CIO Magazine, June 15, 1999.
8. Amram, Martha and Nalin Kulatilaka, Real Options: Managing Strategic Investments in a World of Uncertainty, Harvard Business School Press, 1999.
9. Aoyama, Mikio, “New Age of Software Development: How Component–Based Software Engineering Changes the Way of Software Development,” 1998 International Workshop on Component–Based Software Engineering, 1998.
10. Aoyama, Mikio, “Agile Software Process and Its Experience,” International Conference on Software Engineering, 1998.
11. “APICS Survey,” CPIM Journal, 46, 17 July 2000.
12. Arthur, W. Brian, “Increasing Returns and the New World of Business,” Harvard Business Review, 74(4), pp. 100–110, July–August 1996.
13. Austin, Robert D. and Richard L. Nolan, “How to Manage ERP Initiatives,” Working Paper 99–024, 1998.
14. Baker, F. T., “Chief Programmer Team Management of Production Programming,” IBM Systems Journal, 11(1), 1972.
15. Basili, Victor, “Iterative Enhancement: A Practical Technique for Software Improvement,” IEEE Transactions on Software Engineering, 1(4), December 1975.
16. Bateman, T. S. and C. P. Zeithaml, Management: Function and Strategy, Irwin, 1990.
17. Beck, Kent, Extreme Programming Explained: Embrace Change, Addison Wesley, 1999.
18. Bennington, “Production of Large Computer Programs,” Symposium on Advanced Computer Programs for Digital Computers, sponsored by the Office of Naval Research, June 1956. Reprinted in Annals of the History of Computing, October 1983, pp. 350–361. Reprinted at ICSE ’87, Monterey, California, March 30–April 7, 1987.
19. Boehm, Barry, “Get Ready for Agile Methods with Care,” IEEE Computer, pp. 64–69, January 2002.
20. Boehm, Barry, and Kevin Sullivan, “Software Economics: A Roadmap,” in The Future of Software Engineering, special volume, A. Finkelstein, Ed., 22nd International Conference on Software Engineering, June 2000.
21. Boehm, Barry, “Software Engineering Economics,” IEEE Transactions on Software Engineering, 10, January 1984.
22. Boehm, Barry, “Anchoring the Software Process,” IEEE Software, pp. 73–82, July 1996.
23. Boehm, Barry, et al., Software Cost Estimation with COCOMO II, Prentice Hall, 2000.
24. Brooks, Fred, “No Silver Bullet: Essence and Accidents of Software Engineering,” IEEE Computer, 20(4), pp. 10–19, April 1987.
25. Brooks, Fred, The Mythical Man–Month, Addison Wesley, 1995.
26. Charette, Robert N., “Large–Scale Project Management is Risk Management,” IEEE Software, pp. 110–117, July 1996.
27. Cockburn, Alistair, http://crystalmethodologies.org/
28. Cockburn, Alistair, http://members.aol.com/humansandt/crystal/clear/
29. Copeland, Thomas E. and Philip T. Keenan, “How Much is Flexibility Worth?” The McKinsey Quarterly, 2, 1998.
30. Copeland, Thomas E. and Philip T. Keenan, “Making Real Options Real,” The McKinsey Quarterly, 3, 1998.
31. Courtney, Hugh, “Making the Most of Uncertainty,” The McKinsey Quarterly, 4, pp. 38–47, 2001.
32. Davis, Alan M., “Fifteen Principles of Software Engineering,” IEEE Software, 11(6), pp. 94–96, November/December 1994.
33. Earl, Michael, Jeffery Sampler, and James Short, “Strategies for Reengineering: Different Ways of Initiating and Implementing Business Process Change,” Centre for Research in Information Management, London Business School, 1995.
34. Earl, Michael, “Information Systems Strategy: Why Planning Techniques are Never the Answer,” Centre for Research in Information Management, London Business School, 1995.
35. Erdogmus, H., “Valuation of Complex Options in Software Development,” First Workshop on Economics–Driven Software Engineering Research, EDSER–1, May 17, 1999.
36. Flatto, Jerry, “The Role of Real Options in Valuing Information Technology Projects,” Association of Information Systems Conference, 1996.
37. Funtowicz, S. and J. Ravetz, “Post–Normal Science: A New Science for New Times,” Scientific European, pp. 95–97, March 1992.
38. Gattiker, Thomas and Dale Goodhue, “Understanding the Plant Level Costs and Benefits of ERP: Will the Ugly Duckling Always Turn Into a Swan?” Proceedings of the 33rd Hawaii International Conference on System Sciences, 2000.
39. Georgescu–Roegen, Nicholas, The Entropy Law and Economic Progress, Harvard University, 1971.
40. Glass, Robert, “Failure is Looking More Like Success These Days,” IEEE Software, January/February 2002.
41. Goldman, Steven L., Roger N. Nagel and Kenneth Preiss, Agile Competitors and Virtual Organizations: Strategies for Enriching the Customer, Jossey–Bass, 1994.
42. Hammond, J. S., R. L. Keeney, and H. Raiffa, Smart Choices: A Practical Guide to Making Better Decisions, Harvard Business School Press, 1999.
43. Highsmith, Jim, Adaptive Software Development, Dorset House, 1999.
44. Hoffman, Hubert and Franz Lehner, “Requirements Engineering as a Success Factor in Software Projects,” IEEE Software, pp. 58–66, July/August 2001.
45. Holland, Christopher and Ben Light, “A Critical Success Factors Model for ERP Implementation,” IEEE Software, pp. 30–36, May/June 1999.
46. Huber, Thomas, Rainer Alt, and Hubert Österle, “Templates – Instruments for Standardizing ERP Systems,” Proceedings of the 33rd Hawaii International Conference on System Sciences, 2000.
47. Jeffries, Ron, Extreme Programming Installed, Addison Wesley, 2000.
48. Jones, Capers, Software Assessments, Benchmarks, and Best Practices, Addison Wesley, 2000.
49. Jones, Capers, “What it Means to be Best in Class,” Version 5, February 10, 1998.
50. Jones, Capers, Patterns of Software Systems Failures and Success, International Thomson Computer Press, 1996.
51. Kogut, Bruce and Nalin Kulatilaka, “Strategy, Heuristics, and Real Options,” The Oxford Handbook of Strategy, Chapter 30, 2001.
52. Kogut, Bruce and Nalin Kulatilaka, “What is Critical Capability?” Reginald H. Jones Center Working Paper, Wharton School, 1992.
53. Kuhn, T. S., The Structure of Scientific Revolutions, University of Chicago Press, 1962.
54. Kulik, P. and R. Samuelsen, “e–Project Management for the New Reality,” PM Network Online, Project Management Institute, 11 April 2001.
55. Leslie, Keith J. and Max P. Michaels, “The Real Power of Real Options,” The McKinsey Quarterly, 3, pp. 97–108, 1997.
56. Morris, C. and C. Ferguson, “How Architecture Wins Technology Wars,” Harvard Business Review, pp. 86–96, March–April 1993.
57. Oberndorf, Patricia and David Carney, A Summary of DoD COTS–Related Policies, SEI Monographs on the Use of Commercial Software in Government Systems, Software Engineering Institute, Carnegie Mellon University, 1998.
58. Osterweil, Leon J., “Software Processes are Software Too,” Proceedings of the 9th International Conference on Software Engineering (ICSE 1987), pp. 2–13, March 1987, Monterey, CA.
59. Osterweil, Leon J., “Software Processes Are Software Too, Revisited,” Proceedings of the 19th International Conference on Software Engineering (ICSE 1997), pp. 540–548, May 1997, Boston, MA.
60. Pajares, Frank, “The Structure of Scientific Revolutions,” Outline and Study Guide, Emory University, http://www.emory.edu/EDUCATION/mfp/Kuhn.html.
61. Potters, Marc, et al., “Financial Markets as Adaptive Ecosystems,” May 31, 2001. arXiv:cond–mat/9609172.
62. Remy, Ron, “Adding Focus to Improvement Efforts with PM3,” PM Network, July 1997.
63. Rockart, J. F. and C. V. Bullen, “A Primer on Critical Success Factors,” Center for Information Systems Research, Working Paper No. 69, Sloan School of Management, MIT, 1981.
64. Royce, Walker, Software Project Management, Addison Wesley, 1998.
65. Royce, Winston W., “Managing the Development of Large Scale Software Systems,” Proceedings of IEEE WESCON, pp. 1–9, August 1970.
66. Rechtin, The Art of Systems Architecting, 2nd Edition, CRC Press, 2000.
67. Ross, Jeanne, “Surprising Facts About Implementing ERP,” IT Pro, pp. 65–68, July/August 1999.
68. Saures, Isabelle, “A Real World Look at Achieving Project Management Maturity,” Project Management Institute 29th Annual Seminars/Symposium, October 9–15, 1998.
69. Shaw, Mary and D. Garlan, Software Architecture, Prentice Hall, 1996.
70. Sommers, Toni and Klara Nelson, “The Impact of Critical Success Factors across the Stages of Enterprise Resource Planning Implementations,” Proceedings of the 34th Hawaii International Conference on System Sciences, 2000.
71. Sullivan, Kevin P., William G. Griswold, Yuanfang Cai, and Ben Hallen, “The Structure and Value of Modularity in Software Design,” Proceedings of the Joint International Conference on Software Engineering, September 2001.
72. Sullivan, Kevin, P. Chalasani, S. Jha, and V. Sazawal, “Software Design as an Investment Activity: A Real Options Perspective,” in Real Options and Business Strategy: Applications to Decision–Making, edited by Lenos Trigeorgis, Risk Books, 1999.
73. Strassmann, P. A., The Squandered Computer, The Information Economics Press, 1997.
74. Szulanski, Gabriel, “Unpacking Stickiness: An Empirical Investigation of the Barriers to Transfer Best Practices Inside the Firm,” INSEAD Study, Academy of Management Best Paper Proceedings, pp. 437–441, November 1995.
75. Teisberg, E. O., “Methods for Evaluating Capital Investment Decisions under Uncertainty,” in Real Options in Capital Investment: Models, Strategies, and Applications, edited by L. Trigeorgis, Praeger, 1995.
76. Thorburn, W. M., “The Myth of Occam’s Razor,” Mind, 27, pp. 345–353, 1918.
77. Vico, Giambattista (1668–1744), “Method of the Studies of Our Times,” Naples, Italy, 1708.
78. Wallnau, Kurt, Scott Hissam, and Robert Seacord, Building Systems from Commercial Components, SEI Series in Software Engineering, Addison Wesley, 2002.

Glen Alleman is the Chief Technology Officer of Niwot Ridge Consulting, specializing in enterprise application integration, system architecture, business process improvement, and project management applied to the manufacturing, electric utility, petrochemical, aerospace, process control, publishing, and pharmaceutical industries.
Extreme Programming in a Research Environment William A. Wood and William L. Kleb NASA Langley Research Center, Hampton VA 23681, USA {W.A.Wood,W.L.Kleb}@LaRC.NASA.Gov
Abstract. This article explores the applicability of Extreme Programming in a scientific research context. The cultural environment at a government research center differs from the customer-centric business view. The chief theoretical difficulty lies in defining the customer to developer relationship. Specifically, can Extreme Programming be utilized when the developer and customer are the same person? Eight of Extreme Programming’s 12 practices are perceived to be incompatible with the existing research culture. Further, six of the nine “environments that I know don’t do well with XP” [Beck, 2000] apply. A pilot project explores the use of Extreme Programming in scientific research. The applicability issues are addressed and it is concluded that Extreme Programming can function successfully in situations for which it appears to be ill-suited. A strong discipline for mentally separating the customer and developer roles is found to be key for applying Extreme Programming in a field that lacks a clear distinction between the customer and the developer. Keywords: XP, extreme programming, customer, scientific application, testing, research, software development process
1 Introduction
Extreme Programming (XP), as an agile programming methodology, is focused on delivering business value. In the realm of exploratory, long-term, small-scale research projects it can be difficult to prioritize near-term tasks relative to their monetary value. The assignment of even qualitative value can be particularly challenging for government research in enabling fields for which business markets have not yet developed. This fundamental conflict between near-term business value and long-term research objectives is manifested as a culture clash when the basic practices of XP are applied. A brief introduction to these problematic practices follows.

XP places a premium on the customer/developer relationship, requiring an on-site customer as one of its twelve practices. Both the customer and developer have clearly defined roles with distinct responsibilities. Both interact on a daily basis, keeping each other honest and in sync. The customer focuses the developer on the business value, while the developer educates the customer on the feasibility and cost of feature requests. In the context of long-term research, the technologies being explored may be immature or uncertain, years removed from commercial potential. In this situation the researcher can become the only customer, at least for the first several years, of their own development effort. What happens to the balance of power between customer and developer when they are the same person? Can a person serve two masters?

The government research lab environment conflicts with the pair programming and collective code ownership practices of XP because the compensation system, based on the Research Grade Evaluation Guide [2], emphasizes individual stature. Another practice, the 40-hour week, is problematic, though perhaps for an inverse reason than that encountered in programming shops. The experience of the present team is that only about 10 hours per week are mutually available for joint programming, with the rest of the time absorbed by responsibilities for other tasks or unavailable due to conflicting schedules. Another practice that is a potential show-stopper is the requirement for simple designs. Performance is always an issue for numerical analysis, and past experience with procedurally implemented and speed-optimized algorithms has verified the exponentially increasing cost to change the fundamental design of elaborate codes. The lure of premature optimization for the developer is very strong, particularly in the absence of a business-value-oriented customer. Three more of the core practices were perceived to be a poor fit with the research environment because it was not clear how to implement them for a science application as opposed to a business application. Continuous integration conflicts with the traditional approach of implementing algorithms in large chunks at a time. Testing, perhaps ironically for a scientific research community, was not commonly done at the unit level, and in fact the appropriate granularity for testing was not evident. Finally, only the naive metaphor seemed to present itself.

D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 89–99, 2002. © Springer-Verlag Berlin Heidelberg 2002
The following section discusses the existing culture at a research laboratory, detailing the inherent conflicts with the XP values. The next section provides the background for a pilot project to evaluate the applicability of XP for scientific computing. The project was conducted under the auspices of a plan to explore nontraditional but potentially high payoff strategies for the design and assessment of aerospace vehicles. Specific observations concerning the implementation of XP practices in a research programming environment are enumerated. The results of the pilot project are then presented with conclusions drawn as to the effectiveness of XP in the context of research-oriented programming.
2 Culture
Beck [1] presents a list of nine “environments that I know don’t do well with XP.” Six of these nine are counter to the existing culture at this research center. Beck prefaces his assertions of inapplicability with the caveat that the list is based upon his personal experiences and that, “I haven’t ever built missile nosecone software, so I don’t know what it is like.” The software developed for the research situation considered here is in fact intended for aerothermal predictions on nosecones of hypervelocity vehicles, and so the present study accepts
Extreme Programming in a Research Environment
Beck’s challenge that, “If you write missile nosecone software, you can decide for yourself whether XP might or might not work.” The counter-indicators to using XP as they apply to the present research situation are detailed in this section along with strategies for coping with them.

Addressing the issues in the order presented by Beck, the “biggest barrier to the success of an XP project” arises from an insistence on complete up-front design at the expense of “steering.” In February 2002, NASA announced a $23.3M award to Carnegie Mellon “to improve NASA’s capability to create dependable software.” Two-week training courses in the Personal Software Process (PSP) developed by Carnegie Mellon have already begun, complete with a 400-page introductory textbook. The PSP assigns two-thirds of the project time to requirements gathering, documenting, and design. Coding, with the possibility for steering, is not allowed until the final third of the project. Further, significant steering can trigger a ‘re-launch’, where the requirements and design process is started all over again. The present project blended PSP and XP in a 0:100% ratio, and so far has not encountered any administrative consequences.

Another cultural practice at odds with XP is “big specifications.” The ISO 9001 implementation at the Center includes a 45-page flowchart for software quality assurance (LMS-CP-4754) and a 17-page flowchart for software planning and development (LMS-CP-5528), in which only one of the 48 boxes contains “Code and Test”, located 75% of the way through. Despite threats of being ISO non-compliant, the present project simply ignored the approved software process, deferring the issue to when, or if, an ISO audit uncovers the discrepancy.
Beck observed, “Really smart programmers sometimes have a hard time with XP,” because they tend to “have the hardest time trading the ‘Guess Right’ game for close communication.” The members of the research teams typically have doctoral degrees, though not in computer science. The reward structure under which the researchers operate is based upon peer review of one’s stature in the field, so individual success and project management are highly valued, whereas team membership is not. Adopting XP for the first time required a lot of trust, suppressing some long-held programming styles in the belief that two people doing XP would be more productive than the sum of their individual efforts.

While the adoption of XP for large teams has been a frequent subject of debate, the present study faces the opposite problem: a small team of only two people. Maintaining the distinct roles of programmer, customer, recorder, and coach was perceived to be a challenge to the adoption of XP. With very small teams the literature was unclear as to which tasks could safely be performed solo and which would rapidly degenerate into cowboy coding. Also, with only two developers there would not be the cross-fertilization benefit of rotating partners. Another potential problem for the small team is inter-personal conflict: when communication turns to confrontation, there are no other team members to play the role of mediator. Addressing these concerns required diligence in delineating roles and a conscious decision to keep the team focused on productive work. To rein in the cowboy coding, test-driven pair programming was used
exclusively when implementing features. Pair programming was also preferred during refactoring, but solo refactoring was permitted when scheduling conflicts precluded pairing and no tests were broken or added.

“Another technology barrier to XP is an environment where a long time is needed to gain feedback.” A role of a government research center is to pursue long-term, revolutionary projects. Development cycles can be over a decade in length. The feedback loop on whether or not the project is headed in a fruitful direction can be measured in years. XP prefers steering inputs on a days-to-weeks time frame. It remained to be seen if long-term research goals could be recast in small, tangible increments suitable to XP’s 2–3 week iteration cycles. In practice, the research feedback time scale remains large, but for development purposes the technology features were able to be decomposed into small iteration chunks, following the simple design XP practice.

Beck cautions against “senior people with corner offices,” because of the barriers to communication. At research centers the senior engineers typically have individual offices¹. Further, colleagues are spread over multiple buildings at the local campus. Projects could also involve a collaboration with an off-site co-worker, such as a university professor. A trip to the junk furniture warehouse and borrowed time in a wood shop allowed the one-to-a-cubicle office layout to be refactored into a commons-and-alcoves arrangement [3], Figure 1.
3 Background
Despite the counter-indicators to the use of XP for scientific programming needs, the present project successfully competed for one-year funding to perform a spike² evaluation of XP. The funding source had specifically solicited bids exploring nontraditional methodologies for the field of aerospace engineering research that might produce extraordinary gains in productivity or enable entirely new applications. This evaluation of XP for a research environment was conducted by two researchers in three phases: learning, preparing, and implementation. Neither investigator had prior experience with XP. Learning was achieved through a combination of personal reading³ and sponsorship of the Modern Programming Practices Lecture Series⁴ through the co-located Institute for Computer Applications in Science and Engineering. The investigators also transitioned from procedural programming to object-oriented technologies, believing that switch was a necessary, though not sufficient, prerequisite for flattening the cost-of-change curve for software development and maintenance. In preparation for the XP experiment, environmental barriers were addressed. The office was refactored into an open development room with copious marker
¹ Or cubicles.
² A short-term prototyping assessment.
³ Bibliographic information is provided for books [1, 4–23], articles [24], and websites [25–27] that were found to be helpful.
⁴ For a list of speakers and supporting material, see http://www.icase.edu/series/MPP.
(a) Original    (b) Refactored
Fig. 1. The 15 × 17 office layout transitioned from large, isolated work spaces with desks separated by towering bookcases and joined by a narrow aisle to small isolated work spaces employing tables and a large common area consisting of a Beowulf cluster, a pair programming station, a conference table, and white boards. Note: the partition at the upper right of (b) can be moved to further isolate one or the other private work areas and all three areas can now accommodate pair programming
board space and a pair programming station was constructed with simultaneous dual keyboard/mouse inputs as shown in Figure 2, connected to a 16-processor Beowulf cluster. The development environment is: GNU/Linux operating system, Emacs IDE, and the Ruby programming language [28, 29]. The research value to be delivered by the spike project was a software testbed for evaluating the performance of an optimally adaptive Runge-Kutta coefficient strategy for the evolution of an advection-diffusion problem,

    u_t + \underbrace{\lambda \cdot \nabla u}_{\text{advection}} = \underbrace{\mu \nabla^2 u}_{\text{diffusion}} ,    (1)

in a multigrid context [30, 31]. Integration of Eq. (1) with application of Gauss’ divergence theorem leads to

    \frac{\partial}{\partial t} \int_\Omega u \, d\Omega = \oint_\Gamma \left( -u \lambda + \mu \nabla u \right) \cdot \hat{n} \, d\Gamma .    (2)

The desired Runge-Kutta strategy would optimize the damping of the high-frequency errors in the discrete representation of the temporal evolution in Eq. (2), while the multigrid scheme applied to the spatial discretization serves to alias discrete low-frequency errors into higher harmonics, which are efficiently damped by the temporal operator.
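To make the testbed's subject matter concrete, the following is a minimal, hypothetical sketch in Ruby (the project's implementation language) of one multi-stage Runge-Kutta step for a first-order upwind/central discretization of Eq. (1) on a periodic 1-D grid. None of this code is from the actual project, and the coefficient array `alpha` merely stands in for the optimized coefficients the testbed was built to evaluate; the defaults shown are the classic two-stage choice.

```ruby
# Hypothetical sketch: advance u_t + lambda*u_x = mu*u_xx on a periodic
# 1-D grid of spacing dx by one time step dt, using a simple multi-stage
# Runge-Kutta scheme u_new = u + a*dt*R(u_new) for each stage coefficient a.
def rk_step(u, lam, mu, dx, dt, alpha = [0.5, 1.0])
  n = u.size
  u_new = u
  alpha.each do |a|
    # Residual: first-order upwind advection plus central diffusion
    res = Array.new(n) do |i|
      adv  = -lam * (u_new[i] - u_new[(i - 1) % n]) / dx
      diff =  mu  * (u_new[(i + 1) % n] - 2 * u_new[i] + u_new[(i - 1) % n]) / dx**2
      adv + diff
    end
    u_new = u.each_index.map { |i| u[i] + a * dt * res[i] }
  end
  u_new
end
```

Because both operators telescope over a periodic grid, a constant field is a fixed point of the step and the discrete sum of u is conserved, which makes convenient sanity checks of the kind the paper's test suites describe.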
Fig. 2. Pair programming station consisting of two Herman Miller Aeron task chairs, a 60”-wide Anthro AdjustaCart, Logitech wireless keyboards and mice, Belkin keyboard and mouse switches, and two Viewsonic 18” LCD displays supporting a merged 2,560 × 1,024-pixel desktop. The sustenance items, refrigerator, microwave, fresh-air supply, and plants, can be seen at the right
Both investigators had independent prior experience programming related algorithms for the advection-diffusion equation using Fortran. Neither investigator had experience in team software development, object-oriented design, unit testing, or programming with the Ruby language.
4 Methodology
A serious effort was made to apply the 12 XP practices by the book. As described in Sect. 1, eight of the practices presented challenges for implementation. These challenges were caused by perceived environmental, historical, or cultural barriers. The biggest challenge was to have an on-site customer, specifically when the customer and developer are the same person. In the present case, the developers were writing software for their own use. With two team members it was decided that the individual with the most to gain from using the software would serve as the customer while the other individual would serve as the developer during the
planning game. During coding, both individuals served as developers until questions arose, at which point one individual would have to answer in the customer role. This switching of roles proved to be challenging for the individual performing dual jobs. During the planning game it was a challenge to think of stories without simultaneously estimating their cost. The game required a lot of communication and a conscious effort to think in terms of goals and remain focused on end results when playing the customer, rather than thinking of the work about to be performed as the developer. It was found that forcing a user-oriented viewpoint helped to focus the research effort, and it is believed that, while difficult and uncomfortable, the explicit role of customer during the planning game improved the value of the research project. Even outside the context of a programming assignment, a planning game with a customer role is recommended for other research projects as a highly effective focusing tool.

The simple design practice was accepted with skepticism, as poorly conceived numerical analysis algorithms can be prohibitively time consuming to run. Past experience with procedural algorithms suggests that performance issues need to be planned up-front. The approach of the present team was to include performance measures in the acceptance tests to flag excessive execution times, and then to forge ahead with the simplest design until the performance limits were exceeded. Once a performance issue was encountered, a profiler was used to target refactorings that would speed the algorithms just enough to pass the performance criteria. The speed bottlenecks were not always intuitive, and it became evident that premature optimization would have wasted effort on areas that were not the choke points while still missing the eventual culprits (cf. [32]). Also, the adherence to simple designs made the identification and rectification of the bottlenecks easier at the appropriate time.
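A performance measure in an acceptance test can be sketched in a few lines of Ruby using the standard Benchmark module. This is a hypothetical illustration of the idea described above, not code from the project: the guard fails only when a representative run exceeds a stated time budget, so optimization is deferred until the guard actually trips.

```ruby
require 'benchmark'

# Hypothetical performance guard for an acceptance test: run the given
# block and raise only if it exceeds its time budget (in seconds).
def assert_fast_enough(budget_seconds)
  elapsed = Benchmark.realtime { yield }
  if elapsed > budget_seconds
    raise "performance budget exceeded: #{elapsed.round(3)}s > #{budget_seconds}s"
  end
  elapsed
end

# Example: the representative case here is a stand-in computation.
assert_fast_enough(5.0) { (1..10_000).reduce(:+) }
```

When such a guard trips, a profiler, rather than intuition, identifies which refactoring to make, consistent with the team's observation that the real bottlenecks were not the ones premature optimization would have targeted.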
As discussed in Sect. 2, pair programming appeared to be a poor fit, and the small team would suffer from not being able to rotate pairs. However, productivity gains were in fact achieved through pairing. The pair pressure effect led to intensive sessions that discouraged corner cutting, and the constant code review produced much cleaner, more readable code that was much easier to modify and extend. Also, even though each developer had over 23 years of programming experience, there was still some cross-fertilization of tricks and tips that accelerated individual coding rates.

The collective code ownership practice was counter to the established practice at this research center and conflicted with the promotion criteria. The present team agreed to collective code ownership and did not experience any problems. The long-term impact of not having sole code ownership with regard to promotion potential is not yet known.

The research environment is different from a programming shop, in that other activities occupy most of a person’s time, and the present effort found only about 10 hours per week for pair programming, instead of the nominal 40 hours of the 40-hour week practice. New functionality was always added during joint sessions in a test-driven format. With the pair-created tests serving as a safety net, solo
Table 1. Work effort for two-week iterations, over two release cycles

  Iteration          1.1  1.2  1.3  2.1  2.2    2.3  Total
  Estimated hours     19   14   15    8   17     29    102
  Actual hours        22    8    8    8   30     18     94
  Velocity             1    2    2    1  1/2  1 1/2      1
refactoring was permitted to increase the rate of progress. Also, disposable spikes and framework support were occasionally conducted solo.

Unit testing was not commonly done prior to the present effort, and it was not clear what to test, in particular the appropriate granularity for tests. Four levels of fully automated testing were implemented. Unit tests using an xUnit framework were written for each class, and the collection of all unit tests, along with an instantiation of the algorithms devoid of the user interface, was run as the integration test, completing in a matter of seconds. Smoke tests, running in under a minute, exercised complete paths through the software including the user interface. Full stress tests, taking hours, included acceptance tests, performance monitoring, distributed processing, and numerical proofs of the algorithms for properties such as positivity and order of accuracy. All levels of testing could be initiated at any time, and all were executed automatically each night.

A search for a system metaphor was conducted for a time, and eventually the naive metaphor was selected as no other analogy seemed suitable. The naive metaphor worked well, as both the customer and developer spoke the same jargon, being the same people. Continuous integration was addressed by assembling a dedicated integration machine and by crafting scripts to automate development and testing tasks. The planning game and simple design helped pare implementations down to small chunks suitable to frequent integration.
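The "run any level at any time" property of the four-level suite can be sketched with a small registry of named checks per level. This is a hypothetical Ruby illustration of the layout, not the project's actual harness; the test names and checks are invented:

```ruby
# Hypothetical sketch of a multi-level suite: each level (:unit,
# :integration, :smoke, :stress) is a list of named checks, so any
# subset of levels can be run on demand or nightly.
SUITES = Hash.new { |h, k| h[k] = [] }

def test(level, name, &block)
  SUITES[level] << [name, block]
end

def run(*levels)
  failures = []
  levels.each do |level|
    SUITES[level].each do |name, block|
      failures << name unless block.call
    end
  end
  failures
end

# Illustrative entries only; real checks would exercise the classes,
# the full code paths, and the numerical properties described above.
test(:unit,  "flux contributions telescope to zero") { [1, -1].sum == 0 }
test(:smoke, "end-to-end path runs")                 { true }
```

A nightly job would then be a single call such as `run(:unit, :integration, :smoke, :stress)`, mirroring the seconds-to-hours spectrum described in the text.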
5 Results
The pilot project consisted of two release cycles, each subdivided into three two-week iterations, for a total project length of 12 weeks. The estimated and actual time spent working on stories and tasks for each iteration is listed in Table 1. The times reported do not include the time spent on the planning game; typical planning games at the start of each iteration lasted two hours. The overall average velocity for the project was about one, and the average time per week spent on development was about eight hours. The team produced 2,545 lines of Ruby code, for an average of 27 lines per hour (the productivity of a pair, not an individual). A breakdown of the types of code written shows that the average pair output was the implementation of one method with an associated test containing six asserts every 45 minutes. This productivity includes design and is for fully integrated, refactored, tested, and debugged code. Prior performance by the team on similarly scoped projects not developed using XP has shown an average productivity of 12 lines per hour, or
24 lines per hour for two workers. However, this historical productivity is for integrated, but neither tested nor debugged, code. Further, a subjective assessment of code clarity shows a strong preference for the pair-developed code. Of the total software written, 912 lines were production code, 1,135 lines were test code, and 498 lines were testing scripts and development utilities. The production code contains 120 method definitions, exclusive of attribute accessor methods. The automated test code, both unit and acceptance, contains 128 specific tests implementing 580 assertions. A prior, non-XP project by the present team implementing comparable functionality required 2,144 lines of code, approximately twice as large as the current production code. The reduction in lines of code per functionality is attributable primarily to merciless refactoring and secondarily to the continuous code review inherent to pair programming.
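The reported figures are internally consistent; a quick arithmetic check, with all numbers taken from the text and Table 1:

```ruby
# Production + test + scripts/utilities should reproduce the 2,545-line
# total, and dividing by the 94 actual hours from Table 1 gives the
# quoted rate of roughly 27 lines per pair-hour.
production  = 912
tests       = 1135
utilities   = 498
total_lines = production + tests + utilities  # 2545
hours       = 94
rate        = total_lines.to_f / hours        # roughly 27 lines per pair-hour
```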
6 Conclusions
Despite counter-indicators suggesting XP would be at odds with the existing research software development culture, and despite the initial awkwardness of several of the practices, an XP spike project was successfully implemented. Attention to the XP rules, blind trust in the XP values, and diligent role playing of the customer and developer parts were key to this success. The conscious and deliberate separation of the customer role from the developer, even when embodied by the same individual, was found to provide a benefit to the research project in general, beyond the scope of the software development. This benefit was manifested as a focusing of the research effort on tangible, targeted goals.

The team consisted of two people, and undoubtedly missed the benefits XP brings through pair rotation. While not preferred, it was found that some refactoring could be safely performed solo when supported by sufficient automated testing. This compromise was necessitated by the realities of conflicting schedules with a part-time work commitment. For the most part, the initial cultural counter-indicators were found not to preclude the use of XP in the research context, although the long-term impact on promotions and prestige due to the lack of clear code ownership is not known. It is anticipated that the more prolific research output enabled by XP will more than compensate for any loss of prestige in the field from forgoing single code ownership.

The results of the present study indicate that the XP approach to software development is approximately twice as productive as similar historical projects undertaken by members of the team. This study implemented functionality at the historical rate, but also supplied an equal amount of supporting tests, which are critical to the scientific validity of the research effort, and which were not included in the historical productivity rates.
Further, the functional code base is about half the lines of code as would be expected from past experience, and the readability of the code is considered to be much improved. Continual refactoring, emergent design, and constant code review as provided by XP are largely responsible for the improved code aesthetics.
References

1. Beck, K.: Extreme Programming Explained: Embrace Change. Addison-Wesley (2000)
2. Workforce Compensation and Performance Service: Research grade evaluation guide. Transmittal Sheet TS-23, Office of Personnel Management, Washington, DC (1976). Also available as http://www.opm.gov/fedclass/gsresch.pdf
3. Alexander, C., Ishikawa, S., Silverstein, M.: A Pattern Language: Towns · Buildings · Construction. Center for Environmental Structure. Oxford University Press (1977)
4. Beck, K., Fowler, M.: Planning Extreme Programming. XP. Addison-Wesley (2001)
5. Jeffries, R., Anderson, A., Hendrickson, C.: Extreme Programming Installed. XP. Addison-Wesley (2001)
6. Succi, G., Marchesi, M., eds.: Extreme Programming Examined. XP. Addison-Wesley (2001)
7. Newkirk, J., Martin, R.C.: Extreme Programming in Practice. XP. Addison-Wesley (2001)
8. Wake, W.C.: Extreme Programming Explored. XP. Addison-Wesley (2002)
9. Auer, K., Miller, R.: Extreme Programming Applied: Playing to Win. XP. Addison-Wesley (2002)
10. Fowler, M.: Refactoring: Improving the Design of Existing Code. Addison-Wesley (1999)
11. Yourdon, E.: Death March: Managing “Mission Impossible” Projects. Prentice-Hall (1997)
12. Brooks, Jr., F.P.: The Mythical Man-Month: Essays on Software Engineering. Anniversary edn. Addison-Wesley (1995)
13. Kernighan, B.W., Pike, R.: The Practice of Programming. Addison-Wesley (1999)
14. Cockburn, A.: Surviving Object-Oriented Projects. Addison-Wesley (1998)
15. Fowler, M.: UML Distilled: A Brief Guide to the Standard Object Modeling Language. Object Technology. Addison-Wesley (2000)
16. Booch, G.: Object Solutions: Managing the Object-Oriented Project. Object-Oriented Software Engineering. Addison-Wesley (1996)
17. Booch, G.: Object Oriented Design with Applications. Ada and Software Engineering. Benjamin/Cummings (1991)
18. Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns: Elements of Reusable Object-Oriented Software. Professional Computing. Addison-Wesley (1994)
19. Meyer, B.: Object-Oriented Software Construction. 2nd edn. Prentice-Hall (1997)
20. Hunt, A., Thomas, D.: The Pragmatic Programmer: From Journeyman to Master. Addison-Wesley (1999)
21. DeMarco, T., Lister, T.R.: Peopleware: Productive Projects and Teams. 2nd edn. Dorset House (1999)
22. Highsmith, III, J.A.: Adaptive Software Development: A Collaborative Approach to Managing Complex Systems. Dorset House (2000)
23. Machiavelli, N.: The Prince. Bantam classic edn. Bantam Books (1513)
24. Gabriel, R.P., Goldman, R.: Mob software: The erotic life of code. http://oopsla.acm.org/oopsla2k/postconf/Gabriel.pdf (2000). ACM Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA)
25. http://www.c2.com/cgi/wiki?ExtremeProgramming (2000)
26. http://www.xprogramming.com/ (2000)
27. http://www.extremeprogramming.org/ (2000)
28. Matsumoto, Y.: Ruby in a Nutshell: A Desktop Quick Reference. O’Reilly & Associates (2002)
29. Thomas, D., Hunt, A.: Programming Ruby: The Pragmatic Programmer’s Guide. Addison-Wesley (2001)
30. Kleb, W.L., Wood, W.A., van Leer, B.: Efficient multi-stage time marching for viscous flows via local preconditioning. AIAA Paper 99–3267 (1999)
31. Kleb, W.L.: Optimizing Runge-Kutta Schemes for Viscous Flow. PhD thesis, University of Michigan (2003). In preparation
32. Goldratt, E.M., Cox, J.: The Goal: A Process of Ongoing Improvement. 2nd edn. North River Press (1992)
Biographies

Bill Wood: PhD, Virginia Tech, Aerospace Engineering. 1987–Present: NASA Langley Research Center. Currently in the Aerothermodynamics Branch.

Bil Kleb: PhD Candidate, University of Michigan, Aerospace Engineering. 1988–Present: NASA Langley Research Center. Currently in the Aerothermodynamics Branch.
Tailoring XP for Large System Mission Critical Software Development

Jason Bowers, John May, Erik Melander, Matthew Baarman, and Azeem Ayoob

Motorola, Commercial Government and Industrial Solutions Sector, Private Radio Networks Engineering
JasonBowers@motorola.com
Abstract. A plethora of subjective evidence exists to support the use of agile development methods on non-life-critical software projects. Until recently, Extreme Programming and Agile Methods have been sparsely applied to Mission Critical software products. This paper gives some objective evidence, through our experiences, that agile methods can be applied to life critical systems. This paper describes a Large System Mission Critical software project developed using an agile methodology. The paper discusses our development process through some of the key components of Extreme Programming (XP).
1 Introduction
We develop "soft real-time" software for an infrastructure product for public safety communication systems. Our group has been using an 8-year-old common process that fits well into the business needs of our organization. The business needs drive the process to move relatively slowly and produce a product with extremely high quality. We are currently rated at SEI-CMM level 3. The infrastructure product is 10 years old and consists of over a million lines of, primarily, C code. The latest release required significant changes to one of the oldest, most complex architectural components in the box. We took this opportunity to redesign this component (over 88 staff-months) using object-oriented techniques and integrating the C++ implementation with the legacy C code. Our team had varying degrees of familiarity with OOD and C++. We decided to use an iterative life cycle to mitigate risks with early, frequent feedback. The team caught the XP buzz and adopted it as a coherent set of best practices for iterative development.

Our team was just one of several teams “piloting” XP in our division of Motorola. We use the word pilot carefully here because we cannot say we used every bit of the standard XP – right out of the book. From our point of view, we took XP, adopted some practices, dropped others, and supplemented others with practices from the world of the heavyweights. An outsider could easily interpret this process as a CMM-based process with some of the XP practices added to it. It is not our objective in this paper to discuss the marriage between CMM and XP.
D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 100–111, 2002. © Springer-Verlag Berlin Heidelberg 2002
This paper will relate what we saw as the advantages and disadvantages of our tailored version of XP as applied to our project. The paper also highlights how we diverged from the standard XP as defined in “Extreme Programming Explained” by Kent Beck [1].
2 Small Releases
The goal of small releases is to deliver working software sooner by breaking the release into small increments. These increments, or releases, should be made as small as possible without breaking up a feature [1, 56].

2.1 Divergence
The redesigned software component had fifteen different features. Thus, our project was broken into fifteen milestones, with three milestones lined up simultaneously to form a single increment. See Fig. 1 for an illustration of the first six milestones.
Fig. 1. Our Small Releases
The features were defined by more requirements than could be implemented in a two-week release, as XP suggests. We chose to extend the duration of our releases instead of breaking up the features. Our releases were extended to three months, mainly due to the fear of iterating through individual requirements of a feature. In essence, we were afraid to be agile within the scope of one feature. The team was accustomed to releases that spanned one year or more; as a result, a move to three-month development cycles was an extreme jump. If we were to plan a project now, we would be less afraid to break a feature up into requirements for smaller one-to-two-week releases. We also chose a longer milestone length because the three-month duration aligned each release with System Integration and Testing (SIT) delivery schedules. In our project, two releases were completed before the first SIT load delivery. Each subsequent release only contained the features needed in the next SIT load.

2.2 Pros
The smaller releases provided the opportunity to alleviate development risks much earlier than the traditional waterfall lifecycle. Our first few milestones focused on proving the feasibility of a new compiler and the C++ Standard Template Library
(STL) while implementing a limited feature set. The smaller releases allowed us to learn these new areas faster and gave us confidence in the compiler and the STL very early in the project.

2.3 Cons
Narrowing our focus to a single feature did not allow enough time to consider all feature interactions. Many defects were traced back to feature interactions that were not thoroughly investigated. Developing software incrementally, using small releases, shifted the most complex design tasks toward the end of the project. In the early releases, many feature interactions had not yet been introduced; in the later releases, all feature interactions and exception cases had to be considered. At this point in the project, the team had to balance defect repair and requirements investigation, and in our experience the requirements investigation often suffered. In hindsight, we realize how important a dedicated customer, who can focus on these feature interactions, is to a project using small releases.
3 Continuous Integration
Continuous integration assures developers that a common, tested baseline exists on a frequent basis. XP suggests an integration scheme where developers can integrate frequently and of their own free will.

3.1 Divergence
The illustration in Fig. 2 shows our branching scheme and merging process to foster parallel development.
Fig. 2. Our configuration management scheme to allow parallel development
Continuous integration had a few different connotations in our process. First, each milestone branch was “integrated” daily, sometimes hourly. This integration simply consisted of a pair checking in a file. The integration was not complete until the unit test suite was completely passing on the milestone branch. Second, each of the three milestone branches was integrated, or merged, to the sub-mainline every couple of days. The merging had to be completed one milestone at a time. It may sound complex, but in practice it was pretty smooth. Our “build monkey” set up scripts that performed these integrations at the push of a button. Finally, the sub-mainline branch was integrated to mainline less often. Actually, as we were building up the initial code baseline, we were integrating to mainline every day or two. These merges were also automated with buttons on our configuration management and version control tool.

As the project progressed, integrations to the mainline were taking place less often because the rest of our product team caught up with us in their coding phase. At this time, the Change Control Board (CCB) began exercising its power to control what changes were integrated. The CCB is clearly a non-agile component of our process. The board met weekly to plan the next build; therefore, integrations took place weekly. Besides the obvious managing and limiting of code changes, the purpose of the CCB is to make sure a change request is ready to be built into the product. Our full product builds take a full day (6–8 hours) to complete. As a result, it is very important to make sure a change will not affect the build.

3.2 Pros
The introduction of the CCB forced the team to slow down. At times, we felt like we were flying through each task. Sometimes, in our environment, it is better to slow down and think about the effects of a change. For example, the extra time gave us an opportunity to further test a private build.

3.3 Cons
The CCB clashed with many of the XP principles. XP recommends frequent integrations for a reason: the staler a branch got, the more difficult the merge became. For example, the CCB postponed integration of a "below the line" defect fix. Meanwhile, significant changes were made to the code before the fix was approved. By that time, the file version with the defect fix was very different from the mainline version. In this situation, the merge was non-trivial and had to be done manually.
4 Pair Programming
Historically, programming has been treated as an individual activity, on the grounds that there is only one keyboard and one mouse. This reasoning is a fallacy: the work being done when programming is much more than typing and using the mouse [3].
104
Jason Bowers et al.
4.1 Divergence
In the beginning of our project, we were highly disciplined pair programmers. We began by pairing for all development activities: design, writing unit tests, and coding. We skipped the formal technical reviews (FTRs), which were mandated by the CMM-based process, and added informal reviews of the design to supplement the pair designing activities. As the project progressed, we paired less often for a number of reasons: less need for mentoring, schedule pressure, office ergonomics, responsibilities on other projects, difficulty finding a pair, management displeasure with the idea of pair programming without formal reviews, and a new corporate FTR initiative. These are nothing but excuses. We are quick to admit that pair programming is an essential component of an XP-like development process. Our pair programming discipline descended a slippery slope. We transitioned into a process where pairing was mandated only when writing code; test cases were written in pairs if possible. Soon each developer became busy with other things, and pairing on the code was happening less often as well. Eventually we fell into a process where pairing on code and tests occurred, at a minimum, when making risky changes. We instituted formal reviews of test cases and code because of pressure from our formal-review-centered organization, defects missed in pair programming, and the decline in pair programming itself. We began reviewing test cases by looking at descriptions of the tests (this tied the unit tests to higher-level box testing scenarios) rather than the actual code of the tests. We realized this was filling holes in the test suite but was not improving the existing tests. Thus, the bodies of the test cases were inspected as well.

4.2 Pros
When we formally reviewed test cases, we found missing tests and ways to improve the test suite through better assertions, helper test classes, and helper test methods. We have found that the discussions that take place in reviews, involving developers with many different points of view, are very valuable to the development process. Through pair programming, we spread knowledge of the Standard Template Library (STL) and OOD patterns. In doing so, we were able to re-use STL implementations and design patterns throughout our code. We certainly saw no decrease in productivity because of pair programming. Fig. 3 shows the increase in productivity from example projects representing a waterfall lifecycle (Projects 1, 2, and 3), to an iterative OO lifecycle (Project 4), and finally four XP pilot projects (Projects 5-8). Productivity is measured in KAELOC per staff month. Our project produced the highest productivity numbers of them all.

4.3 Cons
The likelihood of pair programming eliminating all mistakes during coding is small. We feel that, in our environment, formal code and test case reviews can complement pair programming.
Fig. 3. Productivity in KAELOC/Staff Month
In our complex environment, it is not a safe assumption that a pair of people in the group will be able to consider the effects on the entire system. We work in a group where experts and feature owners emerge, and this attitude tends to prevail over common knowledge across the team. Our dynamic pairing scheme projected a feeling of collective ownership. However, one could usually associate a feature (though not classes or methods) with a particular developer or pair. Sometimes it is difficult to find a pair. If you are required to code with a pair, what happens when you cannot find someone? Under pure XP you would not be able to code without a pair. We ran into this problem many times. To handle it, we balanced pair programming and solo programming against the level of risk involved in the change being made. Our office is a sea of cubicles. The ergonomics of a cube are ideal for an individual programmer but far from ideal for a pair. We attempted to set up our cubes in a pair-friendly manner by moving the keyboard, mouse, and monitor to a corner of the cube. This setup provided a better environment for pair programming, but it was far from perfect. A large room with no dividers between each pair would have been nice, but the space limitations (at the time) would not allow it.
5 Simple Design
The goal of simple design is to design only for the requirement(s) of the current milestone. By doing so, developers ignore whatever impact future milestones may have on the current software design.
5.1 Divergence
Our team did not design for one specific test case, implement it, and then move on to the next test case, as is outlined in standard XP. When designing, we took into consideration the full set of use cases and scenarios that composed the current milestone. We did not "design for future milestones," but we did develop a detailed design model that we felt sufficiently covered all outlined scenarios. Doing this was possible, in part, due to the static nature of requirements in our environment. Typically, our projects are based on contracts between the business team and a single customer. The requirements rarely change because of the diligent creation of these contracts.

5.2 Pros
The extra time spent designing for the entire 3-month milestone gave us more confidence going into the implementation phases. We were able to design for some of the feature interactions appearing across the concurrent milestones.

5.3 Cons
Had we spent more time in the design phase, we might have better identified high-risk areas of code during design discussions. This may seem counter to XP principles. Since our project focused on redesigning one side of a well-defined interface, however, the assumptions and dependencies across that interface were not sufficiently explored. These dependencies were not understood until the entire application was built.
6 The Planning Game
The goal of the planning game is to maximize the value of the delivered software by prioritizing the functionality. As defined in standard XP, ideally the actual user provides priorities for high-level functionality. The developers then refine the high-level functionality descriptions into a detailed implementation plan.

6.1 Divergence
We did not have the luxury of a business team or customer. Our business team was a system design group to which we rarely had access. Our customer was a system expert with limited time to offer to our project. Each developer played the role of the customer by learning as many requirements as possible during each milestone. Team members did not choose their assignments; the project manager selected appropriate developers for certain tasks. Prior to the planning game, use cases were completed and reviewed in place of user stories. Each person was responsible for estimating the effort for each task. The tasks, along with their weights, were individually written on 3x5 work cards. The cards were based on specific implementation tasks, and the relative cost of change determined the weight. Our weighting scheme is described in Table 1.
Table 1. Task weighting guidelines
Weight   Task
1        New test case only
3        Logic change and accessor methods
5        New methods or interface change
7        Class collaboration change
9        New class
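Using the weights in Table 1, the velocity bookkeeping described later in this section (summing the weights of the work cards completed in a week) might be sketched as follows. The class and method names here are ours, not the team's; this is an illustrative sketch, not their actual tooling.

```java
import java.util.List;

// Minimal sketch of the planning-game bookkeeping: each completed 3x5 work
// card carries a weight from Table 1, and a week's velocity is simply the
// sum of the weights of the cards turned in that week.
public class Velocity {

    // Sum the Table 1 weights of the work cards completed in one week.
    static int weeklyVelocity(List<Integer> completedCardWeights) {
        int total = 0;
        for (int w : completedCardWeights) {
            total += w;
        }
        return total;
    }

    public static void main(String[] args) {
        // One hypothetical week: a new test case (1), a logic change (3),
        // an interface change (5), a class collaboration change (7),
        // and a new class (9).
        System.out.println(weeklyVelocity(List.of(1, 3, 5, 7, 9)));  // 25
    }
}
```

Dividing a milestone's total remaining weight by a running average of this weekly number gives the "Yesterday's Weather" schedule estimate the project manager used.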
Fig. 4. Weekly velocity (velocity plotted per week for weeks 1-11; the plotted values appear to be 102, 67, 53, 67, 59, 50, 47, 47, 45, 40, and 33)
Upon completion, work cards were placed in an empty James Brown compact disc case hanging in the project manager's office. The activity of handing over completed work cards gave developers a feeling of accomplishment on a regular basis. The team's velocity was calculated by summing the weights from every completed work card. The velocity was used as a guideline for how productive we were in any given week. Fig. 4 shows the velocity we had during each week of one milestone. During this milestone, we created 12 new classes and eliminated 5 classes. The average was 54.5 points per week, or 9 per developer per week. The project manager used these numbers ("Yesterday's Weather") as an estimate of how long a particular milestone would take to complete.

6.2 Pros
The planning game provided a stage on which to clearly define functionality and schedule responsibility for each team member. The exercise gave each team member a clear focus on the milestone's goals and priorities. It also allowed the team to estimate the amount of work for each detailed task; by measuring the work necessary for each task, we were able to evaluate a reasonable workload for each team member. In addition, the planning game provided a stage for the team to share initial design and implementation ideas.
6.3 Cons
In the beginning, we did not have a baseline for measuring each task. Hence, it was difficult to determine how much could be done in the duration of a milestone. In addition, it was difficult to adjust the work effort based on previous milestones. For example, if we needed to make a change that was similar to a previous change, the weight was not adjusted, even though the work put forth for the previous change made things easier the second time around. As a result, the team may feel that it can move faster than before for all types of changes, when it was only able to move faster because of previous work. The single most significant flaw in our planning game was the absence of a dedicated customer or business team. With each developer playing the role of the customer, we were able to catch 80-90 percent of the requirements of each milestone. The remaining 10-20 percent of requirements were discovered during implementation or testing. We highly recommend a dedicated customer, since mixing the roles creates a conflict of interest between the developer and the customer: the developer wants to be done and move on, while the customer is more focused on certifying a milestone for acceptance. If a dedicated customer role is not an option, the majority of requirements should be established before beginning the first iteration.
7 Refactoring
Refactoring encompasses any change to the system that leaves its behavior unchanged but enhances some nonfunctional quality (e.g., simplicity, flexibility, understandability, or performance).

7.1 Divergence
The main difference in our refactoring methods was the degree to which they were controlled. Each developer was encouraged to think of refactoring ideas but to ask permission before implementing them. Our "courage" for refactoring changes was taken away by managers and the Change Control Board to minimize risk to the product release. This control increased in intensity as the final release grew closer (and as refactorings occasionally broke a previously working feature). To help manage the refactoring work, the team kept a "refactoring wish list" on a whiteboard in a public area. When conditions allowed, developers would take on refactoring work identified in the list. Refactoring jobs come in both large and small sizes. Large refactorings were often managed in their own smaller milestones, while very small jobs could sometimes be completed in conjunction with a defect repair. The CCB was always there to approve the change based on the size and risk of the job. The CCB would classify some of its weekly builds as "restricted" when the build could not afford any additional risk (e.g., just prior to a release turnover). Other weekly builds were classified as "anything goes" and open to refactoring changes. These builds were completed far enough in advance of a turnover to allow adequate box-level regression testing to mitigate any risks introduced by refactoring changes.
7.2 Pros
Because we were encouraged to look for opportunities to simplify and otherwise improve the code base, the system evolved without becoming too unwieldy. Since the test cases in the regression suite act as an "insurance policy" against refactoring changes, refactoring positively reinforced the need to write high-quality unit test cases and to supplement them with box testing.

7.3 Cons
The desire to encourage refactoring changes clashed with the CCB's desire to minimize changes to the code base. Groups that have always used waterfall life-cycles are quick to label the develop-extend-refactor activities of XP as "hacking." In a culture that encourages a "get it right the first time" approach to development, the need to refactor is seen as a failure of the process. Large refactorings created some significant defects on our project. These defects were mainly due to extreme feature interaction in our design. The flaws in the refactored design were not exposed by the unit test suite, during pair programming, or during the formal review of the code or test cases. We relied on our test suite too heavily at times. It is very difficult to write a complete test suite; therefore, there is a risk that changes will create defects that go undetected. There is seemingly no limit to the refactoring opportunities presented during a project, and refactoring changes became more common in the later stages of ours. For this reason, it is critical that refactoring work be managed with the same scrutiny as feature development.
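The "insurance policy" role that the regression suite plays during refactoring can be sketched with a hypothetical example (the method and its domain are invented for illustration): the tests pin down observable behavior, so a refactoring that changes structure but not behavior must leave them passing.

```java
// Hypothetical illustration of a behavior-preserving refactoring guarded by
// regression tests: the "before" and "after" versions must agree on every
// input, or the suite flags the refactoring as having changed behavior.
public class RefactoringInsurance {

    // Before: duplicated, harder-to-read range checks.
    static boolean inServiceWindowBefore(int hour) {
        if (hour >= 8 && hour <= 11) return true;
        if (hour >= 13 && hour <= 17) return true;
        return false;
    }

    // After: same behavior, simpler expression.
    static boolean inServiceWindow(int hour) {
        return (hour >= 8 && hour <= 11) || (hour >= 13 && hour <= 17);
    }

    public static void main(String[] args) {
        // The "insurance policy": every input must behave exactly as before.
        for (int h = 0; h < 24; h++) {
            if (inServiceWindowBefore(h) != inServiceWindow(h)) {
                throw new AssertionError("behavior changed at hour " + h);
            }
        }
        System.out.println("refactoring preserved behavior");
    }
}
```

The cons above are precisely the cases this sketch cannot show: defects from feature interactions that no existing test exercised.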
8 Testing
One of the main goals of automated regression unit testing is to decrease the risk of introducing defects through source code changes.

8.1 Divergence
Of the principles and behaviors outlined in XP, automated regression testing is the one our group has most passionately adopted. We have found its inclusion to be critical when refactoring. Our group has relied on our regression test suite to uncover new and existing bugs when refactoring, since devoting time to comprehensive box testing after minor refactoring changes is often unrealistic. The team as a whole was not very disciplined when it came to test-first coding of new code. While everyone saw the value of putting on the unit-test hat before writing any code, team members had varying degrees of test-first compliance. The test-first protocol the team mandated for new code was to write test case descriptions and titles, at a minimum, before writing the code. All bug fixes followed the test-first process as defined in standard XP: a unit test was written to reproduce the defect, it failed, and then the code was written to fix the defect. Sometimes this was done even with new code; it just depended on who you were pairing with that day.
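The defect-fix protocol just described can be sketched as follows. All names and the defect itself are invented, and plain assertions stand in for the team's actual unit-testing framework, so this is a sketch of the protocol rather than their code.

```java
// Sketch of the test-first defect-fix protocol: first write a test that
// reproduces the defect and watch it fail, then change the code until it
// passes.
public class DefectFixExample {

    // Buggy version: integer division truncates the average.
    static double averageBuggy(int a, int b) {
        return (a + b) / 2;          // 3 and 4 give 3.0, not 3.5
    }

    // Fixed version, written only after the reproducing test had failed.
    static double average(int a, int b) {
        return (a + b) / 2.0;
    }

    public static void main(String[] args) {
        // Step 1: the reproducing test fails against the buggy code.
        boolean reproduced = averageBuggy(3, 4) != 3.5;
        // Step 2: the same expectation passes against the fixed code.
        boolean fixed = average(3, 4) == 3.5;
        System.out.println(reproduced && fixed);  // true
    }
}
```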
After all was said and done, we ended up with over 1,500 automated unit test cases. All 1,500 tests were executed after every change, and execution of the entire suite completed in a short time frame (about one minute).

8.2 Pros
The automated regression test suite quickly exposed some of the bugs introduced during refactoring. The suite gave the developers confidence that a change did not break existing functionality. This confidence promoted faster development and reduced the risks associated with refactoring. Watching the tests pass is an enjoyable, uplifting, addicting experience. At times we would run the suite over and over just to see the successful completion of all of the tests.

8.3 Cons
The confidence inherent in the automated unit test suite was a double-edged sword for our team. For example, we occasionally found bugs that should have been caught by the unit test suite, but the relevant test was missing or poorly written. Since a test suite is only as good as the tests included in it, and tests are only as good as the tester who writes them, a test suite is never perfect. As a result, it is the developer's and the project leader's responsibility to strike a balance between code reviews, automated regression tests, and box tests (our acceptance tests). As the regression test suite grows, so does the effect of a code change on the test suite. For example, some interface changes led to hundreds of lines of test code modifications. Unfortunately, the fear of major test suite changes would intermittently creep into some of our design decisions. The absence of an existing regression test suite that tested the dependencies of every legacy interface was a key error on our part. The majority of the defects we found were related to dependencies embedded within the legacy code. Had we created a suite that tested the dependencies of this interface, any incorrect assumptions we made would have been exposed immediately.
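One way to blunt the ripple effect of interface changes on a large test suite is the "helper test class" idea mentioned in Sect. 4.2: tests construct objects only through a fixture helper, so an interface change is absorbed in one place. The classes below are invented for illustration; the paper does not show the team's helpers.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a helper test class: tests build objects only
// through the fixture, so when a constructor changes, only the fixture
// changes instead of hundreds of lines of test code.
public class TestHelperExample {

    // Production class whose constructor recently grew a required argument.
    static class Order {
        final String customer;
        final String currency;   // newly added required argument
        Order(String customer, String currency) {
            this.customer = customer;
            this.currency = currency;
        }
    }

    // Helper test class: the single place that knows how to build an Order.
    static class OrderFixture {
        static Order defaultOrder(String customer) {
            // When the constructor changed, only this line changed.
            return new Order(customer, "USD");
        }
    }

    public static void main(String[] args) {
        // Dozens of tests call the helper; none mention "currency" directly.
        List<Order> orders = new ArrayList<>();
        orders.add(OrderFixture.defaultOrder("alpha"));
        orders.add(OrderFixture.defaultOrder("beta"));
        System.out.println(orders.size());  // 2
    }
}
```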
9 Key Lessons Learned
- In a mission critical product development culture, management is reluctant to allow pair programming without formal reviews.
- The XP-defined customer role is crucial. Mixing developers, managers, and coaches as the customer is not recommended.
- There are many benefits to completely developing 80-90% of the full set of requirements prior to beginning the first milestone.
- The unit test suite cannot be relied upon solely to counteract the negative effects of refactoring. Automated box-level (acceptance) testing, expert blitz testing, and FTRs are valuable quality assurance techniques as well.
- In a redesign, the unit test suite should be created to test all existing requirements of the old code. It should then be used as the minimum acceptance tests for the redesigned code.
10 Conclusion

We feel very positive about the future of agile processes in our organization. Despite our "XP rookie mistakes," the quality of our product was well within expectations. In the project post-mortem, very few defects could be traced directly to XP practices. However, many defects could be traced to the uneven execution of the practices. See Fig. 5 for an illustration of the defect density of our project. Defect density is defined as the number of SIT (System Integration and Test) found defects per KAELOC.
Fig. 5. The density of defects that were discovered in Systems Integration and Testing (SIT-found defects per KAELOC; the plotted values appear to be Division Average 0.0659, Project 1 0.0325, Project 2 0.0289, Our Project 0.0172)
The division average is calculated from many different projects developed in multiple languages. Both Project 1 and Project 2 were XP projects developed with an object-oriented language. The process needed tailoring along the way to fit our mission critical customer's needs. There are two ways to view our tailoring efforts. In one sense, we "added a few pounds" to a lightweight process to better meet our business needs. Alternately, we attempted to pull some XP practices into our heavyweight culture to improve our agility. The bottom line is that we feel we have found some middle ground that we intend to build upon.
References

1. Beck, Kent: Extreme Programming Explained. Addison-Wesley Publishing Company (1999)
2. Grenning, James: Using XP in a Big Process Company: A Report from the Field. XP Agile Universe (2001)
3. Williams, Laurie: http://www.pairprogramming.com, http://collaboration.csc.ncsu.edu/laurie
Acceptance Testing HTML

Narti Kitiyakara

NOLA Computer Services, Inc., New Orleans, LA
nkitiyakara@nolacom.com
Abstract. NOLA Computer Services, Inc. has been conducting an XP project since late 2000. In that time it has experimented with many tools and techniques for acceptance testing. This paper discusses the relative costs and benefits that we've found in using manually executed tests, a commercial testing tool, and hand-coded Java tests. It concludes with a discussion of Avignon, an XML-based, extensible scripting language developed in-house that allows the customer to specify acceptance tests in advance, at a high level, with relative ease.
1 Project Background
In late 2000, NOLA Computer Services, Inc. embarked on its first XP project with two developers and a project manager/customer. The goal was to develop a web-based J2EE application for commercial release. Over the next eighteen months, the project would grow to six developers and consist of over 650 Java classes, 80 Java Server Pages, and 35 database tables. Our initial process was based on Extreme Programming Explained [1], and the developers tried to follow all of the programmer practices, including pair programming, unit testing, iteration planning, etc. Automated acceptance testing, however, was to prove a troublesome issue.
2 Tools for Testing HTML
2.1 Manual Tests
When we first began the project, the customer prepared "stories" of up to thirty typed pages along with detailed acceptance tests. The plan was for the developers to use the tests to get detailed information about the story, while the customer would manually execute the tests to determine whether or not to approve the story. The Quality Assurance Department was also supposed to run the previous acceptance tests on a regular basis to make sure that adding the new story had broken no previous stories. The developers dutifully ran the acceptance tests by hand for each completed story, but it turned out that the tests were so detailed that it was easy to fail on trivial points that went unnoticed. It also turned out that some aspects of the manual acceptance tests were subject to differing interpretations. Thus the developers would legitimately feel that they'd passed an acceptance test, but the customer would say that they had not. Apart from that, neither the developers nor the customer nor the QA Department was running the previous tests on a regular basis. It was only by luck, therefore, that the development team would discover that modifying existing code had changed the previous functionality of the application. This state of affairs led NOLA to purchase a full-fledged acceptance-testing package.

D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 112-121, 2002. © Springer-Verlag Berlin Heidelberg 2002

2.2 Commercial Testing Tools
In order to alleviate the problems of manual acceptance testing, NOLA evaluated several commercial testing packages, including Segue Software's SilkTest, Empirix's eTester, and Compuware's QARun. All of the packages were rated on their applicability to projects being undertaken by NOLA (both the XP project and others), ease of use, ability to test multiple server and client environments, and cost. As a result of this evaluation, NOLA purchased Compuware's QARun [2]. Shortly after the evaluation of automated testing tools, two of the developers were sent to ObjectMentor's XP Immersion. It was there that Ron Jeffries impressed upon us the need for automated acceptance tests that the developers could run [3]. When we returned from the Immersion, we set about trying to convince our customer of the importance of automating the acceptance tests and giving the developers the ability to run them. Unfortunately, it turned out that the customer found the scripting facility in QARun was not amenable to writing the acceptance tests in advance.

2.3 HTTPUnit
The customer had never disputed that the acceptance tests should be automated, but still felt that they should be done after the fact by the QA Department. At this point, however, the QA Department was still not running the previous tests in a timely enough manner to warn the developers of impending problems, so the developers, on their own initiative, started using HTTPUnit [4], which they'd discovered perusing the XP web sites, to hand-code the acceptance tests. This proved quite effective at catching unintended changes to old code, but interpreting the customer's ideas of how to test the details of the visual components was difficult. Our customer has always been very concerned with the appearance of the application. HTTPUnit provides an API for examining the HTML output of an application, but we found it awkward to use for the level of detail that our customer wanted. It turned out, however, that HTTPUnit would also expose the generated HTML as an XML document, so we could use XPath [5] to make very fine-grained tests of the HTML without a lot of coding effort. Unfortunately, as the tests were being developed simultaneously with the code, they still did not help us maintain a consistent HTML coding for a given visual effect. They were also quite brittle, because the format of the HTML was compiled into the tests. Every change to the appearance of the application required laborious changes to the acceptance tests as well. Ironically, the very success the developers achieved in using the acceptance tests to prevent unintentional functionality changes helped hide their value from the customer. The most visible thing to the customer was the amount of time that it took to code the tests in Java. At that point, however, the developers were still the only group actually automating the tests, so the customer couldn't object too strongly. But there was still
the feeling that the developers should not be taking up valuable coding time with acceptance tests. For that matter, the developers tended to view it as a necessary evil. No one wanted to do away with automated tests that we could run, but they were tedious to write and there tended to be a lot of debate with the customer about what should be tested.
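The XPath approach described in Sect. 2.3 can be illustrated using only the JDK. HTTPUnit (which exposes a fetched page as a DOM) is omitted here, and the HTML snippet and the names in it are invented, so this is a sketch of the technique rather than the project's actual tests.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import org.w3c.dom.Document;

// Fine-grained HTML checks via XPath, without hand-walking the DOM.
public class XPathHtmlCheck {

    // Evaluate an XPath expression against a well-formed HTML string.
    static String evaluate(String html, String expression) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(
                        html.getBytes(StandardCharsets.UTF_8)));
        return XPathFactory.newInstance().newXPath().evaluate(expression, doc);
    }

    public static void main(String[] args) throws Exception {
        String html = "<html><body><h1>Accounts</h1>"
                + "<form><input type='text' name='account' value='12345'/></form>"
                + "</body></html>";

        // Assert on exactly the details the customer cares about.
        System.out.println(evaluate(html, "/html/body/h1"));                   // Accounts
        System.out.println(evaluate(html, "//input[@name='account']/@value")); // 12345
    }
}
```

The brittleness described above follows directly from this style: every XPath expression bakes the current HTML structure into the test.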
3 What Can Be Tested?
3.1 Everything
At first, having discovered the value of automated acceptance tests, nothing seemed too trivial to test. Not only were the correct results supposed to be put into the database, but almost every visual element was checked as well: not only did the correct results have to be displayed, but every formatting element had to be correct, too. This proved very time-consuming for the developers, and quite brittle as well. The acceptance tests generally needed a specific state to be set up in the application before they could be run, which forced the developers to write a lot of code to initialize the application state. The customer was also finding that even a small change in appearance required a fair amount of rework in the tests. Switching to a snapshot of the expected HTML would have alleviated this problem, but it would also have put us back in the position of having to have the application ready before the acceptance test could be written. The snapshot method of testing the HTML would also have led to even more brittleness in the tests, since the coded version could always choose how to interpret a visual element (ignoring non-visual elements of the HTML tags, such as id attributes, unless they were important in some way).

3.2 Functionality Only
When the cost of testing everything became prohibitive, NOLA scaled back testing to only what was necessary to the functioning of the application. Thus, on the HTML side, we tested only things like the name and value attributes of input tags, the references of anchor tags, and the basic text of the output. This took some of the burden off the developers, but did not relieve them of the problem of initializing the application state before each test. It also did nothing to solve a problem the customer had discovered even when "everything" was tested: visual consistency between the different HTML pages.

3.3 Visual Consistency
By this time, the project had six developers. It turned out that they could pair in fifteen different ways, and that each of these possibilities could render the same idea in a different way. Although the developers also wanted to maintain visual consistency for the customer, it was a very difficult task to achieve. The customer, rightly, did not want to specify too much detail about which HTML to use to achieve a certain visual effect, but different pairs ended up using different HTML to do so, and the
results were not always exactly the same. One pair might put a page title into an H1 tag, for example, while another might put it into a P tag with the same font size as the H1. This broke down when the customer wanted to define a default CSS style for page titles: the development team would have to go back and standardize, after the fact, on a common way of doing titles. Detailing the visual elements in an acceptance test before the story was started would have helped matters, but no one wanted to go back to the massive coding involved in testing everything. One proposed solution was to write a separate set of tests that checked the visual appearance of the application without trying to test functionality beyond what was necessary for generating the different pages. This would have helped the visual consistency issue, since it focused on one visual element at a time rather than confusing the matter with other tests, but it would still have been brittle and required a large amount of coding. Before embarking on this testing method, though, NOLA decided to try another approach.
4 Avignon – A Language for Acceptance Testing HTML Applications

4.1 Origins
One of the proposed solutions for dealing with the general problem of visual consistency was to allow the developers to generate XML that would be transformed into HTML by a standard XSL [6]. This would have removed the problem of deciding how to generate the HTML on a page-by-page basis, and allowed the developers to concentrate on the higher-level concepts of what should appear on the page. Unfortunately, the customers were concerned that XML/XSL was not, at the time, supported by enough platforms to allow them to meet the goal of making the application portable to a large number of environments. Later, however, one of the developers realized that we could still use this system in the acceptance-testing environment to actually run the tests. It could also remove the burden of coding the acceptance tests from the programmers by allowing the customer to make detailed, yet easy to change, specifications of what was expected.

4.2 Overview
Thus came about Avignon, a combination of an XML-based scripting language and a high-level page description language for executing acceptance tests against an HTML application. The customer also helps produce an XSL that converts the page descriptions into HTML. This ensures that the developers generate the same HTML to produce the same high-level concept. Avignon is a living language that is expanded, in consultation with the customer, whenever necessary. Although the page comparisons performed by Avignon return to the brittleness of testing everything, it has proved easy enough to change the expected HTML for every page in the application by changing the XSL that generates it for the test. Because pages are described at a high level, it has also become much easier to pre-generate the expected results. The original page description language was designed to be easy to transform into HTML. It quickly became apparent, however, that an even more
abstract description could be given on a per-page basis. The abstract description consists of only the elements that can vary on the page. It is then transformed into the original page description language, which is itself transformed into HTML. This makes it extremely easy to change the expected results for any individual page, or for every page simultaneously. (An interesting side effect has been that the customer can now very rapidly prototype the HTML pages entirely outside of the application. This allows the customer to decide more quickly on the desired look-and-feel.)

4.3 Implementation
The implementation of Avignon was quite simple. A JUnit test finds all of the files in a directory with the pattern “*TestSuite.xml.” It passes each of these files through a SAX-based XML parser that validates the syntax of the file and fires an event for the start and end of each XML element. Each element must have an associated Java class matching ElementNameHandler.class that implements the AvignonTagHandler interface. This interface defines two methods:
void start(AvignonTestState state, Attributes attribs)
void end(AvignonTestState state)

The AvignonTestState class allows each tag to access its parent tags, add messages to the error log, and request web pages from the actual application.

4.4 Scripting Language
At this time there is no formal schema or DTD for Avignon, but a brief overview follows:

Test Definition Elements. These elements allow the user to integrate the acceptance tests with NOLA's ISO 9001 framework and initialize the application state before a test.

TestSuite. This is the top-level element for any Avignon test suite file. It has one attribute, unitName, which defines the name of the unit being tested. NOLA uses this name to help track test results. When this tag ends, it records all of the test results in a database as per NOLA's ISO 9001 policies. This element contains one or more AcceptanceTest elements.

AcceptanceTest. This element defines an actual test to be performed. It also has one attribute, testName, which it gives to its parent TestSuite element along with a pass/fail test result. This tag is also responsible for restoring any application state that has been modified by the test. This element may contain one DatabaseState element and any number of other test command elements.
Acceptance Testing HTML
117
TestScript & ScriptParam. These elements allow the user to execute a separate file containing an Avignon script. The TestScript element takes a name attribute that tells it what file to execute. Before executing the script, however, it will do a textual substitution of the information provided by the ScriptParam elements contained within it. This allows the user to execute the same script with different data without having to duplicate the script.

DatabaseState & DatabaseTable. These elements allow the user to perform low-level database operations before a test is run. This requires the customer to have knowledge of both SQL and the application database, but it is easy enough to add application-specific tags, implemented either with an XSLT that converts them into the low-level tags or with Java handlers, that simplify the initialization of the database for the customer. The DatabaseState element is used simply to group DatabaseTable elements. Each DatabaseTable element is responsible for saving the state of the table specified in its attribute. Each DatabaseTable element may also have zero or more of the following elements in any combination:

DatabaseInsert. This element allows the user to define a SQL statement of the form INSERT INTO table_name [(columns)] VALUES(values). The table name is defined by the surrounding DatabaseTable element, and columns is defined by an optional attribute to the DatabaseInsert tag. Finally, values is defined by a required attribute to the DatabaseInsert tag. The user is required to format both columns and values in such a way that the SQL statement will work correctly when it is executed.

DatabaseUpdate. This element allows the user to define a SQL statement of the form UPDATE table_name SET field=value[,field=value]* [WHERE condition]. The value of condition is defined by an optional attribute to this tag, while the set clauses are defined by one or more sub-elements. These sub-elements, UpdateField, each take two parameters: a field name and the value to set it to.

DatabaseDelete. This element allows the user to define a SQL statement of the form DELETE FROM table_name [WHERE condition]. Again the condition comes from an optional attribute to this tag.

Test Command Elements. These elements allow the user to perform operations through the application's web interface. Each of these elements takes an optional attribute, pageDescription, that allows the user to specify an XML description of what the resulting page should look like. The comparison is done by generating the HTML for the actual page and the expected HTML (transformed from the XML description) and comparing them, without regard to white space. The comparison ignores white space because it proved too difficult to generate the same HTML, including non-significant
white space, as was expected. Given that the possibility of incorrectly generating significant white space was small, NOLA chose to ignore white space altogether. Each of these elements also has an optional preTranslation attribute. If this attribute is present, the XML specified by pageDescription is first transformed by the style sheet defined in preTranslation and then transformed into HTML.

MenuClick & PageClick. These two elements are essentially the same, but the former operates on the application's HTML menu while the latter operates on the content frame of the application. Both use HTTPUnit to click a link on the page given its text.

SubmitPage. This tag allows the user to fill out an HTML form on the current content page and submit it back to the application. It takes the value of the submit button as an attribute and fills out the input values in tab order (unless otherwise specified) with values given by its sub-elements, InputValues. Each InputValue element must have a value attribute and may have either a name or a position attribute. If no name or position is specified, the value goes into the current input box and the focus is moved to the next input box. If a name or position is specified, the focus is first moved to the correct input box, the value is put into that box, and the focus moves to the next input box.

DatabaseAssertion. This element allows the user to specify that the application's database must be in a given state for the test to pass. The user specifies three attributes: a tableExpression, an optional whereCondition, and an expectedCount. The system generates a SQL statement of the form SELECT COUNT(*) FROM tableExpression [WHERE whereCondition]. The assertion passes if the result of this statement matches the expected count.

User-Defined Elements. The preceding elements represent the core of the Avignon language, but it was never meant to be a static language.
In order to make the customer completely comfortable with the scripts he writes, he is encouraged to create his own elements. This can lead to some interesting element names (an EggClick element, for example), but has proved reasonably successful when the customer consults with the developers about how to phrase what he wants to do. (Finding out whether attributes or sub-elements would be easier, for example.) It has also helped to reduce the developers’ misunderstandings of what the acceptance tests were doing. Instead of having to describe the whole test to the developers, the customer now needs to concentrate only on the new elements in the test. For each new element the customer defines, the developers must add a Java class implementing the AvignonTagHandler interface. This class implements the customer’s intentions for the tag, be it an assertion or some action command. Because the handlers are dynamically loaded, no recompiling or relinking of the Avignon system is necessary when adding new elements.
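The reflective dispatch described above can be sketched as follows. This is a hypothetical reconstruction, not NOLA's actual code: the `ElementNameHandler` naming convention and the `AvignonTagHandler` interface come from the paper, but the lookup details are our assumption.

```java
// Hypothetical sketch of Avignon's reflective dispatch: for an XML
// element named Foo, the framework loads a class FooHandler that
// implements AvignonTagHandler, so adding a new tag needs no
// recompiling or relinking of Avignon itself.
class HandlerLoader {
    interface AvignonTagHandler {
        void start(Object state);   // attributes omitted in this sketch
        void end(Object state);
    }

    // Maps an XML element name to its handler class name.
    static String handlerClassName(String elementName) {
        return elementName + "Handler";
    }

    // Loads the handler reflectively; throws if no such class exists.
    static AvignonTagHandler load(String elementName) throws Exception {
        Class<?> c = Class.forName(handlerClassName(elementName));
        return (AvignonTagHandler) c.getDeclaredConstructor().newInstance();
    }
}
```

Because the class is resolved by name at runtime, a customer-defined element is supported as soon as its handler class is on the class path.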
4.5 An Avignon Example
The following test script simply sets up the test information and calls the “CreateScript.xml” script:
<?xml version="1.0" encoding="UTF-8"?>
<TestSuite name="ComposerInformation">
  <AcceptanceTest name="AddNewComposer">
    <TestScript name="CreateScript.xml">
      <ScriptParam name="ComposerListing"
                   value="EmptyComposerListingPage.xml"/>
      <ScriptParam name="DatabaseScript"
                   value="NoComposerSetup.xml"/>
      <ScriptParam name="DisplayName" value="TestDisplay"/>
      <ScriptParam name="ActualName" value="TestActual"/>
      <ScriptParam name="ResultPage" value="TestAddedForm.xml"/>
    </TestScript>
  </AcceptanceTest>
</TestSuite>

The “CreateScript.xml” file itself is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<ScriptBlock>
  <DatabaseState>
    <TestScript name="DatabaseScript"/>
    <DatabaseTable name="T_SEQUENCES">
      <DatabaseUpdate where="SEQUENCE_NAME='COMPOSERS'">
        <UpdateField name="SEQUENCE_VALUE"
                     value=""/> <!-- numeric value lost in the original -->
      </DatabaseUpdate>
    </DatabaseTable>
  </DatabaseState>
  <MenuClick itemText="Browse Composers"
             pageDescription="ComposerListing"
             preTranslation="ComposerBrowserTransform.xml"/>
  <PageClick itemText="Add Composer"
             pageDescription="EmptyAddScreen.xml"
             preTranslation="ComposerInformationTransform.xml"/>
  <SubmitPage pageDescription="ResultPage"
              preTranslation="ComposerInformationTransform.xml">
    <InputValue name="DisplayName" value="DisplayName"/>
    <InputValue name="ActualName" value="ActualName"/>
    <InputValue name="DateOfBirth" value="Born"/>
    <InputValue name="DateOfDeath" value="Died"/>
  </SubmitPage>
</ScriptBlock>

When the test is run, it will call yet another test script (defined by a parameter) to set up the database, then click on the “Browse Composers” menu item, checking the results against the HTML generated by the “EmptyComposerListingPage.xml” data file and the “ComposerBrowserTransform.xml” XSL. It will then click the link to add a composer and submit the resulting page with the given input values. When the test finishes, it restores the original state of any database tables that were modified inside the DatabaseState element.
5 Conclusion
NOLA has tried various forms of acceptance testing, from manual tests to fully automated systems. The manual tests were unsatisfactory all around: they were subject to differing interpretations and expensive to execute. The hand-coded tests reduced the costs of running the tests but increased the costs of creating them. The developers, knowing how often they saved them from making unintentional changes to the application's functionality, felt that they were very worthwhile, but this value was somewhat hidden from the customer. It also left too much of the interpretation of the tests in the hands of the developers. NOLA also tried a commercial tool for acceptance testing. This would have helped the customer code the tests himself, but the nature of the tool meant that the tests would have to be written after the stories were completed, thus depriving the developers of the opportunity to run the tests on the current story.
Avignon is an attempt to provide a satisfactory solution to both the customer and the developers. The use of XML/XSL has made the cost of implementing Avignon quite low, yet it gives the customer a relatively easy way to specify his tests in advance without any ambiguity. This allows the developers to work with confidence that they’re aiming for the right target and know how far off the mark they are while they’re coding. In the near future I’d expect to see Avignon brought into the realm of testing more traditional user-interfaces and perhaps even take the burden of UI coding off the developers’ shoulders.
References

1. Extreme Programming Explained, Kent Beck (Addison-Wesley, 2000)
2. For more information about QARun, see Compuware's web site: http://www.compuware.com/products/qacenter/qarun/detail.htm
3. Also expressed in: Extreme Programming Installed, Jeffries, Anderson, and Hendrickson (Addison-Wesley, 2001)
4. For more information about HTTPUnit, see http://www.httpunit.org
5. For more information about XPath, see http://www.w3c.org/TR/xpath
6. For more information about XSL, see http://www.w3c.org/Style/XSL. Neil Bradley's The XSL Companion (Addison-Wesley, 2000) is also a good introduction.
Probe Tests: A Strategy for Growing Automated Tests around Legacy Code

Asim Jalis

Hewlett-Packard Company, Windows Development Lab, 14335 NE 24th Street, Suite B-201, Bellevue, WA 98007
asim_jalis@hp.com, http://www.hp.com

Abstract. Legacy code lacking unit tests is difficult to refactor. Without unit tests, all changes to the code are dangerous. On the other hand, in its unrefactored state the code lacks the modularity necessary for adding unit tests. Therefore, use probe tests. Probe tests are assertions placed in code which, instead of crashing the program on failure, simply log their success or failure to a file. The logged success or failure can then be presented through any xUnit framework. Probe tests balance these forces: time to recover from test failure, ease of parsing log files, difficulty of testing poorly refactored code, and separation of production and test code.

Keywords. Unit testing, functional testing, NUnit, xUnit, legacy code, refactoring, logging, automated testing, mock objects, embedded tests, .NET, C#.
1 Introduction
Applying XP [1] to legacy projects is difficult because the code lacks automated unit tests and is frequently not well refactored [2]. Any change to the code is considered dangerous, including changing it to support unit testing. This creates an unpleasant chicken-and-egg problem: you cannot refactor till you have some unit tests, and you cannot have unit tests until you refactor. We propose the technique of probe testing. This can break the deadlock and begin the process of building an automated testing framework around the legacy code. This can be the first step towards refactoring the code and writing unit tests for it. As the unit testing gains momentum the probe tests can be removed and replaced with similar unit tests. Probe tests provide a scaffolding to begin the process of renovating crufty code. We have used this technique on legacy code with positive results. Probe tests can also be used as an alternative for classic C style asserts. Probe asserts do not cause the program to crash. When an error occurs it is logged to a file. The file can be parsed and used to generate xUnit style output. In this paper we discuss the architecture of the probe testing framework, how it complements existing approaches to refactoring legacy code, and finally we talk about some other applications of probe testing. Legacy code is a dirty word in XP circles. In a perfect world it would not exist and all code would have unit tests around it. However, in real life most of the code in D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 122–130, 2002. © Springer-Verlag Berlin Heidelberg 2002
Probe Tests: A Strategy for Growing Automated Tests around Legacy Code
123
production does not have unit tests and is not friendly to change. It is important to have a way to bootstrap legacy code into XP. This will make it easier to sell XP to companies which have substantial investments in pre-XP code. Also this can provide a way to rescue and recycle old legacy code. Finally, this provides a way for people working on legacy systems to enjoy the benefits of automated testing and XP.
2 Probe Testing Legacy Code
In this section we provide a case study of how we used probe testing to pull legacy code into our XP project. After this we describe the steps you can take to use probe testing on legacy projects that you might be working on.

2.1 A Probe Testing Case Study
This work arose as a solution to a problem we periodically face in our team. We receive code from other development teams. This code usually has no unit tests and is in a poorly refactored state. The lack of refactoring makes it difficult to add unit tests to it. As this is networking code it is not possible to print output to a console or to run it in a debugger. Before probe testing we had three techniques to deal with code of this type. The first was logging. We would put log statements in the code and then study the logs later to understand what was going on. A second option was to add C style asserts into the code. We avoided this because these asserts crashed the program on failure and we did not want to put the system in an ill-defined state by crashing it. Finally we had some simple automated functional tests. We relied heavily on the functional tests as we refactored the code. The main problem with the functional tests was the coarse grained nature of the output. To get more detailed information about the internal state of the program we complemented the functional tests with logging. The log provided an inside view of what was going on in the system and gave us a much more microscopic view into the code. Probe testing arose as an attempt to automate the log reading. Here is how probe tests work: probe tests combine logging, C style asserts, and unit testing. We place special asserts in the code that log their boolean outcomes (success or failure) to a log file and do not crash the system. Then we run the system on some sample data. This causes the log file to be written. After the system has run its course the unit testing framework parses the log file and then displays the successes and failures from it in its familiar green-bar/red-bar format. Probe testing has many advantages over traditional C style asserts and logging. And it complements automated functional testing well. 
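The mechanism just described, an assert that logs its outcome instead of aborting, can be sketched minimally as follows. This is a hypothetical Java analogue; the published framework is written in C# against NUnit, and the class and log format here are our invention.

```java
import java.io.PrintWriter;

// Minimal probe sketch: each assert appends a PASS/FAIL line to a
// log and never throws, so the instrumented system keeps running.
// The log can later be parsed and replayed through an xUnit runner.
class MiniProbe {
    private final String name;       // subroutine being probed
    private final PrintWriter log;   // destination log

    MiniProbe(String name, PrintWriter log) {
        this.name = name;
        this.log = log;
    }

    void probeAssert(String message, boolean condition) {
        log.println((condition ? "PASS " : "FAIL ") + name + ": " + message);
        log.flush();   // keep the log usable even if the system dies later
    }
}
```

In production the PrintWriter would wrap a file; wrapping a StringWriter makes the sketch easy to exercise in isolation.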
The processing of the asserts and the analysis of the log files is completely automated. The programmer does not have to read the log files. Automated tests are easy and cheap to run. The programmer can run them much more frequently than manual tests. In fact we ran ours after every small refactoring to have some assurance that we had not broken the system. The testing framework is non-intrusive. It does not throw exceptions inside the main thread or disrupt the execution sequence in any way. It is like the diagnostic
124
Asim Jalis
system on a plane which generates reports on what is happening in the system without directly intervening. The tests can be more powerful than unit tests in some situations because they are reporting conditions inside a live system, rather than in a simulation of the system. For example, a unit test of a servlet application cannot test that the application has access to its configuration information through the servlet container. This can be tested using a probe test or a C style assert.

2.2 How to Apply Probe Testing
Next we discuss how to use probe testing to build unit tests around legacy code. First, figure out a way to run the system. This can be done through automated functional tests. You could also give it some sample input without worrying about the outputs. After this, add probe tests to the code. Use the probe tests to test hypotheses about what the code is doing. This can help you understand the unfamiliar system. As the probes increase, your understanding of the code will also increase. When you feel the system is fairly well tested, you can begin refactoring it without being fearful of accidentally breaking it. As you refactor the code, run the probe tests after every change. As the code becomes more modular through the refactoring, you can begin writing unit tests for it. Over time the unit tests can completely replace the probe tests.
3 Instrumenting Code with Probe Tests
There are three main steps in instrumenting your code with probe tests. The first step is to figure out a way to run the system. If you have automated functional tests then you can simply run those. In our networking code we hit our system with requests on a particular port, which would cause it to execute its main processing loop. The second step is to add probe tests into your code. Here is an example of how to do this. For each subroutine, create a Probe object, passing the name of the subroutine into the constructor. During the subroutine, call Assert (or one of its specialized versions) on the data accessible from the subroutine. The calls to Assert cannot cause an exception. They are completely non-intrusive.

public static void Process(WebRequest request,
                           WebResponse response)
{
    Probe probe = new Probe("Process");
    // pre-existing subroutine code
    probe.Assert("a == 1", a == 1);        // numeric literals lost in the source
    probe.AssertEquals("b", 2, b);         // numeric literals lost in the source
    probe.AssertNotNull("e", e);
}
Note that our Assert methods follow the NUnit conventions: the first argument to the assert is the message, and the expected value always precedes the actual value. Here is an example of more realistic and non-trivial probe assertions.

public static void Process(WebRequest request,
                           WebResponse response)
{
    Probe probe = new Probe("Process");

    // header has meaningful values only when
    // the system is running as a live web service
    HttpHeader header = ExtractHttpHeader(request);

    // virtual addresses read from configuration
    // files accessed through web service context
    VirtualAddress address = new VirtualAddress();

    // validate using probe asserts
    probe.AssertMatches("header.RequestUri",
        "^http", header.RequestUri);

    string[] expectationList = new string[] {
        "text/xml",
        "text/xml; charset=ASCII",
        "text/xml; charset=UTF-8",    // charset digits lost in the source
        "text/xml; charset=UTF-16"    // charset digits lost in the source
    };
    probe.AssertContainedInSet("header.ContentType",
        expectationList, header.ContentType);

    probe.Assert("address.Count > 0",
        address.Count > 0);           // numeric bound lost in the source

    // rest of the subroutine's code
}

Here is a list of the different Assert methods provided by the Probe class:

public void Assert(string message, bool condition)
public void AssertEquals(string message,
    object expected, object actual)
public void AssertNull(string message, object actual)
public void AssertNotNull(string message, object actual)
public void AssertMatches(string message, string regex,
    string actual)
public void AssertContainedInSet(string message,
    object[] expectationList, object actual)

The third and final step is to add the probe tests to your AllTests suite so that their output is integrated into your normal unit testing output. If you prefer to keep your unit tests separate, you can invoke the ProbeTests suite directly from your TestRunner. However, in that case you want to make sure that your system ran and generated the probe log before you run ProbeTests through TestRunner. ProbeTests simply reads the log generated by individual probe objects and then presents the results through the TestRunner. Here is some sample code to embed the test cases from ProbeTests into your AllTests suite.

public static ITest Suite
{
    get
    {
        TestSuite suite = new TestSuite("AllTests");
        suite.AddTest(new TestSuite(typeof(RouterTest)));
        // add other suites and test cases
        suite.AddTest(ProbeTests.Suite);
        // add other suites and test cases
        return suite;
    }
}
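The log-reading half of the framework can be sketched as follows. This is a hypothetical Java version: the real ProbeTests class builds NUnit TestCase objects from the log, whereas this sketch only splits the log into passes and failures, using an assumed "PASS name: message" / "FAIL name: message" line format.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the ProbeTests idea: parse a probe log so its entries
// can be presented through an xUnit runner as green/red results.
class ProbeLogParser {
    final List<String> passes = new ArrayList<>();
    final List<String> failures = new ArrayList<>();

    void parse(String logText) {
        for (String line : logText.split("\n")) {
            if (line.startsWith("PASS ")) passes.add(line.substring(5));
            else if (line.startsWith("FAIL ")) failures.add(line.substring(5));
        }
    }
}
```

Each failure entry would become a failing test case in the runner, giving the familiar red-bar view of a run that actually happened inside the live system.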
4 Probe Testing Issues

4.1 Probe Tests Compared to Asserts
Probe tests are an alternative to C style asserts. They offer trade-offs which might be desirable in some situations. The big advantage of traditional asserts is that they are simple and easy to use. Their main problem is that they cause systems to abort execution. Also they expect to print their message to standard output.
In a network application or a web service these characteristics of asserts make it awkward to use them. If the network application crashes it can leave parts of the system in an undefined state. Also, network applications usually are not attached to consoles. Probe asserts do not cause system crashes. This can reduce the time required to recover from failure and can have a positive effect on the speed and rhythm of refactoring. Another case where probe testing might be preferable to asserts is when a system is shipped and deployed at a customer site. The customer usually does not appreciate the system crashing inexplicably because of a failing assert. This is one reason why asserts are usually removed when a product is shipped. With probe tests the system will continue to run even if probe asserts fail, as long as a real exceptional condition does not occur. This behavior is usually more desirable from a customer perspective. Another problem with asserts in remotely deployed systems is that the output of the assert is usually meaningless to a customer. This can make diagnosing the problem quite difficult, especially if it only occurs on the customer's machine. With probe tests, when there is a problem with the system, the customer can be asked to e-mail the log file generated by the probe tests back to the developers. The developers can then run it through an xUnit TestRunner and get a quick snapshot of all the things that went wrong. This can give them information useful in diagnosing the problem. The log file generated by probe tests has a higher value than a traditional log file. This is because it can be parsed automatically and its results can be presented through an xUnit TestRunner.

4.2 Making Probe Tests Like Unit Tests
One of the strengths of unit tests is that because the inputs to methods are specific and known, it is possible to make strong statements about the outcomes of the methods. For example, with a square root method it is possible to write unit tests which check that it works for specific values. Here is how this can be done using probe tests:

double Sqrt(double x)
{
    Probe probe = new Probe("Sqrt");
    probe.Assert("x >= 0", x >= 0);
    // compute result
    probe.AssertEquals("result*result", x, result * result);
    return result;
}
This is an example of using probe tests to implement design-by-contract style pre- and post-conditions on subroutines. Of course, this is a special case because the square root function has a well-known and easy inverse. In many cases it might not be possible to state elegant pre- and post-conditions based on the inputs to the functions. For example, if I don't want to make a universal statement about the square root function, but rather just want to assert that the square root of 4 is 2, this is much easier with a unit test than with a probe test. This is a reason to prefer unit tests over probe tests. However, probe tests can be useful in situations where the class or the method is difficult to test because of its design, or where the outcome of the test depends on context that is not present when the code is run through unit tests.
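For contrast, the point assertion mentioned above is a one-liner in a unit test. A sketch in JUnit style follows; the class and method names are ours, and `Math.sqrt` stands in for the method under test.

```java
// A point assertion such as sqrt(4) == 2 needs no inverse-function
// trick: a plain unit test states it directly against a known input.
class SqrtPointTest {
    static double sqrt(double x) {   // stand-in for the tested method
        return Math.sqrt(x);
    }

    static void testSqrtOfFourIsTwo() {
        double result = sqrt(4.0);
        if (Math.abs(result - 2.0) > 1e-9)
            throw new AssertionError("expected 2.0, got " + result);
    }
}
```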
5 Porting Probe Tests to Other Languages
The current probe test framework [3] is in C# and works with Microsoft’s .NET platform. The Probe object simply writes to a log file. The ProbeTests class extends NUnit framework’s ITest interface and implements the Suite property. ProbeTests reads the log file and generates TestCase objects for it. The project is open source and can be ported easily to other languages for which xUnit based unit testing frameworks exist.
6 Other Applications of Probe Testing
The main application of probe testing is in instrumenting legacy code with asserts that can then make the code safe for refactoring. Legacy code is difficult to test with white-box unit tests. Black-box functional tests are possible but they do not provide fine-grained assertions against the internal state of the system. Probe testing can be viewed as a kind of gray-box testing. It is non-intrusive and the probes are only active if the code they are in is executed. So they are not completely white-box. At the same time they provide a much more intimate view into the code than the black-box functional tests. This ability to non-intrusively observe running code can be useful in some other cases as well. Here are some other applications of probe testing:

Performance Testing. We usually test performance by asserting that a block of code will take less than some fixed amount of time. These asserts allow us to place upper bounds on latency. A simple example is when we were deciding between using a file and a database for storing some information. Instrumenting the code with probe tests allowed us to determine that files were faster than the database for a data set of this size. The probe tests were reporting on how the system performed with real data. This made the numbers more credible than if we had benchmarked using what we considered “typical” data.
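A latency bound of this kind reduces to an ordinary boolean that can be handed to a probe assert. The helper below is a hypothetical sketch; `withinBound` and its signature are our invention, not part of the published framework.

```java
// Sketch of a latency bound expressed as a probe condition: time a
// block of code and feed the boolean result to a probe assert.
class LatencyProbe {
    static boolean withinBound(Runnable block, long maxMillis) {
        long start = System.nanoTime();
        block.run();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        // this boolean would be passed to probe.Assert("fast enough", ...)
        return elapsedMillis <= maxMillis;
    }
}
```

Because the probe logs rather than crashes, a missed bound observed in production shows up in the log instead of taking the system down.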
An Alternative to Mock Objects. Mock objects [4, 5] are a technique for using stand-in objects to replace system objects for unit testing. This allows the unit tests to exercise production code even when the system environment is not present. This technique has merits. However, it assumes that the mock objects interact with the program just like real objects. This is not always the case. Real system objects frequently behave in pathological ways. For example, we have experienced servlet containers that use proprietary HTTP headers to indicate special information to servlets. Probe tests can be used to assert that the real system environment is precisely how the programmer expects it to be. In this way probe tests can provide a reality check to mock tests.

Debugging. Network applications frequently cannot be run inside debuggers. For example, a web service might be invoked by a server which runs in its own thread. It might not be possible to run the server in a debugger. Probe tests provide a way to confirm hypotheses about how the code really works in the real environment and can be used for debugging in this way.
7 Probe Testing Caveats
The programmer should be aware of the performance penalty incurred by instrumenting code with probe tests. The Probe class provides a static method (Streamline()) which causes log file writes to occur only on failing probe asserts. There is still a small performance cost involved with each call to an assert but depending on the situation this cost might be tolerable in production. Probe tests are not a substitute for unit tests or functional tests. They complement these testing frameworks. They can be quite useful in bootstrapping a legacy system into XP and beginning the process of building tests around it. However, once the unit tests become comprehensive the probe tests should be removed. They can also be used for investigating the system environment. However, once the programmer understands the environment he can create mock objects that behave just like system objects and replace the probe tests with unit tests based on mock objects.
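The failure-only logging switch can be sketched as follows. This is a hypothetical Java analogue of the C# `Streamline()` method; the class and log format are our invention.

```java
// Sketch of the "streamline" switch: when enabled, passing probe
// asserts are not written to the log, leaving only the small cost
// of evaluating each condition.
class StreamlinedProbe {
    private static boolean streamline = false;
    private final StringBuilder log = new StringBuilder();

    static void streamline() { streamline = true; }

    void probeAssert(String message, boolean condition) {
        if (condition && streamline) return;   // skip logging passes
        log.append(condition ? "PASS " : "FAIL ")
           .append(message).append('\n');
    }

    String logText() { return log.toString(); }
}
```

With the switch on, a clean run produces an empty log, so log I/O cost scales with the number of failures rather than the number of asserts.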
8 Summary
Probe tests are useful for instrumenting a legacy system with tests as a first step towards refactoring it and unit testing it. They can be used to make assertions about the runtime environment of the program. Since they live in production code they can degrade performance if they are left in the system when it goes into production. A simple flag can turn off probe logging for passing asserts. In C# and some related languages, #ifdef-style preprocessor directives can be used to eliminate probe tests completely from the production code. In Java you can remove them using the grep utility if this is a big concern.
9 Conclusion
Probe tests are another tool in the arsenal of the test-infected programmer. They complement unit tests and automated functional tests. They clearly do not replace either. However, they can be quite useful under certain circumstances. The larger goal of this technique is to provide a way for legacy systems to be migrated to unit testing. This can substantially increase the applicability and value of XP. It can also make the job of programmers maintaining legacy systems easier and help them take advantage of unit testing and refactoring.
Acknowledgements I would like to acknowledge the useful feedback I received from Brian Marick, Jim Stearns, and Bryan Murray.
References

1. Beck, K.: Extreme Programming Explained: Embrace Change. Addison-Wesley, Reading, MA (1999)
2. Fowler, M.: Refactoring: Improving the Design of Existing Code. Addison-Wesley, Reading, MA (1999)
3. Jalis, A.: probe-testing.zip. http://wokkil.pair.com/asim/probe-testing.zip
4. Mackinnon, T., Freeman, S., Craig, P.: Endo-Testing: Unit Testing with Mock Objects. In: Succi, G., Marchesi, M. (eds.): Extreme Programming Examined. Addison-Wesley, Reading, MA (2001) 287–301
5. Jalis, A., Kind, L.: Automatically Generating System Mock Objects. Presented at XP Universe 2001, http://www.xpuniverse.com/2001/pdfs/Testing04.pdf (2001)
An Informal Formal Method for Systematic JUnit Test Case Generation

David Stotts, Mark Lindsey, and Angus Antley

Dept. of Computer Science, Univ. of North Carolina at Chapel Hill
{stotts,lindsey,antley}@cs.unc.edu

Abstract. The JUnit testing tool is widely used to support the central XP concept of “test first” software development. While JUnit provides Java classes for expressing test cases and test suites, it does not provide or prescribe per se any guidelines for deciding which test cases are good ones for any particular class. We have developed a method for systematically creating complete and consistent test classes for JUnit. Called JAX (for JUnit Axioms), the method is based on Guttag's algebraic specification of abstract data types. We demonstrate an informal use of ADT semantics for guiding JUnit test method generation; the programmer uses no formal notation other than Java, and the procedure meshes with XP test-as-design principles. Preliminary experiments show that informal JAX-based testing finds more errors than an ad hoc form of JUnit testing.
1 Motivation and Background
Regression testing has long been recognized as necessary for having confidence in the correctness of evolving software. Programmers generally do not practice thorough tool-supported regression testing, however, unless they work within a significant industrial framework. JUnit [1,2,3] was developed to support the "test first" principle of the XP development process [4]; it has had the side effect of bringing the benefits of regression testing to the average programmer, including independent developers and students. JUnit is small, free, easy to learn and use, and has obtained a large user base in the brief time since its introduction in the XP community. Given this audience, we will not go into any detail about its structure and usage. JUnit and its supporting documentation are available at http://www.junit.org.
The basic JUnit testing methodology is simple and effective. However, it still leaves software developers to decide if enough test methods have been written to exercise all the features of their code thoroughly. The documentation supporting JUnit does not prescribe or suggest any systematic methodology for creating complete and consistent test suites. Instead it is designed to provide automated bookkeeping, accumulation, and execution support for the manner in which a programmer is already accustomed to developing test suites.
We have developed and experimented with a systematic test suite generation method we call JAX (for JUnit Axioms), based on Guttag's algebraic semantics of Abstract Data Types (ADTs) [5,6,7]. Following the JAX method leads to JUnit test suites that completely cover the possible behaviors of a Java class. Our approach is simple and systematic. It will tend to generate more test methods than a programmer

D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 131–143, 2002.
© Springer-Verlag Berlin Heidelberg 2002
would by following the basic JUnit practice, but our preliminary experiments show this extra work produces test suites that are more thorough and more effective at uncovering defects. We refer to JAX as an informal formal method because, while it is based in the formal semantics of abstract data types, the Java programmer and JUnit user need use no formalisms beyond Java itself to take advantage of the guidance provided by the method. Our early experiments have been with this informal application of JAX. There are ways to automate the method at least partially, but they require more reliance on formal specs [8,9].

1.1 Related Work on Formal Spec-Based Testing
The methods we are pursuing for automating JAX are similar to those of DAISTS [8] and Daistish [9]. The DAISTS system used Guttag's algebraic specification of abstract data types to write a test oracle for an ADT implemented in a functional language (SIMPL-D). Daistish expanded the work of DAISTS into the object-oriented domain (for C++) by solving problems related to object creation and copying that were not found in functional languages. Daistish automated the creation of the test oracle, leaving the programmer to write axiomatic specifications and test points.
ASTOOT [10] is a related system for testing based on formal specifications. ASTOOT uses a notation similar to Parnas' trace specs [11,12] instead of algebraic ADT semantics. This work was presented in the context of the language Eiffel, and has not been carried forward into commercial quality tools. We also think the use of the functional notation of algebraic ADT axioms is an advantage over the trace spec approach; such axioms can be expressed in a functional programming language (we give our examples in ML), giving executable specs.
Larch [13,14,15] is another research effort in which formal program specs are used to gain leverage over software problems. In Larch, program specifications have a portion written in Guttag's functional style, along with a second portion written to express semantic-specific details for a particular programming language. A significant body of work exists on using Larch for program verification, supported by automated theorem provers. We do not know of any testing methodologies based on Larch specs, however.

1.2 Structure of the Report
In the following sections we first present a brief explanation of algebraic specifications of abstract data types, and why they are appropriate as formal semantics for objects. Following that we show how to use ADT specs manually to systematically develop a consistent and complete JUnit test class. We then discuss our preliminary experiments with several ADT implementations and the comparison of the number of errors found with JAX versus without. We conclude with a discussion of how the method meshes with the XP test-as-design principle.
2 Algebraic Specification of ADTs
The raison d’etre of JAX is to apply a systematic discipline to the otherwise informal (and possibly haphazard) process of creating effective test methods in JUnit test
classes. A class to be developed is treated as an ADT (abstract data type), and the formal algebraic specification [5,6,7] of this ADT is then used as a guide to create a complete and consistent set of test methods. Guttag's work is fundamental and, we think, underused in modern software methodologies; for this contribution he was recently recognized as a software pioneer along with Dijkstra, Hoare, Brooks, Wirth, Kay, Parnas, and others [16]. We present here a brief summary of algebraic ADT semantics before we more fully explain JAX. Note that the ADT formalism explained here is used as an intellectual guide for developing tests. Effective use of these ideas requires only Java, so the reader should not be overly concerned with the specific formalisms and notations shown.
An ADT is specified formally as an algebra, meaning a set of operations defined on a set of data values. This is a natural model for a class, which is a collection of data members and methods that operate on them. The algebra is specified in two sections: the syntax, meaning the functional signatures of the operations; and the semantics, meaning axioms defining the behavior of the operations in an implementation-independent fashion. For example, consider BST, an ADT for a bounded stack of elements (of generic type E). The operation signatures are

    new:     int --> BST
    push:    BST x E --> BST
    pop:     BST --> BST
    top:     BST --> E
    isEmpty: BST --> bool
    isFull:  BST --> bool
    maxSize: BST --> int
    getSize: BST --> int
In object-oriented terms, operation signatures correspond to the interfaces of the methods for a class. In Guttag's notation, ADT operations are considered functions in which the state of the object is passed in as a parameter, and the altered object state is passed back if appropriate. The operation push(S,a) in Guttag's ADT notation would thus correspond to a method S.push(a) for a specific object S of this class. In this example, new is a constructor that takes an integer (the maximum size of the bounded stack) and creates a BST that will hold at most that many elements. The push operation takes an E element and a BST (an existing stack) and adds the element to the top of the stack; though it has not yet been so specified, we will define the semantics to reflect our desire that pushing an element on a full stack does nothing to the state of the stack – effectively a noOp. Operation maxSize returns the bound on the stack (which is the parameter given to new); operation getSize tells how many items are currently in the stack. Operation isFull exists for convenience; it tells if there is no room in the stack for any more elements, and its work can be done entirely with maxSize and getSize. The remaining operations behave as would be expected for a stack. The semantics of the operations are defined formally with axioms that are systematically generated from the operation signatures. To do so, we first divide the
operations into two sets: the canonical operations1, and all others. Canonical operations are those needed to build all elements of the type; these necessarily are a subset of the operations that return an element of the type. In this example, there are only three operations that return BST (new, push, pop), and the canonical operations are {new, push}. This implies that all elements of type BST can be built with new and push alone. Any BST that is built using pop can also be built with a simpler sequence of operations consisting only of new and push; for example, the stack produced by push(pop(push(new(5),a)),b) using pop is equivalent to the stack produced by push(new(5),b) not using pop.
Once a set of canonical operations is identified2, one axiom is written for each combination of a non-canonical operation applied to the result of a canonical operation. This is a form of induction. We are defining the non-canonical operations by showing what they do when applied to every element of the type; we obtain every element of the type by using the output of the canonical operations as arguments. For example, considering the bounded stack ADT, one axiom is for pop(new(n)) and another is for pop(push(S,a)). With 2 canonical constructors and 6 non-canonical operations, we write a total of 6*2 = 12 axioms. These combinations mechanically generate the left-hand sides of the axioms; specifying the right-hand sides is where design work happens and requires some thought.
An axiom is thought of as a re-writing rule; it says that the sequence of operations indicated on the left-hand side can be replaced by the equivalent (but simpler) sequence of operations on the right-hand side. For example

    top(push(S,x)) = x      // ok for normal stack but not for bounded stack
    pop(push(S,x)) = S      // ok for normal stack but not for bounded stack
are two axioms for normal stack behavior. The first specifies that if an element x is pushed on top of some stack S, then the top operation applied to the resulting stack will indicate x is on top. The second says that if an element x is pushed onto some stack S, then the pop operation applied to the resulting stack will return a stack that is equivalent to the one before the push was done. For a bounded stack the proper behavior is slightly more complicated:

    top(push(S,x)) = if isFull(S) then top(S) else x
    pop(push(S,x)) = if isFull(S) then pop(S) else S
This says that if we push an element x onto a full stack S, then nothing happens, so a following pop will be the same as popping the original stack S; if S is not full, then x goes on and then comes off via the following pop; similar reasoning applies to top. For those readers interested in more details, the full set of ADT axioms for BST is given in Appendix A. For JAX, the lesson to carry into the next section is that class methods can be divided into two groups (canonical and non-canonical) and that combining one non-canonical method applied to one canonical will define one axiom… and hence one test method for a JUnit test class, as we will see.

1 Guttag uses the term canonical constructor. We use operation instead of constructor to avoid confusion with the method that is invoked when an object is first created. That constructor method is called new in Guttag's vocabulary.
2 Though it is not evident from this simple example, there can be more than one set of canonical operations for an ADT. Any valid set will do.
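As a concrete (and non-normative) illustration, the BST signatures and the bounded-push semantics above can be realized in a small Java class. The class and member names here are our own, and the paper's generic element type E is fixed to int for brevity:

```java
// Minimal bounded-stack sketch of the BST signatures (illustrative names only).
// push on a full stack is deliberately a no-op, matching the bounded-stack
// axiom: top(push(S,x)) = if isFull(S) then top(S) else x.
class BoundedStack {
    private final int[] elems;   // internal representation
    private int size = 0;

    BoundedStack(int max) { elems = new int[max]; }            // new: int --> BST

    void push(int e)  { if (!isFull()) elems[size++] = e; }    // push: BST x E --> BST
    void pop()        { if (!isEmpty()) size--; }              // pop: BST --> BST
    int top()         { return elems[size - 1]; }              // top: BST --> E
    boolean isEmpty() { return size == 0; }
    boolean isFull()  { return size == elems.length; }
    int maxSize()     { return elems.length; }
    int getSize()     { return size; }

    public static void main(String[] args) {
        BoundedStack s = new BoundedStack(2);
        s.push(1);
        s.push(2);
        s.push(3);                        // stack is full: this push is a no-op
        System.out.println(s.top());      // prints 2, per the isFull branch of the axiom
    }
}
```

Note how the no-op branch of push is what makes the isFull cases of the axioms hold; an unbounded stack would omit that guard and satisfy the simpler axioms instead.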
3 Informal Formality: Manual JAX
Once the ADT axioms have been specified for the target class, we can systematically construct a corresponding JUnit test class. In summary, the steps are:
1) design the method signatures for the Java class to be written (the target class)
2) decide which methods are canonical, dividing the methods into 2 categories
3) create the left-hand sides (LHS) of the axioms by crossing the non-canonical methods on the canonical ones
4) write an equals function that will compare two elements of the target class (to compare two BSTs in our example)
5) write one test method in the test class for each axiom left-hand side, using the abstract equals where appropriate in the JUnit assert calls.
The last two steps are the keys. The papers explaining JUnit provide examples where each method in the target class causes creation of a corresponding method in the test class. JAX calls for creation of one test class method for each axiom.
The first level of informality enters the method here. We do not need to write out the axioms completely in the ML formalism (or any other formal notation). Rather, all we need is the left-hand sides of the axioms – the combinations of the non-canonical methods applied to the canonical ones. The programmer will create the right-hand side behavior en passant by encoding it in Java directly in the methods of the corresponding JUnit test class. The formal ADT semantics tell us which method combinations to create tests for, but we flesh out the axiomatic behavior directly in JUnit. For example, consider this BST axiom:

    pop(push(S,e)) = if isFull(S) then pop(S) else S

The right-hand side gives a behavior definition by showing the expected outcomes when the method calls on the left-hand side are executed. The "=" equality that is asserted between the method sequence on the left and the behavior on the right is an abstract equality; in this case, it is equality between two stacks. To check this equality the programmer must supply a function for deciding when two stacks are equal, which is the function from item (4) in the list above. In this case, we get a method called testPopPush() in the JUnit test class for BST:
The right-hand side gives a behavior definition by showing the expected outcomes when the method calls on the left-hand side are executed. The “=” equality that is asserted between the method sequence on the left and the behavior on the right is an abstract equality; in this case, it is equality between two stacks. To check this equality the programmer must supply a function for deciding when two stacks are equal, which is the function from item (4) in the list above. In this case, we get a method called testPopPush() in the JUnit test class for BST: protected void setUp() { stackA = new intStackMaxArray(); stackB = new intStackMaxArray(); }
// defined as max 2 for ease
public void testPopPush() { // axiom: pop (push(S,e)) = if isFull(S) then pop(S) else S // do the not(isFull(S)) part of the axiom RHS int k = 3; stackA.push(k); stackA.pop(); assertTrue( stackA.equals(stackB) ); // use of abstract equals //now test the isFull(S) part of the axiom RHS stackA.push(k); // 1 element stackA.push(k); // 2… now full stackB.push(k); // 1 element
136
David Stotts, Mark Lindsey, and Angus Antley stackB.push(k); assertTrue(stackA.equals(stackB));
// 2.. now full // expect true
stackA.push(k); stackA.pop(); stackB.pop(); assertTrue( stackA.equals(stackB) );
// // // //
full… so push is a noop now has 1 elt now has one elt expect true
}
Note that this test method has a section for each part of the right-hand side behavior. In this example, the axiom was written first for illustrative purposes. In a fully informal JAX application, the programmer may well have invented appropriate behavior as she wrote the JUnit code and never expressed the right-hand side of the axiom any other way. JAX is clearly going to generate more test methods, but they are systematically generated and together cover all possible behaviors of the ADT. Consistent and complete coverage of the target class behavior is guaranteed by the proof that the axioms formally define the complete ADT semantics [5].
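The same axiom check can be exercised outside JUnit. In the following self-contained sketch, MiniStack is our own stand-in for the paper's intStackMaxArray, and plain runtime checks stand in for assertTrue:

```java
// Self-contained version of the testPopPush axiom check:
//   pop(push(S,e)) = if isFull(S) then pop(S) else S
// MiniStack is our own stand-in for the paper's intStackMaxArray (max size 2).
class PopPushAxiomCheck {
    static class MiniStack {
        int[] elems = new int[2];
        int size = 0;
        void push(int e) { if (size < elems.length) elems[size++] = e; } // no-op when full
        void pop()       { if (size > 0) size--; }
        boolean sameAs(MiniStack o) {        // abstract equality: logical contents only
            if (size != o.size) return false;
            for (int i = 0; i < size; i++)
                if (elems[i] != o.elems[i]) return false;
            return true;
        }
    }

    static void check(boolean b, String label) {
        if (!b) throw new AssertionError(label);
    }

    public static void main(String[] args) {
        MiniStack a = new MiniStack(), b = new MiniStack();
        int k = 3;
        a.push(k); a.pop();                  // not-full case: pop undoes push
        check(a.sameAs(b), "pop(push(S,e)) = S when S not full");

        a.push(k); a.push(k);                // fill a
        b.push(k); b.push(k);                // fill b
        a.push(k); a.pop();                  // full case: push is a no-op, then pop
        b.pop();
        check(a.sameAs(b), "pop(push(S,e)) = pop(S) when S full");
        System.out.println("both axiom cases hold");
    }
}
```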
3.1 A Note on the Importance of ‘Equals’
In addition to writing the test classes for the axiom combinations, the tester must write an "equals" function to override the built-in method on each object. This must be an abstract equality test, one that defines when two objects of the class are thought of as being equal without respect to unimportant internal storage or representational issues. This gives us something of a chicken-and-egg problem, as this abstract equals is used with JUnit assert methods to indicate the correctness of the other methods of the target class. It is important, therefore, that the tester have confidence in the correctness of equals before trying to demonstrate the correctness of the other methods. We follow a procedure where the test methods are generated for equals like for other methods, by applying it to objects generated by the various canonical operations of the target class. In this example we wrote and populated the JUnit test methods

    testNewNewEquals()     testNewPushEquals()
    testPushNewEquals()    testPushPushEquals()

and ran them first. This has the effect of checking the canonical operations for correct construction of the base elements on which the other methods are tested, as well as checking equals for correct comparisons. Doing the extra work to structure these tests and including them in the regression suite allows changes in the constructors to be checked as the target class is evolved. Given the introspective nature of the equals tests, the programmer may wish to create them as a separate but related JUnit test class rather than bundling them into the same test class as the other test methods. The JUnit suite can then be used to bundle the two together into a full JAX test for the target class.
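As an illustrative sketch (our own class, not the paper's code), an abstract equals for a bounded stack compares only the logical stack value; in particular, leftover values in unused array slots must not affect the comparison:

```java
// Sketch of an abstract equals for a bounded stack (illustrative names only).
// Equality looks at the observable value: the bound, the element count, and the
// elements in order. Garbage in array slots beyond 'size' is deliberately ignored.
class AbstractEqualsSketch {
    int[] elems;
    int size = 0;
    AbstractEqualsSketch(int max) { elems = new int[max]; }
    void push(int e) { if (size < elems.length) elems[size++] = e; }
    void pop()       { if (size > 0) size--; }

    @Override public boolean equals(Object other) {
        if (!(other instanceof AbstractEqualsSketch)) return false;
        AbstractEqualsSketch o = (AbstractEqualsSketch) other;
        if (elems.length != o.elems.length) return false;  // maxSize is observable
        if (size != o.size) return false;                  // same number of elements...
        for (int i = 0; i < size; i++)
            if (elems[i] != o.elems[i]) return false;      // ...in the same order
        return true;                                       // unused slots ignored
    }
    @Override public int hashCode() { return size; }       // keep equals/hashCode contract

    public static void main(String[] args) {
        AbstractEqualsSketch a = new AbstractEqualsSketch(2);
        AbstractEqualsSketch b = new AbstractEqualsSketch(2);
        a.push(7); a.push(9); a.pop();    // internal array [7, 9], size 1
        b.push(7);                        // internal array [7, 0], size 1
        System.out.println(a.equals(b));  // true: same abstract value, different arrays
    }
}
```

The design point is the one the text makes: a representation-sensitive comparison (e.g., Arrays.equals on the whole backing array) would fail here even though the two stacks are abstractly identical.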
3.2 Preliminary Experimental Results
We have run several experiments to gauge the effectiveness of the JAX methods. While these early studies were not fully controlled experiments, we think the results
are encouraging enough to continue with more extensive and controlled experiments. We coded several non-trivial ADTs in Java and tested each two ways:
• basic JUnit testing, in which one test-class method is generated for each method in the target class
• JAX testing, in which one test-class method is generated for each axiom in the ADT specification.
Each of the preliminary experiments involved two students (author and tester). For each ADT tested, the tester wrote axioms as specifications of the target class. The axioms were given to the author, who wrote Java code to implement the class. While the author was writing the source class, the tester wrote two test classes -- one structured according to basic JUnit testing methodology, and the other structured according to the JAX methodology. After the author completed the source code for the target class, the tester ran each JUnit test collection against the source class, and recorded the number of errors found in each. In each case the source class was the same, so the sole difference was the structure and number of the test methods in the test class. Over the course of the studies we used 2 different testers and 2 different authors.
We examined in this way 5 Java classes, starting with some fairly simple ones to establish the approach and progressing to a pair of cooperating classes with a total of 26 methods between them3. In the non-trivial cases, the JAX test class uncovered more source class errors than did the basic JUnit test class. As a warm-up we wrote a JAX test class for the ShoppingCart example distributed with JUnit. The JUnit test class that comes with it found no errors, and none were found via JAX either. Another of the classes tested was the bounded stack (BST) discussed previously. We implemented it twice, with two different internal representations; in each case, JAX testing found one error and basic JUnit testing uncovered none.
The most complicated example we studied was a pair of cooperating classes implementing a simple library of books. The interface is defined as follows:

    public book( String title, String author )
    public void addToWaitList(int pid)
    public void removeFromWaitList(int pid)
    public void checkout(int pid)
    public void checkin(int pid)
    public int getNumberAvailable()
    public String getTitle()
    public String getAuthor()
    public boolean isInCheckoutList(int pid)
    public boolean isInWaitList(int pid)
    public int getPidWaitList(int index)
    public void addACopy(int)
    public void removeACopy(int)
    public boolean equals(book b)
    public int getSizeOfWaitList()
    public int getSizeOfCheckoutList()
    public int NextWaiting()
    public int getNumberCheckedOut()

    public library()
    public book getBook(String title, String author)
    public void addBook(String title, String author)
    public void removeBook(String title, String author)
    public void checkoutBook(String title, String author, int pid)
    public void checkinBook(String title, String author, int pid)
    public boolean equals(library l)
    public boolean isAbleToBeRemoved(String title, String author)
    public boolean isAvailable(String title, String author)
    public boolean isAbleToBeCheckedIn(String title, String author, int pid)
    public int getNumberOfBooks()

3 All Java code for these classes, and the JUnit test classes corresponding to them, can be obtained online at http://rockfish-cs.cs.unc.edu/JAX/
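To make the crossing concrete, one of the Book cross tests pairs the non-canonical checkin against the state built by the canonical checkout. The following is our own heavily simplified, self-contained stand-in for the paper's Book class (tracking only the checkout list), not its actual code:

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of one Book cross test: checkin applied to the state built by checkout.
// This is an illustrative stand-in for the paper's Book class, not its source.
class BookCrossTestSketch {
    static class Book {
        final List<Integer> checkoutList = new ArrayList<>();
        int copies = 1;
        void checkout(int pid) { if (checkoutList.size() < copies) checkoutList.add(pid); }
        void checkin(int pid)  { checkoutList.remove(Integer.valueOf(pid)); }
        boolean sameAs(Book o) { return checkoutList.equals(o.checkoutList); } // abstract equals
    }

    public static void main(String[] args) {
        // axiom-style expectation: checkin(checkout(B, p), p) = B
        Book b = new Book(), reference = new Book();
        b.checkout(42);
        b.checkin(42);
        if (!b.sameAs(reference)) throw new AssertionError("checkin should undo checkout");
        System.out.println("testCheckinCheckout passed");
    }
}
```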
This test involved two different classes interacting. In particular, methods in Library invoke methods in Book; to manage this, we developed and tested Book first, then developed and tested Library. We repeated this two-step process once for normal JUnit tests, and once for the JAX methodology. Obviously, if there are errors in Book we would expect to see the error propagate into errors in the Library test as well. Here are the results for each test class when run in JUnit:

    Test                Failures
    ----------------------------
    BookTest                0
    BookAxiomTest          13
    LibraryTest             0
    LibraryAxiomTest       15
We have been using the term "error" a bit loosely. This table shows counts of failed JUnit tests; because of the interactions being tested in JAX, a single code error (bad method) is likely to cause several JUnit failures. Moreover, we suspected that the 13 failures found in class Book would be due to errors that would then propagate into many of the failures seen in class Library. We decided to fix the errors in Book and retest. On examination, we found 3 distinct flaws in the implementation of Book. On rerunning JUnit, the 15 failures found by the JAX test class still remained:

    Test                Failures
    ----------------------------
    BookTest                0
    BookAxiomTest           0
    LibraryTest             0
    LibraryAxiomTest       15
3.3 Implications of These Numbers
Following JAX manually will require a programmer to write more test methods in a test class than he would with a normal JUnit discipline for the same test class. This is because the JAX approach requires one test method per axiom. For example, in the
ADT bounded stack (BST), there are 8 methods, with 2 being canonical. Normal JUnit testing produced 8 test methods (or 7, as the constructor was not tested); JAX application produced 2 * 6 = 12 methods from axioms alone; creation of the equals function and dealing directly with constructors brought the total to 14 methods. In our early experiments, we are finding about 70% more test methods are being written in a JAX test suite for small classes (those with on the order of 8 methods). For the larger Book class, there are 17 methods, of which 4 are canonical (book, checkOut, addToWaitList, addACopy). This means we wrote 13*4 = 52 test methods for it. Given the combinatorial nature of axiom creation, the rough estimate for number of test methods in a JAX test class will be on the order of (n^2)/4 where n is the number of target class methods. Since good OO design encourages smaller classes, the quadratic factor should not be a major issue. An IBM fellow who was listening to an earlier talk on our work commented that IBM systems typically had 10 times as much test code as application code, so we don’t find the overhead of applying JAX to be excessive compared to that industrial benchmark. The goal is to test thoroughly, not to economize on test writing. Since tests will be run hundreds of times once written, and will help uncover errors in code beyond the class they are developed for, economizing at the test writing phase is false economy.
4 Meshing JAX with the Test-as-Design Philosophy of XP
Many XP programmers use test case construction as an important component of their iterative design process. Martin and Koss [17] give a good example of this process, showing a pair-programming session in which a small bowling score program was developed test-first, incrementally, using JUnit. Use of JAX does not preclude this important design approach; we see at least two ways to employ it.
First, one can generate JAX cross tests incrementally, as new methods are discovered and included in a design. The previous presentation of JAX showed a fully designed ADT, in that all methods of the final class were present for generation of cross tests. However, this was to illustrate the ADT theory that a method crossing approach covers all class behavior through a structural induction. Applying the cross testing principle does not require the entire class to be present. You can generate the JAX cross test class as you go, just as you generate any other JUnit test class.
The incremental JAX process is as follows. If you add a new state-creation method during design, then decide if it is canonical or not; if it is, then you write new cross tests for all existing non-canonical methods applied to the state created by the new method. If it is not canonical, then you write new cross tests applying the new method to the state created by the existing canonical methods. Thus, as you design and build the class iteratively, you also build the suite of cross tests iteratively.
At no point do we write methods into a class because some ADT axioms call for it… we only write cross tests based on the methods that have actually been included in the class due to the incremental design process. This allows adherence to the "write the minimal solution" principle of XP. If your particular stack-like class has no need for a "pop" then JAX will not require you to create a "pop" simply because some ADT theory says beautiful/complete stacks should have "pop" operations.
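This incremental bookkeeping is mechanical enough to sketch in code: track the canonical and non-canonical sets, and on each method addition emit just the cross-test names the addition makes necessary. The class and naming scheme below are our own illustration, not part of JAX proper:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of incremental JAX bookkeeping: when a method is added during design,
// emit only the new cross-test names it induces. Illustrative, not from the paper.
class IncrementalJaxSketch {
    final List<String> canonical = new ArrayList<>();
    final List<String> nonCanonical = new ArrayList<>();

    // Returns the cross-test names newly required by adding 'method'.
    List<String> add(String method, boolean isCanonical) {
        List<String> newTests = new ArrayList<>();
        if (isCanonical) {
            // cross all existing non-canonicals against the new state creator
            for (String nc : nonCanonical) newTests.add("test" + cap(nc) + cap(method));
            canonical.add(method);
        } else {
            // cross the new method against all existing canonical state creators
            for (String c : canonical) newTests.add("test" + cap(method) + cap(c));
            nonCanonical.add(method);
        }
        return newTests;
    }

    static String cap(String s) { return Character.toUpperCase(s.charAt(0)) + s.substring(1); }

    public static void main(String[] args) {
        IncrementalJaxSketch jax = new IncrementalJaxSketch();
        jax.add("new", true);                      // no non-canonicals yet: no new tests
        jax.add("push", true);
        System.out.println(jax.add("pop", false)); // [testPopNew, testPopPush]
        System.out.println(jax.add("top", false)); // [testTopNew, testTopPush]
    }
}
```

Replaying the whole BST design through this sketch yields exactly the 12 axiom-derived tests counted in Sect. 2, which is the point: the incremental and all-at-once orderings cover the same crossings.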
A second way to apply JAX is to wait until the class is mostly designed, implemented, and stable, and then write a JAX cross test class as a way of "filling in the holes" if they exist, thereby making the regression test suite as complete a safety net as possible before continued system development. As a quick illustration of this approach, we examined the bowling scorer code and JUnit tests found in Martin and Koss [17]. We set up the code and tests from the paper, and created a second JUnit test class (JaxTestGame) to augment the ones they wrote (TestGame). In class Game there are 3 public methods (add, score, scoreForFrame) in addition to the constructor (New). The canonical methods (necessary state creators) are add and New. Portions of the cross test methods are given here (score x New, score x Add, scoreForFrame x New, scoreForFrame x Add):

public void testScoreNew() {
    assertEquals(0, g.score());             // g is new Game() from setup
}

public void testScoreAdd() {
    g.add(5);
    assertEquals(0, g.score());             // score is not avail until frame ends
    g.add(2);
    assertEquals(7, g.score());             // ok, legal score here
    g.add(14);
    assertEquals(7, g.score());             // should not be able to add more than 10
                                            // but note… frame is not over so 14 is not
                                            // treated as strike… == 10 buried someplace
}

public void testScoreForFrameNew() {
    assertEquals(0, g.scoreForFrame(0));
    assertEquals(0, g.scoreForFrame(1));
    assertEquals(0, g.scoreForFrame(10));
    assertEquals(0, g.scoreForFrame(11));   // exception gened but not handled
}

public void testScoreForFrameAdd() {
    g.add(10);
    assertEquals(10, g.scoreForFrame(1));
    g.add(10);
    assertEquals(20, g.scoreForFrame(1));
    g.add(20);                              // not a legal pin count but it is allowed
                                            // by the object; let's see how it affects
                                            // the score
    assertEquals(40, g.scoreForFrame(1));   // it adds it in, according to pins +
                                            // pins of next two frames
    g.add(10);
    assertEquals(80, g.scoreForFrame(2));
    g.add(10);
    assertEquals(110, g.scoreForFrame(3));  // should be 120 if it's adding pins
                                            // 80 + 20 + 10 + 10
                                            // this means constant 10 is buried in code
}
Class JaxTestGame uncovered two or three design issues in the Game class. As shown in testScoreAdd, there is no prohibition on adding more than 10 pins per frame. This results in impossible scores. It also shows that a pin count greater than 10 is not treated as strike or spare, meaning a direct comparison to 10 is somewhere in the code (equality comparison vs. inequality). As shown in testScoreForFrameAdd, we see that if we add an illegal number of pins, the algorithm is not correctly adding the pin counts for the frames when scoring marks; instead it is adding the constant 10, so a design decision could be re-thought. Note also the uncovering of a failure to
make a range check on the frame number… the test to get the score for frame 11 generates an exception (array bounds) that is not handled. Of course, the way to eliminate these issues is to have the bowling game or scoring object check its input for correct ranges. The authors specifically noted their concern over this, and chose to leave it out since use of the program was to be completely in their control ("we just won't call it with an 11"). So these are not flaws per se; rather they are illustrations of how JAX cross tests can semi-mechanically uncover such omissions when they have not been deliberately set aside. Omissions are quite common in program design [18]. By definition, they are errors that escape one's thinking. A systematic method like JAX can help find omissions by covering the behavior space of a class, as a partially mechanical supplement to the omission-prone raw thinking done during design.
5 Discussion and Conclusions
We have presented JAX, a systematic procedure for generating consistent and complete collections of test methods for a JUnit test class. JAX is based on a formal semantics for abstract data types and exploits method interactions rather than keying on individual methods. Theoretical results from formal semantics indicate that the interactions generated by JAX cover the possible behaviors of a class. JAX is unique in that formal methods are used for guidance, but the programmer is not saddled with formalisms; effective use of JAX requires only Java. The method is automatable in at least two ways for designers who don't mind writing formal specifications.
Our preliminary studies show that JAX-based test classes find more errors in the target class than the basic method we compared it with (writing one test class method for each method in the target class). These studies were initial investigations only and were not blind controlled experiments. Though the results were encouraging, more thorough and controlled experiments need to be performed for the findings to be considered conclusive. We are pursuing such studies with large Java applications (30K–50K lines) obtained from the EPA which were developed with JUnit testing. Our follow-on studies involve writing JAX-based test classes to compare to the JUnit classes supplied with the EPA applications. The JUnit classes provided with these applications are not strictly written according to the simple one-test-method-per-target-method approach; rather, they are supplemented in an ad hoc fashion with test methods deemed needed by the programmers but not corresponding directly to any target method. We think comparing JAX tests to these JUnit tests will be a more effective study of the thoroughness of the JAX procedure.
We are not suggesting that all classes be tested this way. However, we think that certain forms of classes lend themselves well to the approach.
Classes that are highly algorithmic, for example, or that have little GUI interactivity, are especially well suited to it. We do not wish to rob agile methods of their flexibility and responsiveness; however, thorough testing is needed for agile development as much as for traditional processes. JAX is a mostly mechanical procedure for producing thorough collections of test methods. Another issue we have not studied is the impact JAX would have on refactoring. More test methods means more code to move or alter as one refactors target classes. It is not clear how difficult it is to refactor JAX tests compared to other JUnit tests.
David Stotts, Mark Lindsey, and Angus Antley
Automation could help, as axioms/specs might be easier to move than full tests; the tests could be regenerated once the specs were relocated in the refactored code.
Acknowledgements

This research was supported by a grant from the United States Environmental Protection Agency, project #R82–795901–3. We also thank the referees, and specifically Brian Marick and Dave Thomas, for helpful comments and suggestions for revising the presentation of this work.
References

1. Beck, K., and Gamma, E., "JUnit Test Infected: Programmers Love Writing Tests," Java Report, 3(7), July 1998. Available on-line at: http://JUnit.sourceforge.net/doc/testinfected/testing.htm
2. Beck, K., and Gamma, E., "JUnit: A Cook's Tour," Java Report, 4(5), May 1999. Available on-line at: http://JUnit.sourceforge.net/doc/cookstour/cookstour.htm
3. Beck, K., and Gamma, E., "JUnit Cookbook." Available on-line at: http://JUnit.sourceforge.net/doc/cookbook/cookbook.htm
4. Beck, K., "Extreme Programming Explained," Addison-Wesley, 2000.
5. Guttag, J.V., and Horning, J.J., "The Algebraic Specification of Abstract Data Types," Acta Informatica, 10 (1978), pp. 27-52.
6. Guttag, J., Horowitz, E., and Musser, D., "Abstract Data Types and Software Validation," Communications of the ACM, 21, Dec. 1978, pp. 1048-1063.
7. Guttag, J., "Notes on Type Abstraction," IEEE Trans. on Software Engineering, SE-6(1), Jan. 1980, pp. 13-23.
8. Gannon, J., McMullin, P., and Hamlet, R., "Data Abstraction Implementation, Specification, and Testing," ACM Trans. on Programming Languages and Systems, 3(3), July 1981, pp. 211-223.
9. Hughes, M., and Stotts, D., "Daistish: Systematic Algebraic Testing for OO Programs in the Presence of Side Effects," Proceedings of the 1996 International Symposium on Software Testing and Analysis (ISSTA), January 8-10, 1996, pp. 53-61.
10. Doong, R.-K., and Frankl, P., "The ASTOOT Approach to Testing Object-Oriented Programs," ACM Trans. on Software Engineering and Methodology, April 1994, pp. 101-130.
11. Hoffman, D., and Snodgrass, R., "Trace Specifications: Methodology and Models," IEEE Trans. on Software Engineering, 14(9), Sept. 1988, pp. 1243-1252.
12. Parnas, D., and Wang, Y., "The Trace Assertion Method of Module Interface Specification," Tech. Rep. 89-261, Queen's University, Ontario, Oct. 1989.
13. Guttag, J., Horning, J., and Wing, J., "The Larch Family of Specification Languages," IEEE Software, 2(5), Sept. 1985, pp. 24-36.
14. Wing, J., "Writing Larch Interface Language Specifications," ACM Trans. on Programming Languages and Systems, 9(1), Jan. 1987, pp. 1-24.
15. Wing, J., "Using Larch to Specify Avalon/C++ Objects," IEEE Trans. on Software Engineering, 16(9), Sept. 1990, pp. 1076-1088.
16. Proceedings of the Software Design and Management Conference 2001: Software Pioneers, Bonn, Germany, June 2001; audio/video streams of the talks can be viewed at http://www.sdm.de/conf2001/index_e.htm
17. Martin, R., and Koss, R., "Engineer Notebook: An Extreme Programming Episode," http://www.objectmentor.com/resources/articles/xpepisode.htm
18. Marick, B., "Faults of Omission," Software Testing and Quality Engineering, Jan. 2000, http://www.testing.com/writings/omissions.html
An Informal Formal Method for Systematic JUnit Test Case Generation
Appendix A

The full set of axioms for the bounded set BST is given here, using the functional language ML as our formal specification notation. These specs are fully executable; download a copy of SML 93 and try them. The datatype definition is where the canonical constructors are defined. The axioms are ML function definitions, using the pattern-matching facility to create one pattern alternative for each axiom.

(* Algebraic ADT specification: full axioms for BST (bounded set of int) *)

datatype BST = New of int
             | push of BST * int ;

fun isEmpty (New(n))    = true
  | isEmpty (push(B,e)) = false ;

fun maxSize (New(n))    = n
  | maxSize (push(B,e)) = maxSize(B) ;

fun getSize (New(n))    = 0
  | getSize (push(B,e)) = if getSize(B) = maxSize(B)
                          then maxSize(B)
                          else getSize(B) + 1 ;

fun isFull (New(n))    = (n = 0)
  | isFull (push(B,e)) = if getSize(B) >= maxSize(B) - 1
                         then true
                         else false ;

exception topEmptyStack ;

fun top (New(n))    = raise topEmptyStack
  | top (push(S,e)) = if isFull(S) then top(S) else e ;

fun pop (New(n))    = New(n)
  | pop (push(S,e)) = if isFull(S) then pop(S) else S ;
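For readers who think in Java rather than ML, the axioms can also be sketched as a plain Java class together with one interaction-style check of the kind JAX would turn into a JUnit test method. This is an illustrative sketch only: the class and method names (BoundedStack, popUndoesPush) are our own and are not taken from the paper's target class.

```java
// Hypothetical Java counterpart to the ML axioms above (illustrative only).
class BoundedStack {
    private final int[] items;
    private int size = 0;

    BoundedStack(int max) { items = new int[max]; }

    boolean isEmpty() { return size == 0; }
    boolean isFull()  { return size == items.length; }
    int maxSize()     { return items.length; }
    int getSize()     { return size; }

    // Per the axioms, a push onto a full stack is silently discarded.
    void push(int e) { if (!isFull()) items[size++] = e; }

    int top() {
        if (isEmpty()) throw new IllegalStateException("topEmptyStack");
        return items[size - 1];
    }

    void pop() { if (!isEmpty()) size--; }

    // One interaction-style check in the spirit of a JAX-generated test
    // method: axiom pop(push(S,e)) = S when S is not full.
    static boolean popUndoesPush() {
        BoundedStack s = new BoundedStack(3);
        s.push(1);                      // S = push(New(3), 1)
        int sizeBefore = s.getSize();
        int topBefore = s.top();
        s.push(99);                     // S is not full, so a push...
        s.pop();                        // ...followed by a pop restores S
        return s.getSize() == sizeBefore && s.top() == topBefore;
    }
}
```

The discard-on-full behavior of push mirrors the isFull and pop axioms: a push onto a full stack is not observable, which is exactly the method interaction a JAX-style test exercises.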
A Light in a Dark Place: Test-Driven Development with 3rd Party Packages

James Newkirk

Thoughtworks, Inc., 651 W. Washington, Suite 600, Chicago, Illinois, USA
jnewkirk@thoughtworks.com

Abstract. Test-Driven Development is often described in the context of writing new software or adding functionality to existing pieces of software. This paper examines the role that Test-Driven Development can play when working with 3rd party packages. The paper documents using test-driven development to explore, understand, and verify the functionality of an existing class library.
1 Introduction
Test-Driven Development (TDD) is often spoken and written about, and used, in the context of writing new software or adding functionality to an existing piece of software. In fact, the two simple rules of TDD are as follows [5]:
• Do not write a single line of code unless you have a failing automated test
• Eliminate Duplication
These rules, however, leave most people thinking that the technique cannot be applied when working with a 3rd party package. This paper describes the following extension of the rules to cover 3rd party packages:
• Write a test for all 3rd party library functionality before you use it
Writing tests for an existing 3rd party library serves at least two purposes. The first purpose is discovery. During this initial period of discovery the programmer is trying to find out whether the software has the functionality needed and how difficult the library is to use. Once the initial period has ended and a decision has been made to use the library, the tests serve another purpose: verification. The tests written during discovery view the 3rd party library as a black box. For additional information about techniques for black box testing, see [1]. Over the duration of the project, additional tests are added as more functionality of the library is used and as problems are encountered. The growing body of tests instills a great deal of confidence in the library and also proves to be a very useful resource when upgrading to new versions of the library.

1.1 Task Description
The task that forms the basis of this paper was to read an existing XML document and build Java objects from its contents. The objects would then be manipulated in memory; once the modifications were complete, they were to be serialized back into the same XML document on disk. The programming language used is Java, JDK 1.3.1. The example described in this paper is not the actual code but is similar in spirit.

D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 144–152, 2002. © Springer-Verlag Berlin Heidelberg 2002

1.2 XML Format
The format for the XML Document was an existing format that is used by other programs so its structure is well known and would be difficult to change. It is a simple structure with a root element “albums” which can contain many “album” elements. Each “album” element has a name and an artist attribute along with a “tracklist” element. The “tracklist” element contains an arbitrary number of “track” elements. Each “track” element has a name and a time attribute. The structure is as follows:
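A minimal illustrative document matching this description (the album, artist, and track values are invented for illustration) is:

```xml
<albums>
  <album name="Kind of Blue" artist="Miles Davis">
    <tracklist>
      <track name="So What" time="9:22"/>
      <track name="Blue in Green" time="5:37"/>
    </tracklist>
  </album>
  <album name="A Love Supreme" artist="John Coltrane">
    <tracklist>
      <track name="Acknowledgement" time="7:43"/>
    </tracklist>
  </album>
</albums>
```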
1.3 Enter JDOM
In order to read and write the XML document we chose to look into using the JDOM class library [2]. JDOM was chosen after reviewing the documentation and reading a number of articles. The articles and documentation described a class library for XML that was designed to be more natural for Java programmers reading and writing XML files than either DOM or SAX, which are meant to be language-independent. JDOM Beta3 was the first version of the library that was used in this project. During the development of the rest of the project, Beta6 and Beta7 were released and used on the project. The development of this software proceeded from the beginning of 2000 through the end of 2001.
2 Library Workings Discovered
As stated above, the choice of JDOM was based on some articles and the documentation available on the web site. This is often not good enough, because many questions still need to be answered; chiefly, will the existing code perform as expected when working with the project's code rather than with the example programs? Our main goal, however, is to gain some insight into how to write code that will use the library. The tool of choice is to write some tests in JUnit [3] that demonstrate, in code, that the library works as expected; code is the only true means of feedback for this activity. These tests will not only indicate that the library works, they will also provide a working example for the actual software that will be written to work with the library. The guide for the tests mimics the steps that will be needed when the actual software is written.

2.1 The To-Do List [5]

The first activity when doing test-driven development is to plan out the scope of the task, one test at a time. In this task the tests fall into three categories: reading in an XML document and creating Java objects, writing the contents of Java objects out to an XML file, and a complete roundtrip. Inside each category there are individual tests. For example, the first category includes building an XML document from a file, testing for the root element "albums", retrieving the list of "album" elements, etc. The following sections describe these tests.

2.2 The First Test
The object of the first test is to figure out how to read in an existing XML document and verify that it is well-formed XML.

public void testFileLoad() throws JDOMException {
    Document doc = null;
    Builder builder =
        new SAXBuilder("org.apache.xerces.parsers.SAXParser");
    File file = new File("albums.xml");
    doc = builder.build(file);
    assertNotNull(doc);
}
This test is written using the JUnit testing framework. The test demonstrates that an org.jdom.Document can be created from a file. If a JDOMException occurs while running the test, the test fails. The test compiled and ran successfully. We now knew how to read in XML documents and build org.jdom.Document objects using JDOM. Let's move on.

2.3 First Look at the Content
The previous test verified that we could read a file and that the XML document was well formed. In this test we go one small step further and check that the root element of the document is "albums". Since we need to load a document in this test as well as in the previous one, the document-loading code was extracted into a separate method, DocumentLoader.load("filename").
public void testRoot() {
    try {
        Document doc = DocumentLoader.load("albums.xml");
        assertEquals("albums", doc.getRootElement().getName());
    } catch (JDOMException e) {
        fail("could not test the root element");
    }
}
The point of this test is to verify that when we read in the XML document it is the correct one, and that we know how to get the root element. At each step of the way the knowledge of how to use the library increases and our confidence grows.

2.4 An Interesting Experience
In the next test the goal was to retrieve the "album" elements nested under the root element "albums". The example file that was used contained 2 "album" elements. Once again we needed to do a little refactoring, because of some code duplication: testRoot and testAlbumCount both need to load the same document. In JUnit this is best achieved by moving common code to the setUp method. The following is the code after the change was made.

private Document doc;
private Element root;

protected void setUp() throws Exception {
    doc = DocumentLoader.load("albums.xml");
    root = doc.getRootElement();
}

public void testRoot() {
    assertEquals("albums", root.getName());
}

public void testAlbumCount() {
    List albums = root.getChildren("album");
    assertEquals(2, albums.size());
}
When we ran these two tests we fully expected both to pass. They did not. The testAlbumCount method failed, reporting that the number of elements was 6. We thought this was very strange. First we checked the documentation and a couple of samples and verified that we were calling the function correctly. That was not the problem. We then checked the XML file, thinking that maybe there were elements in the file that we could not see, such as text elements containing non-printable characters. This is a problem we have had in the past when working with the DOM and SAX parsers. This was also not the problem. One of us then searched the JDOM mailing list archive. Sure enough, there was a problem in Beta3 when calling the getChildren function. Someone had graciously provided a patch. The patch was downloaded and the library recompiled.
Once the JDOM library was rebuilt, the tests were run against the patched version. There were some concerns about this fix breaking other parts of the library, but it certainly did not break any parts of the library that we were currently using. How did we know this? We reran the tests and none of them failed. This could easily have uncovered a problem that forced a rethinking of the decision to use JDOM. It did not, and the discovery continued.

2.5 Finishing Discovery
How do we know when we are done? We review the to-do list. The categories before us state that we have to read in an XML file, build corresponding Java objects, and then serialize the Java objects out to an XML file. Therefore, we would need to approximate all of this functionality in tests. To complete the discovery, a number of tests were added to verify that reading the content of the XML file was correct. This proceeded without incident; the library worked as advertised. The next step was to serialize in-memory objects out to an XML file. This involved building XML Documents in memory and then writing them to a file. Lastly, could we do the whole roundtrip? When doing this we found ourselves writing the same tests over and over, just with different ways to obtain the XML document. For example, when reading the file we loaded the XML document from disk; when creating the document in memory there was no reason to load a file just to verify that the document was built correctly; and in the roundtrip scenarios we needed to do both. This led us to the following test code to assist in the process. We used the Template Method [4] to solve the problem of having to run the same tests against different source documents. The battery of tests is identical; the only difference is where the org.jdom.Document object comes from. The class shown below, TestDocumentContents, contains the tests that are run against the document. The class is defined as abstract because the method getDocument is abstract. In this manner the subclasses of TestDocumentContents can use whatever means necessary to provide the document.
public abstract class TestDocumentContents extends TestCase {
    private Element root;

    protected abstract Document getDocument() throws JDOMException;

    protected void setUp() throws Exception {
        Document doc = getDocument();
        root = doc.getRootElement();
    }

    public void testRoot() {
        assertEquals("albums", root.getName());
    }

    // and many more tests
}
The first subclass of TestDocumentContents was TestFileContents. This class loads an XML Document from a file when getDocument is called, allowing the battery of tests to be run against an XML document loaded from a file.

public class TestFileContents extends TestDocumentContents {
    protected Document getDocument() throws JDOMException {
        JDOMFile file = new JDOMFile("albums.xml");
        return file.load();
    }
}
One consequence of having the tests work in this fashion is that all of the test data has to look the same. For example, the XML document in the file "albums.xml" has to be identical to the XML document that is built in memory. To accomplish this there had to be an in-memory version of the XML document described in "albums.xml". The following class creates the mock contents and returns a reference to the built in-memory version of the document.

public class TestMockContents extends TestDocumentContents {
    protected Document getDocument() throws JDOMException {
        return MockContents.makeMockXml();
    }
}
Once again the same set of tests is run and passes, and it is time to move on. The last class that uses the Template Method is the roundtrip class. The following class, TestRoundtrip, first creates an empty file, then creates a mock version of the XML and serializes it out to that file. Once complete, it reloads the document and returns it to the base class to be run against the suite of tests.

public class TestRoundtrip extends TestDocumentContents {
    protected Document getDocument() throws JDOMException {
        JDOMFile file = new JDOMFile("tmp1.xml");
        Document mockDoc = MockContents.makeMockXml();
        try {
            file.store(mockDoc);
        } catch (IOException e) {
            // would be logged in non-test code
        }
        Document loadedDoc = file.load();
        return loadedDoc;
    }
}
2.6 Discovery Conclusions
The discovery yielded many positive outcomes. Foremost, from the project perspective, JDOM appeared to work as advertised, with a caveat for the problem encountered with getChildren. We had an excellent understanding of how to use JDOM for our task, and in addition we had a verifiable working example, in our domain, of how to use the library. Based on this information we decided to move forward with JDOM as the library. Upon completion there were 35 tests run against the library. This does not come close to testing the entire library; many more capabilities existed for which no tests were written. Why not, you might ask? They are not used in this program. One of the mantras of Test-Driven Development is that you should "test everything that could possibly break". When working with existing code this could be extended to "test everything that could possibly break, in the code that we use". Trying to test the entire library would not be cost effective, and there is no need, since we do not use the entire library. In fact, the battery of tests written for this discovery period took only approximately 2 days. These 2 days were well spent given the benefits described above. However, as we found out next, there are even more benefits.
3 Upgrading to New Versions
Over the duration of the project, new versions of the JDOM library were released. In many software projects there is genuine fear in upgrading to new versions of libraries. Why is that? There is no way of knowing that the new version works like the old one. Given the battery of tests, that was not the case for this project.

3.1 Upgrading to Beta6
The first upgrade to JDOM was from Beta3 to Beta6. The new version was downloaded and the test suite was compiled against it. There were 25 compile errors. They ranged from classes that no longer existed (NoSuchElementException, NoSuchAttrException, and Builder) to methods that had been removed. The approach used at this point was once again to go test by test from the beginning, fixing the test code. This was simple and straightforward, and most important of all, we did not really need to understand the whole scope of the library's changes, just the parts that we were using. The most interesting changes were related to NoSuchElementException and NoSuchAttrException. In the previous version, code calling the method getChild had to catch an exception if the child element was not present; in the new version the method returns null instead. Overall the upgrade took a couple of hours and the switch was made. The switch was made with a great deal of confidence, due entirely to the battery of tests written during discovery.
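The shape of that API change can be sketched with a stand-in class. This Element is a stub written for illustration, not the real JDOM class: the point is only that code which caught an exception against Beta3 becomes a null check against Beta6.

```java
import java.util.HashMap;
import java.util.Map;

// Stub standing in for a JDOM-like element; NOT the real JDOM API.
class Element {
    private final Map<String, Element> children = new HashMap<>();

    Element addChild(String name) {
        Element child = new Element();
        children.put(name, child);
        return child;
    }

    // Beta6-style behavior: return null for a missing child instead of
    // throwing a NoSuchElementException as the Beta3 API did.
    Element getChild(String name) {
        return children.get(name);
    }
}

class GetChildDemo {
    // The null check that replaces the old try/catch in calling code.
    static boolean hasTrackList(Element album) {
        return album.getChild("tracklist") != null;
    }
}
```

A battery of tests makes exactly this kind of semantic change visible: the compiler catches removed classes, while the tests catch behavior that changed shape.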
3.2 Upgrading to Beta7
The last upgrade done on this project was from Beta6 to Beta7 of JDOM. Given the experience from the previous upgrade there was very little fear in upgrading. The same process was followed: the new version was downloaded and the test code was compiled against it. This time there were only 4 warnings: the function addAttribute had been deprecated. All the tests passed even without changing the software. Reading the documentation indicated that calls to addAttribute should be changed to setAttribute. The four changes were made, the code was recompiled, and the tests were run. The upgrade was done.
4 Conclusions
In the introduction it was proposed that the rules of test-driven development be extended as follows:
• Do not write a single line of code unless you have a failing automated test
• Eliminate Duplication
• Write a test for all 3rd party library functionality before you use it
During this project the rule extension turned out to be very useful. The techniques were beneficial in discovering how the library worked and verifying that it was suitable for its intended purpose. In this case, working with the tests proved invaluable in identifying a problem and also provided excellent sample code. Once the tests were written, deciding to use the library was simple. Over time the tests were used again as a resource to facilitate upgrading to new versions of the library. The tests provided an objective means of determining how easy or hard an upgrade would be, as well as the scope of the changes. It turned out that the upgrade to Beta6 was more difficult than the upgrade to Beta7. The important thing to keep in mind is that upgrading would have been a difficult task without the tests. Does this mean that one should always write tests for all 3rd party software? The answer lies somewhere in the complexity of the library and in assumptions about how much the library will change. In the case of JDOM, the software was at Beta3. It was a good bet that it would change, and also a good bet that it might not work correctly. Given these circumstances the choice would be to use test-driven development techniques again.
References

1. Raishe, T.: Black Box Testing (http://www.cse.fau.edu/~maria/COURSES/CEN4010SE/C13/black.html)
2. http://www.jdom.org
3. http://www.junit.org
4. Gamma, E., et al.: Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley (1995)
5. Beck, K.: Test-Driven Development by Example, 25 March 2002 Draft
6. Jeffries, R., Anderson, A., Hendrickson, C.: Extreme Programming Installed, Addison-Wesley (2001)
7. Newkirk, J., Martin, R.: Extreme Programming in Practice, Addison-Wesley (2002)
8. Beck, K.: Extreme Programming Explained, Addison-Wesley (2000)
Agile Meets CMMI: Culture Clash or Common Cause?

Richard Turner (1) and Apurva Jain (2)

(1) The George Washington University, Department of Engineering Management and Systems Engineering, School of Engineering and Applied Sciences, Washington, DC 20052; turner@seas.gwu.edu
(2) University of Southern California, Center for Software Engineering, Computer Science Department, Los Angeles, CA 90089-0781; apurvaj@sunset.usc.edu
Abstract. This paper is based on a workshop held at the University of Southern California Center for Software Engineering in March 2002. The components of the Capability Maturity Model Integration (CMMI℠) Systems Engineering/Software Engineering/Integrated Product and Process Model(1) are evaluated for their support of agile methods. We also present a set of dualistic concepts differentiating the approaches. The results help to identify and reduce the level of mythology and establish common ground for further discussion.
1 Introduction

If one were to ask a typical software engineer whether the Capability Maturity Model for Software (CMM®) [1] and process improvement were applicable to agile methods, the response would most likely range from a blank stare to hysterical laughter. Although attempts to reconcile the positions appear in the literature [2-4], the two approaches have been informally characterized as having the same relationship as oil and water. In a recent workshop, the idea of insurmountable differences was challenged. The result of the challenge was an exercise that compared each component of the Capability Maturity Model Integration Systems Engineering, Software Engineering and Integrated Process and Product Development (CMMI℠-SE/SW/IPPD) model [5, 6] with agile concepts. Additional characterizations of the two approaches, beyond oil and water, were also developed, and a survey of the level of agreement was performed.
2 A Short Primer on Process Improvement and CMMI
Process improvement grew out of the quality movement and the work of Crosby [7], Deming [8], and Juran [9], and is aimed at increasing the capability of work processes. By increasing the capability of its processes, an organization becomes more mature and so operates at a higher level of effectiveness.

(1) The following Carnegie Mellon University service marks and registered marks are used in this paper: Capability Maturity Model®, CMM®, CMM Integration℠, and CMMI℠.
D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 153–165, 2002. © Springer-Verlag Berlin Heidelberg 2002
One means of achieving this focus on process is by using a capability model to guide and measure the improvement. Assessments against the model provide findings that initiate corrective actions which result in better processes. Models often are organized so that there is a proven, well-defined order by which processes are improved based on the experience of successful projects and organizations. The first model of this type was the Software Engineering Institute’s Capability Maturity Model for Software. The latest in capability model thinking is represented in the Capability Maturity Model Integration (CMMI) effort and the product suite it has developed. CMMI is essentially a set of requirements for engineering processes, particularly those involved in product development. It consists of two kinds of information – process areas (PAs) that describe the goals and activities that make up process requirements in a specific focus area, and generic practices (GPs) that are applied across the process areas to guide improvement in process capability. The process areas include requirements for
• basic project management and control
• basic engineering life cycle processes
• fundamental support processes
• process monitoring and improvement processes (similar to SW-CMM)
• integrated development using teams

The second type of information CMMI provides is a set of generic practices that support the improvement of the processes established under the Process Areas. The generic practices are associated with a six-level capability scale that describes relative capabilities as follows:

0. Not performed (not even doing the basics)
1. Performed (just doing it)
2. Managed (fundamental infrastructure to accomplish the process, generally at the project level)
3. Defined (institutionalizes a standard version of the process for tailoring by projects)
4. Quantitatively managed (uses quantitative measures to monitor and control selected sub-processes)
5. Optimizing (constant adaptation of processes based on quantitative measures)

CMMI users apply these two kinds of information to establish, refine, and manage the processes used to meet organizational goals.
3 Methodology
A workshop on agile methods was held as part of the annual review of the research conducted by the Center for Software Engineering, located at the University of Southern California. Over 40 participants attended, including researchers, research sponsors, and affiliates as well as invited experts on agile methods. One of four breakout groups was asked to look at agile methods in the context of CMMI and process improvement. This sub-group included members from government research facilities, developers, a government sponsor, an agile methods expert, and academics. The sub-group members had expertise in agile methods and CMMI.
The initial task of the sub-group was to classify each CMMI component as in conflict with, of no consequence to, or supportive of agile methods in general. This is essentially the reverse of the approach taken by Robert Glass [4]. The second task was to identify and capture significant conceptual differences and similarities between the two approaches. This was accomplished by brainstorming, discussion, selection, and revision. The Agile Manifesto [10] served as the basis for analysis, but information from practitioners of a particular agile method was included if it seemed relevant. Recognizing that agile methods differ in many ways, this approach was viewed as an expedient way to resolve issues given the time constraints of the break-out group. The sub-group results were reported to the larger body, provoking considerable discussion. In light of this response, an informal survey was developed and distributed to the workshop participants to roughly measure the degree of agreement with the break-out group's findings. The survey, made available a week after the conference, was completed by 19 of the attendees. No demographic information was collected.
4 Component Comparison
The CMMI components considered by the sub-group were Process Areas and Generic Practices. For each component, a finding was determined, characterized as follows:

− Conflicts (C). The CMMI requirement is a barrier to implementing agile methods
− Neutral (N). The CMMI requirement does not impact implementing agile methods
− Supports (S). The CMMI requirement is an enabler to implementing agile methods

Where the survey results showed no majority, the two findings with the largest percentages are indicated, separated by a dash (e.g., C-N for Conflicts and Neutral). The degree of agreement is based on the results of the informal survey and is scaled according to the percentage of survey respondents that agreed with the finding:

− Strong (S). 75% or greater
− Medium (M). 50% to 74%
− Low (L). 25% to 50%
− None (N). Below 25%

Tables 1 and 2 summarize the results of the agile methods to CMMI component mapping. Notes on the individual component findings may be found in the appendix.

4.1 Summary of CMMI Component Comparison
Of the 40 components analyzed by the sub-group and validated by the larger group, the results can be summarized as follows:
• 7 components are seen as clearly in conflict
• 10 components are seen as possibly in conflict
• 11 components are seen as clearly supportive
• 11 components are seen as no worse than neutral
• 1 component had no consensus finding
Table 1. Agile method vs. CMMI Process Area conflict findings

Process Area                                  Sub-Group  Survey  Agreement
Organizational Process Focus                  C          C       M
Organizational Process Definition             S          C-N     N
Organizational Training                       S          N-S     L
Organizational Process Performance            C          C       M
Organizational Innovation and Deployment      C          C-S     L
Project Planning                              S          S       M
Project Monitoring and Control                N          S       L
Supplier Agreement Management                 N          N       M
Integrated Project Management                 S          S       M
Risk Management                               S          N       L
Integrated Teaming                            S          S       H
Quantitative Project Management               N          C       N
Requirements Management                       S          N       L
Requirements Development                      S          S       M
Technical Solution                            S          S       M
Product Integration                           S          S       M
Verification                                  S          S       M
Validation                                    S          S       M
Configuration Management                      N          None    L
Process and Product Quality Assurance         N          C-N     L
Measurement and Analysis                      N          C-N     L
Decision Analysis and Resolution              C          C       M
Organizational Environment for Integration    S          S       M
Causal Analysis and Resolution                N          N       M
Table 2. Agile method vs. CMMI Generic Practice conflict findings CMMI Generic Practices 2.1 Establish an Organizational Policy 2.2 Plan the Process 2.3 Provide Resources 2.4 Assign Responsibility 2.5 Train People 2.6 Manage Configurations 2.7 Identify and Involve Relevant Stakeholders 2.8 Monitor and Control the Process 2.9 Objectively Evaluate Adherence 2.10 Review Status with Higher Level Management 3.1 Establish a Defined Process 3.2 Collect Improvement Information 4.1 Establish Quantitative Objectives for the Process 4.2 Stabilize Subprocess Performance 5.1 Ensure Continuous Process Improvement 5.2 Correct Root Causes of Problems
Sub-Group Finding N S S S S S S N C N N C C C S N
Survey Finding N-S N-S N-S S N C-S S N C N-S C C N C-N C-N N
Agreement L L L M L L H M M L N M L L L M
Only 17 of the 40 components are considered in conflict or possible conflict. Twenty-two components were seen as supportive of, or neutral toward, agile methods. The components deemed in conflict were primarily those that address organizational process, which makes sense given the project focus of agile methods.
Agile Meets CMMI: Culture Clash or Common Cause?
Many of the supportive components were fundamental project management activities that must be performed in some fashion for any successful project. Agile activities also mapped well to the product development activities. The generic practices were mixed in their support, which we believe reflects their process improvement focus.

4.2 Inter-group Dissension
It should be noted that the component mappings represent multiple perceptions from both sides of the divide. There were two distinct groups from the CMMI school – a conservative, by-the-letter group and a liberal, concepts-oriented group. Likewise, there were “conservative” agilists who were extremely rigid in their definitions and liberal agilists who saw the value of comparisons and hybridization. This is borne out in the outlier data on both sides that prevented full consensus from being reached.

4.3 CMMI Component Interaction
The results ignore the interactions between the GPs and the PAs; enabling PAs such as Configuration Management, for example, provide ways to implement the GPs. It might have been better to have paired these in order to show their association. In a similar vein, the IPPD extension affects nearly all of the PAs and how they are accomplished. If we had explicitly modified the purpose and goal language to express the IPPD implications of integrated development and team relationships, the results might have shown more supportive or neutral components.
5 Conceptual Comparison
The sub-group defined a number of conceptual characterizations of the differences and similarities between agile methods and CMMI. Some of these were obvious, but others indicated fundamental differences that are not as immediately evident. The survey of the larger group asked the respondents to identify the level to which they agreed with each of the characterizations on a scale from 1 to 7, with 1 representing total disagreement.

5.1 Differences
In looking at the two paradigms, the sub-group identified a number of areas where CMMI and agile had strikingly different and often quite enlightening characterizations. Survey results show that there is general (although not always strong) agreement from the respondents with the workgroup's characterizations. The following paragraphs discuss each of the differences.

What Provides Customer Trust:
CMMI - Process Infrastructure
AGILE - Working S/W, Participation
Agreement level: 5.7 out of 7 (.81)
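The parenthetical figure after each agreement level appears to be simply the mean rating divided by the 7-point maximum; this reading is our inference, as the paper does not state it. Every pair quoted in this section is consistent with it:

```python
# (mean rating, quoted fraction) pairs from the agreement levels in Section 5.
pairs = [(5.7, 0.81), (5.6, 0.80), (5.8, 0.83), (5.2, 0.74),
         (4.2, 0.60), (5.5, 0.79), (4.6, 0.66), (4.4, 0.63)]

for rating, quoted in pairs:
    # Each quoted value equals rating / 7, rounded to two decimal places.
    assert round(rating / 7, 2) == quoted
print("all quoted fractions equal rating / 7")
```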
This pair stems from the observation that process people count on their process maturity to provide confidence in their work. CMM appraisals are often used in source selection for large system implementation or for sourcing decisions. Agile people use the idea of working software and customer participation to instill trust. In proposals they use their track record, the systems they’ve developed, and the expertise of their people to demonstrate capability.

Scope of Approach:
CMMI - Broad, Inclusive and Organizational
AGILE - Small, Focused
Agreement level: 5.6 out of 7 (.80)

This observation is based on the premise that CMMI covers a broader spectrum of activities than agile methods. CMMI looks to develop the capabilities of an organization in a number of disciplines, including systems engineering. Agile methods are generally used on smaller projects and concentrate on delivering a software product on time that satisfies a customer.

Where Knowledge Created during Projects (Lessons Learned, etc.) Resides:
CMMI - Process Assets
AGILE - People
Agreement level: 5.8 out of 7 (.83)

The management and retention of knowledge is key to organizational survival. In process-oriented organizations this knowledge is maintained in the process assets (including process definitions, training materials, etc.). This supports uniformity across projects and comparability for measurement. Agile methods generally focus on a project rather than an organization and maintain their experience in the people doing the work. As these people work on more and more tasks, that knowledge is shared across the organization.

Practitioner and Advocate Characteristics:
CMMI - Disciplined, Follow Rules and Risk Averse
AGILE - Informal, Creative and Risk Takers
Agreement level: 5.2 out of 7 (.74)

Here the differences are in the perceived mindset of the practitioners and supporters of the two approaches. CMMI supporters are often characterized as rigid, structured, and bureaucratic. Agile supporters are seen as freer spirits who let their talents flow unfettered into the work and don’t worry about exactly how the work gets accomplished, so long as it meets the customer’s needs.

Scaling Challenges:
CMMI - Scaling down -- Doable, but Difficult
AGILE - Scaling up -- Undefined
Agreement level: 5.6 out of 7 (.80)

While there can be debate on this issue, CMMI is generally seen as appropriate for large projects and complex organizations, but difficult to apply to small companies or individual teams. Agile is seen as wonderful for small organizations and small, separable projects, but its ability to scale up to larger projects and organizations has been widely questioned.
Operational Organization:
CMMI - Committees
AGILE - Individuals
Agreement level: 4.2 out of 7 (.60)

This observation saw differences in how the two approaches accomplish work. In process-based organizations, decisions and work are usually done in committee, and decision authority and responsibility may be dispersed. Process-based organizations often have highly specific chains of command, decision-making processes and requirements, and other operational structures that require multiple sign-offs and long coordination times. Agile methods are observed to depend on the individual to accomplish tasks and on the team to make quick, informed, product-oriented decisions. The authority usually resides in the team doing the work, and there is little bureaucracy. It should be noted that this pair had the lowest agreement level in the survey. The reason may lie in the summary words used in the survey, which could be read as negating the collaborative nature of agile methods.

Goals of the Approach:
CMMI - Predictability, Stability
AGILE - Performance, Speed
Agreement level: 5.6 out of 7 (.80)

Process maturity (or capability) models are focused on predictability and stability. They are aimed at developing the capabilities of organizations rather than specifically delivering products, and seek to enable predictable performance regardless of the staff involved. Agile methods are focused on speed, performance, and delivering quickly to the customer. Agile practices are more opportunistic in nature – that is, they support rapid change to accommodate situational needs and environmental demands.

Communication Style:
CMMI - Macro, Organizational
AGILE - Micro, Person to Person
Agreement level: 5.5 out of 7 (.79)

In process-oriented organizations, common processes and training are seen to support communication across broad sections of the organization, using work products as the medium. Such artifacts provide traceability, can be used in process analysis, and provide history for later projects. Artifacts are also generally developed in conformance with a standard, which adds additional time and effort to the project. Agile methods tend to encourage frequent, person-to-person communication and specifically address only intra-project communication. Much of the communication is “as necessary” and has no lasting artifact. This allows rapid development, but can make recovery or later analysis more costly.

Usual Focus of Issue Resolution:
CMMI - Words
AGILE - Product
Agreement level: 5.2 out of 7 (.74)

This was probably the most interesting difference stated. When process people work a problem, there is an enormous amount of energy expended on defining the
specifics and finding just the right words for both the problem and a solution. The waterfall approach is evident in the way they work toward just the right description so that there is agreement and the results can be communicated clearly to a large group. Agile people tend to act first and talk later so they can get the product out. Rather than discuss an issue at length, people generally try something and see if it works. If it doesn’t, they keep trying until something does. The spiral or evolutionary nature of their thinking leads to a number of trial solutions which may refine the understanding of the problem.

5.2 Similarities
The sub-group identified two places where CMMI and agile methods found common ground.

Both Have Specific Rules
Agreement level: 4.6 out of 7 (.66)

It became obvious early in the discussions that process-oriented methods and some of the agile methods (particularly XP) have specific rules that must be followed. Agile tended to have considerably fewer rules, but the feeling of the group was that this made those few much more critical.

Both Are Motivated by the Desire to Become a High Performance Organization
Agreement level: 4.4 out of 7 (.63)

Both approaches are motivated by the desire to develop and maintain high performance organizations. Process approaches work in a more traditional industrial fashion, using the concepts of engineering and manufacturing to establish a well-defined machine of an organization. Agile uses more post-industrial ideas. Work is done within the specific context of the problem, and the goal is to establish experts with generalized talent who can form a team and deliver an acceptable product to the customer as quickly as possible.
6 Analysis and Conclusions
From the results of the component comparisons it is evident that while there are significant differences, the “oil and water” description of CMMI and agile approaches is somewhat overstated. It is also clear that while both represent methodologies of a sort – maturity models and appraisal techniques on one hand, agile tenets and practices on the other – the defining characteristic is the attitude or mindset under which development activities are accomplished. All of the conceptual comparisons were validated by the survey. While there were still outliers, our general opinion is that the differences between the agile and process worlds are beginning to be better articulated and thus better understood. Because these data are based primarily on perceptions, the authors currently have under way an empirical analysis of the conceptual pairs to determine the validity of the findings.
It is our belief that there is much in common between the two world views, and that their strengths and weaknesses are often complementary. We also believe that neither way is the “right” way to develop software or software-intensive systems. Rather, there are instances of projects, or phases of projects, when one or the other represents a significant advantage. While development organizations will almost certainly have a preferred manner of doing business, they should be able to identify and respond to these instances by adapting their work processes to the work at hand. We look forward to continuing discussion and to the results of the collaborative efforts and hybrid methods that are sure to appear in the near future.
References

1. Paulk, M.C., et al., Capability Maturity Model for Software, Version 1.1. 1993, Software Engineering Institute, Carnegie Mellon University: Pittsburgh, PA.
2. Paulk, M., Extreme Programming from a CMM Perspective. IEEE Software, 2001.
3. Boehm, B.W., Get Ready for Agile Methods, with Care. IEEE Computer, 2002.
4. Glass, R.L., Agile versus Traditional: Make Love not War. Cutter IT Journal, 2001. Vol. 14, No. 12 (December): p. 12-18.
5. Ahern, D.M., A. Clouse, and R. Turner, CMMI Distilled: A Practical Introduction to Integrated Process Improvement. The SEI Series in Software Engineering. 2001, Boston: Addison-Wesley. xv, 306.
6. CMMI Development Team, CMMI-SE/SW/IPPD, V1.1: Capability Maturity Model Integrated for Systems Engineering, Software Engineering and Integrated Product and Process Development, Version 1.1: Continuous Representation. 2001, Software Engineering Institute, Carnegie Mellon University: Pittsburgh, PA. p. 688.
7. Crosby, P.B., Quality is Free: The Art of Making Quality Certain. 1979, New York, NY: McGraw-Hill.
8. Deming, W.E., Out of the Crisis. 1986, Cambridge, MA: MIT Center for Advanced Engineering Study.
9. Juran, J.M., Juran on Planning for Quality. 1988, New York, NY: MacMillan.
10. Agile Alliance, Manifesto for Agile Software Development. 2001.
11. Schwaber, K. and M. Beedle, Agile Software Development with Scrum. 2002, Upper Saddle River, N.J.: Prentice Hall.
Appendix – Notes on CMMI Component Findings

CMMI Process Areas

Organizational Process Focus (Findings: C, C Agreement: M)
Conflict was based on the implied infrastructure needed to accomplish the goals. While agile processes evolve, they do so from their own experience, generally within a team or a project. Agile organizations do generally have an organizational process (the agile method or ecosystem) within which improvements take place.

Organizational Process Definition (Findings: S, C-N Agreement: N)
This process area caused some difficulty across the broader group. The sub-group accepted the idea of an agile method or ecosystem being essentially a process asset repository, albeit perhaps an informal one. While the larger group did not agree, it was evenly divided between conflict and neutrality.
Organizational Training (Findings: S, N-S Agreement: L)
Agile methods rely on practitioners trained in the method. The agile manifesto values individuals over processes and tools, and so should lead toward training and mentoring. There is no requirement in the PA for complex infrastructure – simply that the capability exists and is maintained.

Organizational Process Performance (Findings: C, C Agreement: M)
The idea of measuring a process and maintaining baselines and models was certainly in conflict with the agile manifesto. Still, a full 25% of the larger group indicated that CMMI supported agile in this area. One comment during the brief-out indicated that some of the agile methodologies, Scrum [11] for example, have metrics which could be characterized as process metrics.

Organizational Innovation and Deployment (Findings: C, C-S Agreement: L)
There was considerable discussion about this PA. Several argued that it captured the essence of agile development. Others cited the need for infrastructure and the organizational focus implied in the process area goals. There was nearly a 50/50 split between conflicting and supporting among the wider group.

Project Planning (Findings: S, S Agreement: M)
Most agile methods require a high level of start-up planning and risk assessment. The sub-group indicated this as one of the similarities between the two development approaches.

Project Monitoring and Control (Findings: N, S Agreement: L)
The difference of opinion results from the level of tracking and planning. Some thought the PA implied more rigorous planning and tracking than agile methods usually employ.

Supplier Agreement Management (Findings: N, N Agreement: M)
Almost everyone saw this as something rarely addressed in agile projects due to the nature of the teams and the development focus of the work.

Integrated Project Management (Findings: S, S Agreement: M)
Agile methods are generally team-based and integrate the developers, validators, and customers. Over 90% of the respondents indicated neutral or supportive.

Risk Management (Findings: S, N Agreement: L)
Most agile methods are designed to mitigate certain types of risks – particularly those from changing requirements and schedules. The opposing view held that agile didn’t really address long-term risk and didn’t strictly follow the CMMI process of identifying, analyzing and tracking.

Integrated Teaming (Findings: S, S Agreement: H)
Integrated teaming is a key facet of all of the agile methodologies.

Quantitative Project Management (Findings: N, C Agreement: N)
While agile methods don’t necessarily perform these activities, there is nothing in the agile manifesto that precludes their performance. The larger group indicated a conflict, with statistical control, deemed non-agile, being the primary concern.
Requirements Management (Findings: S, N Agreement: L)
Since agile tenets call for continuous interaction with the customers, it was inferred that requirements were being closely managed. The larger group reflected that tracking and plan management did not support agility.

Requirements Development (Findings: S, S Agreement: M)
This PA supports the agile concepts of close customer relationships, customer-based requirements elicitation and stakeholder involvement.

Technical Solution (Findings: S, S Agreement: M)
The only arguments against were based on the requirement for support documentation – something that some agile methodologies don’t strictly support.

Product Integration (Findings: S, S Agreement: M)
There was some support for a neutral finding based on the idea that this PA addresses integrating components from hardware and software sources and that agile rarely deals with that type of project.

Verification (Findings: S, S Agreement: M)
Peer reviews are closely aligned with pair programming. The concept of requirements traceability caused some concern, in that many agile methods focus on functional requirements and place little or no value on nonfunctional requirements or on capturing derived requirements, which may be lost in later versions.

Validation (Findings: S, S Agreement: M)
The close relationship with the customer in most agile methods is strongly supported by this PA. Concerns were similar to those for the Verification PA.

Configuration Management (Findings: N, None Agreement: L)
The sub-group found configuration management to be neutral, with no consensus in the larger group. Some saw frequent builds as strong configuration management, while others pointed out that CM in CMMI is to be applied as appropriate to all work products, which does not fit the agile reduced emphasis on documentation.

Process and Product Quality Assurance (Findings: N, C-N Agreement: L)
Again, the sub-group saw this as neutral while the larger group leaned toward conflict. Given the emphasis on process, non-compliance, and work products, it seemed to some that this PA was focused on peripheral materials rather than the product.

Measurement and Analysis (Findings: N, C-N Agreement: L)
The sub-group, pointing out that several agile methods include some form of progress measure, found this PA to be neutral toward agility. Others felt the concept of measurement and analysis was not a part of the agile approach, and that meeting the schedule with a product acceptable to the user was sufficient for agilists.

Decision Analysis and Resolution (Findings: C, C Agreement: M)
This PA’s focus on establishing specific processes for team functions was in conflict with the spirit of agility. To be agile means to be able to adapt quickly to the situation rather than be bound to pre-conceived criteria and a strict alternative evaluation or decision analysis process.
Organizational Environment for Integration (Findings: S, S Agreement: M)
Most of the agile methods are supported by a person-friendly, “whatever the developer needs” environment which mirrors the CMMI goals. Some were concerned that infrastructure of any type negated agility.

Causal Analysis and Resolution (Findings: N, N Agreement: M)
Agile methods use reflection, refactoring, or other cognitive reviews of the product and process to establish lessons learned and make the next cycle more efficient.

CMMI Generic Practices

GP 2.1 Establish an Organizational Policy (Findings: N, N-S Agreement: L)
While not an important part of agile methods, policy was not necessarily in conflict with their intent. The move to agile often requires support by management that may be enhanced through policy.

GP 2.2 Plan the Process (Findings: S, N-S Agreement: L)
The sub-group generally felt that agile methods’ up-front activities accomplished this in most cases. The larger group was a bit more skeptical, seeing that maintaining a plan for the process (not the project) was not necessary for agility.

GP 2.3 Provide Resources (Findings: S, N-S Agreement: L)
Providing the resources necessary to complete the work is not in conflict with agile values. Some felt it was not a part of the development activity and thus neutral.

GP 2.4 Assign Responsibility (Findings: S, S Agreement: M)
Assignment of responsibilities with the associated authority was strongly supportive of the people performing the work and so supported the agile approach.

GP 2.5 Train People (Findings: S, N Agreement: L)
The sub-group found that it supported agility by providing incentives to train developers in the methodology and to mentor new personnel.

GP 2.6 Manage Configurations (Findings: S, C-S Agreement: L)
Like the Configuration Management PA, the practice that requires CM to be applied across processes found conflicting opinions across the participants.

GP 2.7 Identify and Involve Relevant Stakeholders (Findings: S, S Agreement: H)
There was almost no disagreement with this finding.

GP 2.8 Monitor and Control the Process (Findings: N, N Agreement: M)
Within and between cycles, agile methods are monitored for adherence to functional and schedule requirements as established in the plans.

GP 2.9 Objectively Evaluate Adherence (Findings: C, C Agreement: M)
The idea of a process mafia that checked on how the developers developed was seen as a significant barrier to agile methods. The sub-group noted, however, that in XP and other more strictly defined methods, there was a sense that the team lead/coach/facilitator performed this function on a person-by-person basis.
GP 2.10 Review Status with Higher Level Management (Findings: N, N-S Agreement: L)
Some expressed the opinion that agile adoption requires executive support, so success story briefings are helpful. Where multiple agile teams provide parts of a product, briefing higher management was seen as a necessity.

GP 3.1 Establish a Defined Process (Findings: N, C Agreement: N)
Some agile methods are detailed while others are more akin to philosophies. Some considered any attempt to define the process as in conflict with agile approaches.

GP 3.2 Collect Improvement Information (Findings: C, C Agreement: M)
The thrust of this practice is to collect information to improve the process.

GP 4.1 Establish Quantitative Objectives for the Process (Findings: C, N Agreement: L)
Quantitative objectives for the process seem not to be in the spirit of agile concepts.

GP 4.2 Stabilize Subprocess Performance (Findings: C, C-N Agreement: L)
This practice is closely related to statistical process control. Other negative considerations were the concept of a sub-process and the establishment of objectives that were not necessarily associated with the customer.

GP 5.1 Ensure Continuous Process Improvement (Findings: S, C-N Agreement: L)
Some saw continuous improvement as a goal of agile methods.

GP 5.2 Correct Root Causes of Problems (Findings: N, N Agreement: M)
The general consensus for this practice was one of neutrality, considering that root cause analysis, while a worthwhile endeavor, was neither recommended nor proscribed by agile methods.
Circle of Life, Spiral of Death: Are XP Teams Following the Essential Practices?

Vinay Ramachandran and Anuja Shukla

Department of Computer Science, North Carolina State University, Raleigh, NC 27695
{vramach,ashukla}@unity.ncsu.edu

Abstract. Many of the twelve carefully defined practices of Extreme Programming are tightly coupled. The various practices have checks and balances on each other and are, in many ways, dependent on each other. Therefore, neglecting essential practices can have sub-optimal or negative consequences for team results. The practices actually utilized in team development can often be attributed to the developers’ perceptions of the value and difficulty of those practices. Through two surveys answered by 27 developers, we have assessed the utilization of and the sentiments towards the XP practices. In general, the developers we surveyed valued and utilized most of the practices.
1 Motivation
The twelve XP practices support each other. The weaknesses of one are covered by the strengths of others [1]. In general, the dependencies between the practices imply risk when the team does not utilize all the practices. However, in [2], Ron Jeffries shares his wisdom and experienced judgment in assessing which of the twelve practices are essential for teams to be successful with an XP or XP-like methodology. At times, XP teams explicitly or implicitly create a customized variant of the methodology for one of two reasons. First, the team consciously decides it does not want to utilize some of the practices; alternatively, the team or certain team members find no perceived value in a practice, or find it difficult, and so might not actually follow it, whether they are technically supposed to or not. Second, experienced XP teams can learn to reduce the coupling between practices by utilizing other practices to customize their own variant XP methodology. Jeffries states, “As teams become experienced with XP, they develop the wisdom to move beyond the basics, modifying practices, adding them, replacing them” [2]. To assess the utilization of and the sentiments towards the XP practices, we ran two surveys of practicing Extreme Programmers. Twenty-seven developers answered the surveys. We share the results in this paper.
2 About the Surveys
Two surveys were conducted in order to gather information regarding XP projects. The survey questionnaire was based on the survey conducted by Robert Gittins [3] at the School of Informatics, University of Wales, Bangor. The questions in each of the two surveys were very similar, but not identical. The first survey, referred to as Survey-I throughout this paper, was administered to ten developers in a single project group of a telecommunications software development organization. The second, referred to as Survey-II, was administered to individuals belonging to four different organizations. Since the two questionnaires were not identical, some information obtained in Survey-I could not be obtained from Survey-II, and vice versa. When neither survey is specified, the question was asked in both surveys. For more details about the surveys, please refer to the appendix.

D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 166–173, 2002. © Springer-Verlag Berlin Heidelberg 2002
3 Views on XP Practices

3.1 The Planning Game
The planning game is a very important aspect of XP. Ron Jeffries mentions, “The planning game is part of the essential cycle of the XP team’s process: it defines what will be done, it provides key feedback between programmers and customers.” [2] The survey respondents in general felt that the planning game had worked well for them. The planning game necessitates a great deal of interaction between the programmers and customers so that they stay in sync. Since the interaction between the customer and the programmer is so important, it would be interesting to know whether participants comment freely. In Survey-I, 90% of the participants agreed that they commented freely whenever they felt they could contribute in the planning game. XP uses story cards to obtain customer requirements. Ninety-two percent of respondents believed that story cards were effective in capturing customer requirements. Once the stories are written, estimates have to be made by the development team. XP teams are supposed to empower the developers to create estimates themselves, rather than have developers inherit estimates developed by managers or coaches. Eighty-nine percent felt that the story estimates, and 56% that the task estimates, were made collectively by the team. We did not have data to explain why only 56% of the participants agreed that task estimates were made collectively. A probable reason could be that the developers responsible for implementing the tasks made the task estimates. On the whole, 92% felt that estimates made about ‘story’ and ‘task’ time allocations were effective. The planning game includes tracking the velocity of the project, an assessment of how much observable progress has been made by the team. Monitoring velocity can give early warning of problems with testing or refactoring [2]. The majority (63%) agreed that tracking project velocity is important. However, a significant 41% of the participants stated that they had no experience in using project velocity.
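Project velocity, as used above, is commonly computed as the sum of the estimates of the stories completed in an iteration, and monitoring it across iterations provides the early-warning signal the text mentions. A minimal sketch, in which the sample data and the 20% slowdown threshold are illustrative assumptions, not figures from the surveys:

```python
def velocity(completed_story_estimates):
    """Velocity of one iteration: total estimate of the stories finished."""
    return sum(completed_story_estimates)

def slowdown_warning(history, threshold=0.8):
    """Warn when the latest iteration's velocity falls well below the
    average of previous iterations, a possible sign of testing or
    refactoring problems."""
    if len(history) < 2:
        return False
    previous_average = sum(history[:-1]) / (len(history) - 1)
    return history[-1] < threshold * previous_average

# Three iterations' worth of completed story estimates (made-up data).
history = [velocity(v) for v in ([3, 5, 2], [4, 4, 3], [2, 1])]
print(history, slowdown_warning(history))  # [10, 11, 3] True
```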
We did not have data to explain why a significant percentage of participants had no experience in using project velocity. A probable reason could be that higher management, and not the developers, did the setting and tracking of project velocity.

3.2 Pair Programming
Pair programming is a style of programming in which two programmers work side-by-side at one computer, continuously collaborating on the same design, algorithm,
code or test [4]. Research has shown that pair programmers produce higher quality code in about half the time of solo programmers [5]. However, there is still considerable resistance to transitioning from solo to pair programming. From the manager’s perspective, it appears that two persons are doing the work of one. Programmers initially resist the transition, likely because they are long conditioned to working alone [6]. They are skeptical of the value of collaboration in programming, not expecting to benefit from or to enjoy the experience [4]. Our data indicate that programmers ultimately overcome this initial resistance. The programmers who took Survey-I had classified pair programming as a most concerning XP practice before they made the transition. Ultimately, 93% felt that they enjoy work more than when programming alone. Ninety-three percent also felt that they were more confident in their solution than while working alone. Also, 67% of the participants felt that pair programming was more efficient than independent programming, and only 19% felt that pair programming is no more efficient than an experienced programmer working alone. Another notable aspect was that almost none of the participants preferred to work independently. This suggests a preference for pair programming over independent programming. However, more than 52% agreed that pair programmers should work alone some of the time. The participants felt that the following factors were critical for pair programming:

• 96% of them felt that the partner must be committed to pair programming.
• 96% of them concurred that both partners must share the screen and the keyboard.
• All of them felt that management support is critical for pair programming.

3.3 Small Releases
XP specifies that software be built in small, two- to four-week releases. Small releases allow for the fastest possible return on investment and give customers greater visibility into the project [1]. Among the survey participants, this was one of the most highly used XP practices. In Survey-I, the majority of the participants (88%) felt that small release cycles were helpful. On a scale of 0 to 10 (where 0 indicates the minimum and 10 the maximum), the Survey-I respondents rated its overall usage as almost a perfect 10 (on average).

3.4 Continuous Integration
Ron Jeffries states, "To really accomplish small releases, you need the ability to build the system reliably and frequently. To support collective ownership, you need to avoid code conflicts. The more frequently you integrate, the fewer conflicts you'll have." [2]. Continuous integration appeared to be a frequently used practice. In Survey-I, the participants rated its usage as 9 (on average) on a scale of 0 to 10, and the majority (75%) felt that continuous integration was helpful to them.

Circle of Life, Spiral of Death: Are XP Teams Following the Essential Practices?

3.5 Simple Design
The Extreme Programming design philosophy emphasizes that features should be built using the simplest practical design. This practice, combined with Design Improvement (Refactoring), lets XP teams deliver business value from the beginning, rapidly, and safely [2]. Simple Design was a highly popular XP practice. Seventy-eight percent of respondents believed that simple design is the key to getting the job done quickly. Getting the job done quickly is not the only priority; it must also be done right. However, 33% of them also believed that simple design is not necessarily the right answer. Perhaps these survey participants felt that complex design was inevitable under some circumstances. Programmers often try to speculate about future requirements and design accordingly. Kent Beck states, "If you believe that the future is uncertain, and you believe that you can cheaply change your mind, then putting in functionality on speculation is crazy." [1]. The survey participants felt the same. Only 7% felt that putting in functionality based on speculation can be justified, and only 15% felt that adding functionality early in the project saves time later. Also, 44% of respondents believed that adding functionality early in the project slows the project down.

3.6 Testing
In XP, tests are divided into acceptance (or customer) tests and unit (or programmer) tests. Programmers continually write unit tests before writing code, and the tests must run flawlessly for development to continue. Customers work with the programming team to write tests demonstrating that user stories are finished [1]. Jeffries states, "Running without acceptance tests is running out of control" [2]. Fifty-eight percent of respondents considered functional testing in general to be adequately covered, meaning that the functional test cases written by the customer properly exercised the program. However, it seems that acceptance test cases were seldom written early. In Survey-I, only 30% reported that they were often written at story creation time, and in Survey-II, 88% said that functional test cases were only available in the middle or later part of story development. Unit testing seemed to be performed thoroughly in the project teams that were surveyed. Ninety-two percent perceived that all classes in the system were tested during unit testing, and 92% of them considered that unit testing in general properly exercised the code. Unlike acceptance tests, a high majority (96%) felt that the automated unit test cases were often written early.

3.7 Refactoring
XP does not advocate a "Big Design Up Front" (BDUF) [7]. The design of the code evolves as it is written, utilizing the test-first unit test practice. The resulting implemented design may not be desirable. Therefore, the Simple Design practice is dependent upon the refactoring practice. Refactoring is the process of improving the design of existing code [8]. Refactoring is an essential aspect of XP. If a project is to deliver business value from the very beginning, the design must be kept clean. This requires continuous design improvement implemented by refactoring [2]. But there is a concern as to how much refactoring is actually done. Developers may get overly focused on delivering functionality and neglect the important refactoring. Sixty-two percent considered refactoring to be adequately performed. However, refactoring seems to be sporadic. Fifty-eight percent said that they performed refactoring when they had the time. Also, Survey-I revealed that 50% of the participants felt that the refactoring aspect of XP could be further improved in their project. The refactoring practice is in turn highly dependent upon the set of automated unit test cases. When code is refactored, it must still pass all the unit test cases in the code base. If test cases fail after refactoring, either the code or the tests themselves must be fixed. All (100%) of the Survey-II respondents confirmed that automated test cases were often run during refactoring; 94% of them agreed that duplicate code was removed during refactoring.

3.8 Collective Ownership
On an Extreme Programming project, any programming pair who sees an opportunity to add value to any portion of the code can do so at any time. Everyone shares the responsibility for the quality of all the code. Ron Jeffries also contends that this collective code ownership improves quality [2]. Seventy percent of the participants agreed with Jeffries' statement. Moreover, 85% of the participants believed that collective ownership encourages the entire team to work more closely. It was also interesting to learn whether the participants were comfortable with someone changing their code: in both surveys, none of them objected to others changing their code.

3.9 40-Hr Week
This practice reflects the need to keep the team healthy. The practice means to work hard, and to rest when you need to [2]. This is also a popular practice of XP and is used frequently. In Survey-I, the participants rated its usage as 9 (on average) on a scale of 0 to 10, and the majority (86%) felt that the 40-hour week practice was helpful to them. In Survey-II, a majority of 65% stated that they were comfortable with the pace of their team.

3.10 On-site Customer
XP requires the presence of an on-site customer to clarify requirements. Kent Beck states, "A real customer must sit with the team, available to answer questions, resolve disputes, and set small scale priorities." [1] Consequently, 81% of the participants in Survey-II agreed that the on-site customer was working positively for them. Also, in Survey-II, 67% stated that they had access to the customer all the time; the rest had access to the customer some of the time for answering their queries, which served the purpose of having an on-site customer. In Survey-II, 94% of the participants had their customer located at least on the same floor, and the most common means of communication with the customer was informal meetings. Among the other means of communication were email and telephone, which were moderately used.

3.11 Coding Standards
XP teams all write code following the same style guidelines. This makes pairing easier and supports collective ownership [2]. This is another commonly used XP practice. There seem to be many ways in which coding standards are created and followed. In Survey-II, 53% stated that the entire team collectively decided the coding standards that would be adopted for the project, and 29% stated that they followed already existing company/department/team coding standards. In Survey-II, 12% even stated that they chose their own coding conventions. The coding standards document was generally small: in Survey-II, 89% of the participants had a coding standards document that was 5 pages or less.

3.12 Metaphor
With the Metaphor practice, the system design/architecture is expressed using a simple, common system of names, a simple, easily-explained pattern of design. This enables you to say "the program is like an assembly line" or "the program is like bees bringing pollen back to the hive." [2]. Metaphor seemed to be the least used XP practice among the people surveyed. Only about 37% of them believed that metaphors are helpful in creating a mental picture of the system. The majority of the participants, 80%, claimed that they knew the concept of a metaphor but had never used it, and only 1% of the participants had used metaphors and felt that it was important. In Survey-II, 86% of the participants stated that their projects did not have any metaphor. As one of them put it, "We don't need it to help understanding the system."
4 Overall Impression of XP
The participants felt that all of the following aspects were aided by XP, as reflected by the positive averages on a scale of –5 (much worse) to 5 (fully achieved):
• Deliver software on time (4)
• Develop software with high quality, i.e., with fewer bugs (4)
• Let developers have fun with their work (3)
• Allow changes without incurring big costs (3)
Finally, 92% of the participants in Survey-II stated that they achieved 75% or more of their initial objectives when their project completed. This information could not be obtained from Survey-I, as that project had not yet terminated. Overall, almost all of the survey participants seemed happy with XP. Unanimously, respondents stated that they would use XP again, and almost all of them expressed that they would advocate using XP in the future. One survey participant wrote, "I still don't know how management will plan releases of a product which is developed using XP. On the other hand, XP gives management a very high visibility of the progress of the work, which is something that they normally don't get in a traditional development paradigm. In the old days, management won't know that a project is late until after integration testing has started (or tried but failed to start), which may be many months from when coding first started."
5 Conclusion
All the practices were popular and well accepted, with the exception of the metaphor. Most participants had not used metaphors, and it seemed many projects did not have them. Among the most popular practices were the planning game, pair programming, and short release cycles. Most of the participants were happy with XP and said that they would advocate XP in future projects.
Acknowledgements
The research outlined in this paper was funded by the Center for Advanced Computing and Communication (CACC), NC State University. The authors gratefully acknowledge the invaluable suggestions and guidance given to them by Dr. Laurie Williams, NC State University. The authors also acknowledge the employees of the various organizations who participated in the two surveys. We would also like to thank James Grenning of ObjectMentor for helping us find survey participants.
About the Authors
Vinay Ramachandran and Anuja Shukla are computer science graduate students at North Carolina State University. They are currently involved in research pertaining to agile methodologies.
Appendix: Details about the Survey
Survey-I was administered to a group of ten participants working on the same project at a particular organization. Survey-II was administered to a group of seventeen participants spread across four different organizations. The two surveys had multiple-choice questions with a set of answers to choose from. Generally, the answer choices provided to the respondent conformed to Likert scaling in order to aid the collection of statistics. The answer choices were mostly continuums, such as "Strongly Disagree", "Disagree", "Agree", and "Strongly Agree", or "Very Ineffective", "Ineffective", "Effective", and "Very Effective", which can be scaled linearly from 1 to 4. There were no neutral choices like "Neither Agree nor Disagree"; the choices forced the respondent to express a definite opinion, either positive or negative. There were also questions for which Likert scaling was not used since the answer choices were not continuums; e.g., for the question "I perform Refactoring …", the answer choices were "If I have the time", "When I am asked to", "Always", and "Other". A few optional questions also required the respondent to type in entire sentences; e.g., the participant had to enter some comments about testing. The analysis of the data involved finding statistical information such as the mean and mode of the answers chosen by the participants for a particular question. In some cases the final values were rounded off to the nearest integer since their decimal values were not very significant. And since the two questionnaires were not identical, some information obtained by Survey-I could not be obtained from Survey-II and vice versa. A majority (74%) of the participants had been involved in pair programming for about 3 months. Sixty-seven percent of Survey-I participants had attended planning games for 3 months or more. Equivalently, 76% of participants in Survey-II had attended 7 or more iteration planning meetings.
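As an illustration of the scoring just described, here is a minimal sketch in Python (the choice labels are taken from above; the function name is ours, not part of the survey tooling):

```python
from statistics import mean, mode

# Four-point forced-choice scale, scored linearly from 1 to 4 as described above.
SCALE = {"Strongly Disagree": 1, "Disagree": 2, "Agree": 3, "Strongly Agree": 4}

def summarize(responses):
    """Return the mean (rounded to the nearest integer) and the mode
    of the answers chosen for one question."""
    scores = [SCALE[r] for r in responses]
    return round(mean(scores)), mode(scores)
```

For example, `summarize(["Agree", "Agree", "Strongly Agree", "Disagree"])` yields a rounded mean of 3 and a mode of 3.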
References
1. Beck, K.: Extreme Programming Explained: Embrace Change. Addison-Wesley (2000)
2. Jeffries, R.: Circle of Life, Spiral of Death. In: Marchesi, M., Succi, G., Wells, D., Williams, L. (eds.): Extreme Programming Perspectives. Addison-Wesley, Boston, MA (2002, in press)
3. Gittins, R.: Extreme Programming – Questionnaire. School of Informatics, University of Wales, Bangor
4. Williams, L.A., Kessler, R.R.: All I Really Need to Know about Pair Programming I Learned in Kindergarten. Communications of the ACM (May 2000)
5. Williams, L.A.: The Collaborative Software Process. PhD Dissertation, University of Utah, Salt Lake City, UT (2000)
6. Williams, L.A.: Pair Programming Illuminated. Addison-Wesley, Boston, MA (2002)
7. Auer, K., Miller, R.: Extreme Programming Applied. Addison-Wesley, Boston, MA (2002)
8. Fowler, M.: Refactoring: Improving the Design of Existing Code. Addison-Wesley (1999)
Tracking Test First Pair Programming – An Experiment
Matevz Rostaher1 and Marjan Hericko2
1 FJA OdaTeam d.o.o., Ptujska cesta 184, SI-2000 Maribor, Slovenia
E-mail: matevz.rostaher@fja-odateam.com
2 University of Maribor, Institute of Informatics, Smetanova 17, SI-2000 Maribor, Slovenia
E-mail: marjan.hericko@uni-mb.si
Abstract. The authors ran an experiment in which a group of professional programmers working in pairs and a control group programming alone implemented a small system from predefined requirements. Most programmers spent between 50% and 60% of their time on testing; only the most inexperienced spent less. Programmers reported more problems with refactoring than with testing. The rhythm of switching the driver and navigator roles is essential for test-first pair programming; the experiment showed that partners switched roles 21 times per day on average. A comparison of the control group of individuals and the group programming in pairs showed that both groups spent almost the same amount of time to complete the tasks; by a t-test, this difference is not statistically significant. We believe that more detailed research, beyond evaluating test-first programming, is needed to compare solo vs. pair programming in the investigated group.
1 Introduction
The FJA OdaTeam is a software development organization in Slovenia that has used all twelve practices of Extreme Programming [1] since 1999. The developers work on an insurance administration system that is installed at many client sites. The developers' activities include bug corrections, extensions of existing subsystems, and adding new subsystems. All code produced is written by pairs of programmers, each pair sitting at one workstation. The degree of acceptance of pair programming is very high. In March 2002, an experiment and an exercise on test-first pair programming behavior and performance were done at FJA OdaTeam. The coach (as defined by Kent Beck in Extreme Programming Explained [1]) felt that the developers didn't do test-first programming as described and demonstrated by Ron Jeffries in Extreme Programming Installed [2]. He asked questions like:
− Are tests really written before the functionality all the time?
− Are the steps of refactoring and of adding functionality always separated, and tests run after each step?
− How often does a pair switch the "driver" and "navigator" role?
− Do you always do the simplest thing that could possibly work?
D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 174–184, 2002. © Springer-Verlag Berlin Heidelberg 2002
He didn’t always receive satisfying answers. Some developers admitted that they had problems with the issues raised by the coach. One could argue that the team wasn’t doing XP, but we do not agree. In reality, things are harder than in theory. Kent Beck [1] agrees that the rules are simple but actually practicing them can be hard. We also believe so. Test-first programming “feels” good, but it is hard to explain, especially to junior programmers. We wanted to create a repeatable exercise for practicing test-first pair programming and tracking its results.
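To make the test-first rhythm concrete, here is a minimal sketch — written in Python with unittest for illustration (the team itself worked in Smalltalk with SUnit), using hypothetical names: the test is written first and fails, then the simplest code that makes it pass is added, and only then is refactoring considered.

```python
import unittest

# Step 1: write the test first. It fails until a Premium class exists.
class PremiumTest(unittest.TestCase):
    def test_annual_premium_is_twelve_monthly_payments(self):
        self.assertEqual(Premium(monthly=100).annual(), 1200)

# Step 2: add the simplest thing that could possibly work.
class Premium:
    def __init__(self, monthly):
        self.monthly = monthly

    def annual(self):
        return self.monthly * 12

# Step 3: run the test suite (e.g. unittest.main()); refactor only on green,
# and run the tests again after each refactoring step.
```

The discipline lies in never writing Step 2 before Step 1 fails — exactly the habit the coach was probing for.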
2 Introducing Practices by Positive Motivation
For an organization with a relatively small number of open-minded team members (seven in 1999, currently 20 in 2002), it was not hard to try pair programming. We already had experience doing hard tasks and teaching new members in pairs. The only problem was the bad habit of not doing pair programming all the time, for all programming tasks. After all, we were all used to programming alone. A breakthrough was achieved when we rearranged our office workspace, similar to that of the Chrysler C3 team as reported in [1], and dropped ownership of workstations. All the work could be done in the central area only, and there was no room to hide and do solo programming anymore. The setup forced us to drop the bad habit, and it worked. Today nobody even suggests solo programming anymore. Similarly, we made a rule of running all the tests (acceptance and unit tests) before the work of a pair could be integrated. In the past, we had often forgotten to do so, and sometimes the integrated code broke the system. The rule was that everyone who forgot to run the tests paid 300 SIT (300 Slovenian Tolars, approximately one US Dollar) to a so-called "beer fund". After the total number of tests reached 500, we emptied the beer fund and used it for a nice evening. Today nobody forgets to run the tests anymore, but we still celebrate round numbers, such as 1000, 2000, etc., of total tests written. With these and other examples in mind, we were looking for a way, rule, or game that would be accepted by all the team members and would encourage test-first programming. The exercise and experiment described here was the first practical step in this direction.
3 The Exercise and Experiment
3.1 Goals of the Exercise
The first major goal was to perform an exercise for practicing test-first programming through positive motivation. The team knew what test-first programming is about, in theory and partly in practice (depending on the level of experience), by doing it in everyday programming. We wanted to create an exercise that would help further improve test-first habits. The programmers were to implement a simple system specified in advance and, at the same time, explicitly think about the three programming activities: testing, adding functionality, and refactoring. We wanted to achieve this by letting the programmers log all switches between these three activities while programming. The logging forced the developers to think about the activities rather than get lost in the actual implementation of the specified system. A different approach with a similar goal is described in [6], where the author uses a three-minute sandglass to improve the rhythm of the testing and adding-functionality cycle. The programmer turns the sandglass over after switching from coding to testing and vice versa. If three minutes pass before the change of activity, it is a sign that the programmer is out of rhythm. Both approaches are learning practices rather than programming practices; the ultimate goal is for the programmer to stop using the tool because the rhythm becomes natural. We also wanted to gather quantitative information on how much time we spend on each of the three activities. The exact information we wanted was the percentage of time spent on each activity by experience level of the developers. Based on past experience, we expected that a similar amount of time would be spent on testing and programming. We had no accurate historical data on how much time was spent on refactoring. Another important issue is how often pairs switch the driver/navigator role depending on experience level. From past experience we knew that switching occurs in intervals of a few minutes, but we wanted to be more confident in this. Therefore, we also logged changes of the driver/navigator roles. Another goal was to gather quantitative information about how well the team performed pair programming versus solo programming. Pair programming had already become a habit; however, we wanted to know how much time it takes to complete the tasks compared to solo programming. To achieve this, we additionally split the team into an experimental group doing pair programming and a control group doing solo programming. We wanted to record the time spent to finish the programming tasks, divided by experience level.
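The logging-plus-rhythm idea above can be sketched in a few lines. This is an illustrative Python sketch, not the actual tool (which was written in Smalltalk): each activity switch is recorded with a timestamp, and any interval exceeding the three-minute sandglass threshold of [6] is flagged.

```python
from datetime import datetime, timedelta

SANDGLASS = timedelta(minutes=3)  # rhythm threshold suggested in [6]

class ActivityLog:
    """Records (activity, timestamp) pairs, e.g. "T1" = testing, first driver."""

    def __init__(self):
        self.entries = []

    def switch(self, activity, at=None):
        # Called whenever the pair changes activity or driver.
        self.entries.append((activity, at or datetime.now()))

    def out_of_rhythm(self):
        """Return the activities whose interval exceeded the sandglass threshold."""
        slow = []
        for (activity, t0), (_, t1) in zip(self.entries, self.entries[1:]):
            if t1 - t0 > SANDGLASS:
                slow.append((activity, t1 - t0))
        return slow
```

A log with switches at 9:00, 9:02, and 9:07, for instance, would flag the second interval (five minutes) as out of rhythm.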
We hypothesized that pairs are not less productive than individuals doing the same programming tasks. The exercise had to be repeatable with a different set of tasks in the future, allowing us to compare results over time by running more such exercises.

3.2 Forming the Solo and Pair Programming Groups
We wanted to compare the performance of pair programming versus solo programming. The exercise could also have been done with pairs of developers only, but then the comparison of pair vs. solo programming would not have been possible. We also wanted to reflect everyday project reality, where all kinds of pairing combinations happen. For example, it sometimes happens that two rather inexperienced developers work together while two senior developers have to work on an emergency task. Sixteen developers were split into two groups: four developers worked alone, and 12 developers worked in six pairs. The knowledge of the team members was very diverse. Therefore, we divided programmers into one of four groups:
− 1 = less than a year of experience (fresh from university or still a student)
− 2 = more than a year of experience
− 3 = more than three years of experience
− 4 = more than five years of experience
By experience we meant active programming of mission-critical applications. Because we did not want to force anybody into a particular group, developers could choose the appropriate group for themselves. One developer out of each group had to work alone. The others formed pairs, where we tried to produce as many combinations as possible out of the four groups, ranging from a pair of two programmers with more than five years of experience to a pair of two inexperienced programmers.

3.3 The Modified Planning Game
The customers in the experiment were the authors, who were present the whole time (on-site customer). We chose six small stories, all tightly related to each other, forming a simple insurance contract administration system, sorted by priority (customer value):
− Enter insurance contract
− Change insurance contract
− Calculate portfolio statistics
− Calculate change of portfolio statistics
− Cancel insurance contract
− Calculate the premium reserve (premium not earned)
The stories were unrelated to current projects in the organization but were defined by someone with over ten years of experience in building insurance software systems. Defining appropriate stories was essential for the experiment to work. Good communication between business and technical members of the team is one of the strengths of XP. We tried to write a system of stories that used the language of real customers from insurance companies and had real business value. Each pair or individual had to implement as many of the stories as possible, in the exact order shown above. It was hard to predict how fast development would go. The customer defined the stories and set the priorities, and the developers had to estimate. The experiment was limited to one day, and enough stories were written to fill at least this day. The customer told the developers that each of the stories was important and that he wanted to have the stories implemented as soon as possible. The stories were written on a sheet of paper rather than on story cards and were simplified, as shown in figure 1.

Story 3: Change insurance contract
The policyholder can either increase or decrease the annual premium. In order to know for which insurance contract to change the premium, the policyholder has to provide the insurance contract number. The date from which the change is valid is important. The change of the insurance contract doesn't affect the insurance contract number.
Estimate in ideal hours: _________
Time started: _________
Time finished: _________

Fig. 1. Sample story description
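An automated acceptance test for the story in figure 1 might look like the following sketch — written in Python's unittest for illustration (the experiment itself used Smalltalk's SUnit), against an entirely hypothetical contract API:

```python
import unittest
from datetime import date

class InsuranceContract:
    """Hypothetical stand-in for the API the stories ask for."""

    def __init__(self, number, annual_premium):
        self.number = number
        self.annual_premium = annual_premium
        self.changes = []  # (valid_from, new_premium) history

    def change_premium(self, new_premium, valid_from):
        # The change date matters; the contract number must not change.
        self.changes.append((valid_from, new_premium))
        self.annual_premium = new_premium

class ChangeContractAcceptanceTest(unittest.TestCase):
    def test_change_records_date_and_keeps_contract_number(self):
        contract = InsuranceContract(number="POL-1", annual_premium=1200)
        contract.change_premium(1500, valid_from=date(2002, 3, 1))
        self.assertEqual(contract.number, "POL-1")       # number unaffected
        self.assertEqual(contract.annual_premium, 1500)  # premium increased
        self.assertIn((date(2002, 3, 1), 1500), contract.changes)
```

In the experiment, a story counted as finished only when its acceptance test passed.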
For each of the stories, an automated acceptance test was written in SUnit, the Smalltalk version of the xUnit testing framework [5]. No special acceptance testing framework was needed because the goal was to write an API, not an application containing a GUI. SUnit is sufficient for testing APIs because it tests methods and classes.

3.4 Development Environment and the "XP Tracker"
Smalltalk and its integrated development environment were used for the implementation. The acceptance test was already loaded into the system. A simple tool developed by the authors, called "XP Tracker", was installed, as shown in figure 2. The programmers had to click on one of the activity buttons to track changes in programming activities. After pressing the button, an entry containing the activity name and the time stamp was written to a log file. At the same time, the tracking window was minimized so the developer could proceed without any additional mouse clicks. The possible activities were:
1. Testing (T1 = first person driving, T2 = second person driving). Testing was defined as the process of writing unit tests for a class and its methods.
2. Refactoring (R1 = first person driving, R2 = second person driving). Refactoring was defined as the process of rearranging the structure of code without changing its interface. All existing tests have to pass after a refactoring activity.
3. Adding new functionality (A1 = first person driving, A2 = second person driving). Adding new functionality is the activity of writing code to support the unit tests written before. This activity is finished when the unit test(s) pass.
4. Break (designing, lunch break, having snacks, etc.). The activity "break" could be divided into many activities, but since we wanted to concentrate on coding only, we deliberately threw everything else into one bag. Further division would have made the results too complex and the tool harder for the developers to use.
Each of the activities could be tracked for the first or the second developer driving (having the keyboard and writing code). We specifically did not record the names of the developers, to avoid the feeling of being watched and evaluated. We only recorded a log file for each pair or individual. The log file also included the experience level (as defined in section 3.2) for each member of a pair, or for the individual.
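The button-press behavior just described amounts to appending one line per activity switch. A minimal sketch follows — in Python for illustration (the real tool was written in Smalltalk, and the exact log-entry format is our assumption):

```python
from datetime import datetime

# Activity codes as defined above; "B" for break is our assumed code.
ACTIVITIES = {"T1", "T2", "R1", "R2", "A1", "A2", "B"}

def log_activity(path, activity, now=None):
    """Append one 'activity HH:MM:SS' entry, as XP Tracker does on a button press."""
    if activity not in ACTIVITIES:
        raise ValueError("unknown activity: " + activity)
    stamp = (now or datetime.now()).strftime("%H:%M:%S")
    with open(path, "a") as log:
        log.write(activity + " " + stamp + "\n")
```

Keeping the entry format this simple is what makes the later analysis of time per activity straightforward.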
After the experiment, we discovered the idea of sandglass programming [6] and incorporated a timer that makes the XP Tracker window pop up after a predefined time limit (usually 3 to 5 minutes) if no change of activity has been logged. We plan to use this improvement in the next experiment. The XP Tracker is an improvement over having a separate person watch each pair or individual and track results. It is hard for a third person to notice a change, for example from refactoring to adding functionality, without being told by the developers. Having someone constantly watching over their shoulders would distract from the work and increase the feeling of being watched. We trusted the developers to follow the rules, and casually walked around to monitor proper use of the XP Tracker.
Fig. 2. The Smalltalk development environment and XP Tracker
3.5 Implementation Phase
After the planning game and the first discussion about the stories, the developers started to implement the system. Each pair or individual worked on a separate workstation, but all in one large room. The customers were present and answered questions. We agreed on the following rules:
1. Nothing but the stories defined by the customer should be implemented.
2. Stories should be implemented in the simplest way possible, without any upfront design.
3. The developers should be aware of which activity (testing, refactoring, adding functionality) they are currently doing and should log any change of activity with the XP Tracker.
4. A story was finished when the acceptance test for that story passed.
Suggestions were made by developers to improve the system, but the customer chose not to change any of the stories and wanted a working result as soon as possible. The customer also had to change the acceptance test cases because developers reported inconsistencies while programming the stories. As usual in an open workspace environment, discussions started among team members. The developers are used to working as a team, and we did not want to prohibit normal communication. We also noticed that after finishing a story, the pairs or individuals took a short break to gain new energy for the next story.
3.6 Results
The data gathered consisted of the tracking logs for each pair or individual (as shown in figure 3), the code produced by the developers, and, of course, the reactions and experience gained.
Fig. 3. Example of tracking log file
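The percentages reported below can be computed from such log files. Here is a sketch, assuming a plain "activity HH:MM:SS" entry format (the actual layout in figure 3 is only shown as an image): each interval is closed by the next entry, and breaks ("B", our assumed code) are excluded when computing relative programming time, as in figure 4.

```python
from collections import defaultdict
from datetime import datetime

def time_per_activity(lines):
    """Sum the seconds spent in each activity; an interval ends at the next entry."""
    entries = []
    for line in lines:
        activity, stamp = line.split()
        entries.append((activity, datetime.strptime(stamp, "%H:%M:%S")))
    totals = defaultdict(float)
    for (activity, t0), (_, t1) in zip(entries, entries[1:]):
        totals[activity] += (t1 - t0).total_seconds()
    return dict(totals)

def relative_shares(totals, exclude=("B",)):
    """Percentage of programming time per activity, with breaks left out."""
    prog = {a: s for a, s in totals.items() if a not in exclude}
    total = sum(prog.values())
    return {a: 100.0 * s / total for a, s in prog.items()}
```

For a log with entries at 9:00 (T1), 9:30 (A1), 10:00 (B), 10:15 (R1), and 10:45 (T2), this attributes 30 minutes each to T1, A1, and R1, and 15 minutes to the break; the last entry only closes the preceding interval.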
Most developers finished four stories and did not start the fifth one. The individual and the pair with less than a year of experience both finished three stories; the inexperienced pair started working on the fourth story but did not finish it. In addition to providing experimental data, a major goal of the study was to practice test-first programming. After the experiment we had a discussion and asked the following two questions:
1. How did the exercise help you better understand test-first programming?
2. Can you apply this experience in your everyday programming tasks?
In general, all felt the exercise helped in some way. For many, it helped to clearly separate the testing, refactoring, and adding-functionality activities by having to press the buttons. They valued the experience of separating adding functionality from refactoring much more than separating testing from coding. It was easier to test and refactor than on real projects, because the system was relatively small and no legacy code existed. The stories were easy enough to allow concentrating on test-first programming issues and experimenting with programming habits and style. Some even suggested that they would use the XP Tracker on a daily basis. Usually these were programmers involved with many other issues, like time pressure, understanding the large class library, and understanding complicated insurance business issues. For very young programmers, this can be difficult, and they can get lost quickly. Less experienced programmers reported increased discipline and found that they did not get lost in coding so quickly. Some reported that this was their first pure and consistent test-first experience. We also found that the navigator is more involved if tests are written before the code, because the pair has already agreed on what exactly is to be implemented next. They had a high opinion of acceptance tests being written before the coding even started.
This is usually not the case in our real projects. Experienced developers felt that they got a better feeling for how the code they wrote grew into a well-factored system without upfront design. There was an overwhelming consensus that it is much easier to do test-first programming on new systems than on modifications of existing systems. A lot of our existing code is up to ten years old, and there are no unit or acceptance tests written for it. Some expressed that after the exercise they felt more in control and could more easily build a well-factored system. Someone also suggested that it would be better if
Tracking Test First Pair Programming – An Experiment
181
the set of stories were given to developers one at a time, each after the previous story was finished. This would further discourage upfront design and increase the need to refactor.
Figure 4 shows the relative time spent on each of the activities investigated. Break time (overall 35% of the time, including a one-hour lunch break) was left out to show only programming time; the results in Figure 4 therefore cover the remaining 65% of the time. They show that, as we expected, approximately 50% of the time was spent on testing in both pair and solo programming. We think this is a good result and shows that individuals, too, had the habit of writing tests. The results by experience level (Figure 5) show a big difference in testing time between inexperienced programmers (level 1) and the others. All other testing times range from 50% to 60% and did not increase much with experience. The numbers for refactoring time are more diverse. We think this is because it was harder to separate refactoring from adding functionality, so more errors were produced in the logging process. The discussion after the exercise showed that many programmers mixed refactoring and adding new functionality but had no problem separating testing and coding. These statements correspond to the quantitative numbers.

[Fig. 4 pie charts: relative time spent on programming activities. Solo: T(esting) 50%, A(dding functionality) 39%, R(efactoring) 11%. Pairs: T 51%, A 36%, R 13%]
Fig. 4. Time spent for each of the activities (pairs and individuals)

Time spent on programming activities by experience level

Activity    1     2     3     4    1&1   2&1   2&2   4&1   4&2   4&3   Grand Total
A          37%   49%   32%   36%   60%   40%   38%   37%   26%   25%      37%
R          25%    1%   13%    6%    8%    8%   12%   15%   16%   15%      12%
T          38%   50%   55%   58%   32%   52%   50%   48%   58%   59%      51%
Total     100%  100%  100%  100%  100%  100%  100%  100%  100%  100%     100%
Fig. 5. Time spent for each of the activities by experience
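The percentages in Figures 4 and 5 are derived from nothing more than the XP Tracker's button-press log (time stamps plus activity names, as in Fig. 3). A minimal sketch of that reduction, assuming a hypothetical "HH:MM:SS activity" line format that the paper does not specify, might look like:

```python
from datetime import datetime

def relative_activity_times(log_lines):
    """Compute the share of time spent per activity from tracker-style log lines.

    Each line is 'HH:MM:SS activity'; an interval is attributed to the activity
    active when it started. Break time is excluded from the percentages, as in
    the paper. (Hypothetical format: the source only says 'time stamps +
    activity names'.)
    """
    events = []
    for line in log_lines:
        stamp, activity = line.split(maxsplit=1)
        events.append((datetime.strptime(stamp, "%H:%M:%S"), activity.strip()))
    totals = {}
    for (start, activity), (end, _) in zip(events, events[1:]):
        totals[activity] = totals.get(activity, 0) + (end - start).total_seconds()
    worked = sum(v for k, v in totals.items() if k != "break")
    return {k: round(100 * v / worked) for k, v in totals.items() if k != "break"}

log = [
    "09:00:00 test",
    "09:06:00 add functionality",
    "09:14:00 refactor",
    "09:16:00 test",
    "09:20:00 break",
    "09:30:00 test",   # final event closes the break interval
]
# relative_activity_times(log) -> {'test': 50, 'add functionality': 40, 'refactor': 10}
```

The same pass over the log also yields the lengths of uninterrupted activities reported below.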
The results of switching the driver/navigator role are shown in figure 6. On average, pairs switched the driver/navigator roles 21 times during the experiment, and one uninterrupted activity lasted five minutes on average for both individuals and pairs. The most experienced pair (level 4 & 3) switched most often (42 times); their average time between switching activities was 3 minutes. During the discussion, this pair also felt most
182
Matevz Rostaher and Marjan Hericko
confident in doing test-first programming. The results show that, in general, the number of switches corresponds to the experience level. We know from past experience that a high frequency of switches and short phases of continuous testing, refactoring and adding functionality are typical of test-first programming. These numbers gave us a more detailed insight into what “high frequency” means in our organization.
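The switch counts in Figure 6 can be recovered from the same log if each press also records who pressed it (the left or right button on the XP Tracker, as described in Sect. 4.3). A sketch, with a hypothetical (timestamp, driver) event format:

```python
def count_role_switches(events):
    """Count driver changes in a time-ordered sequence of (timestamp, driver) events.

    A switch is counted whenever a button press comes from a different
    person than the previous press. (Hypothetical event format.)
    """
    switches = 0
    last = None
    for _, driver in events:
        if last is not None and driver != last:
            switches += 1
        last = driver
    return switches

presses = [(0, "left"), (240, "right"), (410, "right"), (600, "left"), (780, "right")]
# count_role_switches(presses) -> 3
```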
[Fig. 6 bar chart: number of switches of driver/partner roles (y-axis, 0 to 45) for pairs by experience level 1 to 4: 1&1, 2&1, 2&2, 4&1, 4&2, 4&3]
Fig. 6. Number of switches of driver/navigator role by experience level
The last result analyzed is the time spent completing the stories (Figure 7). The analysis of time spent by experience level did not yield any pattern; normally we would expect shorter times for more experienced developers. The average time spent by solo and pair programmers to complete three stories is very similar (358 minutes for pairs and 362 minutes for individuals). Since a pair consumes two person-minutes per elapsed minute, this implies that pairs needed almost twice as much total effort to complete the same amount of work as individuals. However, the time difference between pairs and solo programmers was not statistically significant, based on a t-test (similar to [3]); the two-tailed P value was 0.917. The information is still valuable when compared with the other data gathered. The pair and the individual with level 1 experience finished only three stories. This gap between the level 1 team members and the others is similar to the gap in the amount of testing time (Figure 5), where we see that the same inexperienced programmers did much less testing than all the others.
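The comparison above rests on a two-sample t-test. The study's raw per-team times are not published, so the sketch below applies the pooled-variance t statistic to made-up completion times; turning t into the paper's two-tailed P value additionally requires the t distribution (e.g., from a statistics library):

```python
from statistics import mean, variance
from math import sqrt

def pooled_t(a, b):
    """Two-sample Student t statistic (equal-variance, pooled form)."""
    na, nb = len(a), len(b)
    # Pooled sample variance with na + nb - 2 degrees of freedom
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical completion times in minutes (NOT the study's raw data);
# the group means mirror the reported 358 vs. 362.5 minutes.
pairs = [280, 347, 420, 358, 385]
solos = [300, 390, 350, 410]
t = pooled_t(pairs, solos)
# t is about -0.13 for this sample data; a |t| this small is far from significance
```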
4
Conclusion
4.1
How Did We Achieve Our Goals?
Developers liked the exercise and claimed they learned more about test-first programming. From the code reviews they learned about common mistakes they had made. The results on the relative time spent on each programming activity, the
frequency of switching roles in a pair, and the average time of one uninterrupted activity gave us an insight into how the team performs as a whole. Specifically, we found that 50 to 60% of programming time is spent on testing, as expected. The results also showed us, in numbers, what a high frequency of switching roles (more than 20 per day) and short phases of uninterrupted activity (5 minutes on average) mean. These values will serve as a reference for further investigation and will also give programmers a feeling for what the rhythm of test-first programming should be. We also noticed a quality gap between very new members of the team and all the others, even though the experience of the others is also very diverse. This is probably due to the nature of XP teams, where the open workspace and “extreme” communication allow newcomers to learn the programming habits of the whole team very quickly.

Time spent to complete the four stories (hh:mm:ss)

Experience level     1        2        3        4       1&1      2&1      2&2      4&1      4&2      4&3    Grand Total
Total             6:30:32  6:35:06  5:28:27  5:37:07  4:39:50  6:27:26  5:47:20  7:03:38  5:58:29  6:02:44   60:10:39
(Level 1 and pair 1&1: time until three stories completed)

Fig. 7. Time spent to complete the stories by experience level
The time analysis helped show one aspect of the productivity of solo vs. pair programming. Neither of the two approaches was significantly faster. We only noticed the same gap between very new developers and the others as described in the section above. Our results are less in favor of pair programming than the results in [3] and [4]. The requirements for the system were probably better suited for practicing test-first programming than for comparing solo vs. pair programming. According to the studies in [3] and [4], we would need more challenging requirements for pair programming to be more productive. Future investigations should not mix test-first performance and pair vs. solo performance questions in one experiment. The results of the XP Tracker are very simple (time stamps + activity names for each development machine). In spite of this, they allow a wide range of investigations. We deliberately did only a small set of the possible analyses, to keep the first iteration as simple as possible and to receive immediate feedback, speaking in XP terms. The case of measuring the performance of pairs vs. solo programmers already shows that we attempted too much the first time.

4.2
What Did the Team Learn from the Exercise?
Besides learning test-first programming skills on an individual basis, we also found some major issues we have to address in the future.
1. Acceptance tests should be written up front, before the actual development starts.
2. According to the statements of the programmers, legacy code significantly slows down the use of test-first programming, especially for junior programmers.
3. The weakest part of the test-first programming practice in the organization is the use of refactoring.

4.3
What to Change Next Time?
We believe revealing the stories one by one, rather than all at once, would further encourage refactoring. The results of the next exercise will show if we are right. We will also try the sandglass programming approach and incorporate a three-minute timer which automatically pops up the XP Tracker and reminds the programmers to keep the test/refactor/add-functionality rhythm. Another possibility would be to divide the quantitative data based on the difficulty of the stories. A future investigation could explore how mixing partners of different experience levels affects the switching of driver/navigator roles. This is important because we encourage experienced programmers to let less experienced partners drive; we found that otherwise they become passive and cannot keep up with the more experienced partner. Doing this would only require matching the person pressing the left or right button on the XP Tracker with that person's experience level.
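The proposed three-minute sandglass reminder could be as simple as a self-rescheduling timer. A minimal sketch (the real XP Tracker pop-up is replaced here by a printed reminder, and the class name is hypothetical):

```python
import threading

class Sandglass:
    """Periodic rhythm reminder; the real tool would raise the XP Tracker window."""

    def __init__(self, interval, on_fire):
        self.interval = interval  # seconds between reminders
        self.on_fire = on_fire
        self._timer = None

    def start(self):
        self._timer = threading.Timer(self.interval, self._fire)
        self._timer.daemon = True  # don't keep the interpreter alive
        self._timer.start()

    def _fire(self):
        self.on_fire()
        self.start()  # reschedule to keep the reminder rhythm going

    def stop(self):
        if self._timer is not None:
            self._timer.cancel()

# Three-minute rhythm reminder, as proposed for the next exercise
sandglass = Sandglass(3 * 60, lambda: print("Sandglass: test / refactor / add functionality"))
sandglass.start()
sandglass.stop()  # demo only: stop immediately
```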
Short Author Biography

Matevz Rostaher is co-founder of OdaTeam, a software development company specializing in object-oriented software development. With some of the practices and values already in place, he started implementing Extreme Programming in 1999. Since then, he has been improving and tailoring XP to the needs of the organization in the role of the coach. Marjan Hericko, Ph.D., is an Assistant Professor at the University of Maribor, Institute of Informatics. He is a technical leader of the Slovenian Object Technology Center, which assists the Slovenian industry in the transition to object technology and organizes the annual Object Technology Conference in Maribor.
References

1. Beck, K.: Extreme Programming Explained: Embrace Change. Addison-Wesley, Reading, Massachusetts (2000)
2. Jeffries, R.: Extreme Programming Installed. Addison-Wesley, Reading, Massachusetts (2000)
3. Nosek, J.T.: The case for collaborative programming. Communications of the ACM 41(3) (1998) 105–108
4. Williams, L., et al.: Strengthening the case for pair programming. IEEE Software 17(4) (2000) 19–25
5. http://www.xprogramming.com/testfram.htm
6. http://c2.com/cgi/wiki?SandglassProgramming
How to Get the Most out of Extreme Programming/Agile Methods

Donald J. Reifer, President
Reifer Consultants, Inc.
P. O. Box 4046, Torrance, CA 90510
dreifer@ieee.org

Abstract. This paper reports, in the form of lessons learned, the results of an analysis of thirty-one extreme programming (XP)/agile methods early-adopter projects completed by fourteen firms that have embraced the techniques. The survey results show that early adopters have cut costs, improved productivity and reduced time to market through the use of these methods. To help others get the most from these methods, fifteen lessons learned have been developed that build on the experiences of others. Several of these lessons run counter to the teachings of the methodology developers. The paper next provides a scorecard that rates XP’s performance in eight application domains. The paper concludes by summarizing four critical success factors for early adopters.
1
Introduction
The software industry seems to be embracing yet another change in the way it does business. Because of their emphasis on agility and time-to-market, many software shops have made the move to extreme programming/agile methods. Such methods focus on building working products instead of the documents and formal reviews that are frequently used to demonstrate progress in more classical developments. To implement these methods, adherents embrace XP practices like pair programming, refactoring and collective code ownership to generate their products. These releases, which are working versions of the product rather than prototypes, are used to demonstrate functions and features to stakeholders, who help shape their form through refactoring [1] and continuous integration. Initial reports from the field from early adopters of extreme programming/agile methods are encouraging. However, as is the case with anything new, some practices work better than others and some don’t seem to work well at all. The purpose of this paper is to address what works in practice by summarizing the initial experiences of early adopters in the form of lessons learned. The paper’s goal is to help those contemplating the move to XP/agile methods take advantage of the experience of others as they try to use these practices productively.
2
The Survey
A survey was conducted to determine whether extreme programming/agile methods have merit: whether they cut costs, reduce time-to-market and affect product quality. The survey was conducted across eight segments of the industry [2]. Four goals were established for the survey:

D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 185–196, 2002. © Springer-Verlag Berlin Heidelberg 2002
• Determine what practices early adopters of agile methods were using.
• Assess the scope and conditions governing their use.
• Assess the costs/benefits associated with their use.
• Identify lessons learned relative to getting the most from their use.

The approach used in the survey was question and answer. The questionnaire was structured around project phases to glean what early adopters felt were their most important experiences. We also tried to gather hard data to determine costs/benefits. The questionnaire was sent to software managers in eight industry groupings who had responded to the original call for information. Findings were verified via selective interviews and data analysis. The demographics of the thirty-two organizations from twenty-nine firms (i.e., several large firms had more than one organization trying to use XP techniques) that responded to our call for information are summarized in Table 1. When queried, the fourteen firms that were using or had used XP/agile methods generally characterized the thirty-one projects they were pursuing as follows:
• They were using XP/agile methods primarily to decrease the time needed to bring software products/applications to market. Cutting costs was a secondary concern.
• Over ninety percent of the projects were relatively small (typically fewer than ten participants) and being pursued as pilots or pathfinders. Only two of the projects involved more than twenty engineers.
• All thirty-one projects were in-house developments (as contrasted with contracted efforts), one year or less in duration, and low risk (the risk was in the methods, not the software development).
• Stable requirements, established architectures and a high degree of development flexibility characterized almost all of the projects involved.
• Almost eighty-five percent of the software being developed consisted of quick-to-market applications (mostly web-based and client-server oriented).
• Teams were for the most part cohesive and staffed with motivated, experienced performers. The team members were relatively young and open to new ideas. Over ninety percent of the staff had little or no experience using XP/agile methods.
• While there was some skepticism about how well they would work, most of those involved with XP/agile methods were enthusiastic about the prospects.

As expected, different people from different firms had different views about what constituted best practices. When queried, the following practices were cited as being agile or extreme:
− Collective ownership
− Concurrent development
− Continuous integration
− Customer collaboration
− Daily standup meetings
− Product demos instead of documents
− Frequent product releases
− Full stakeholder participation
− Just-in-time requirements
− Metaphors instead of architectures
− Nightly product builds
− Pair programming
− Rapid Application Development
− Refactoring
− Retrospectives
− Stories for requirements
− Team programming
− Test-driven development
Table 1. Survey responses for information on XP/agile methods performance by industry type, number of firms responding, number of firms using XP/agile methods, number of XP/agile methods projects completed to date and average size of projects Industry
Industry       # Firms   # Firms     # Firms Using      # Agile Projects   Average Size
               Polled    Responded   XP/Agile Methods   Completed          (KESLOC)*
Aerospace        30         5              1                  1                23
Computer         20         4              2                  3                32
Consultants      10         3              1                  2                25
E-business       20         6              5                 15                33
Researchers      10         2              1                  1                12
Scientific       10         2              0                  0               N/A
Software         20         3              2                  4                25
Telecom          20         4              2                  5                42
Totals          140        29             14                 31               31.8
*KESLOC = thousand Equivalent Source Lines of Code computed using formulas that normalize reused and modified code in terms of new lines of code (see Boehm [2] for a discussion of the mathematical approach used)
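The normalization formulas themselves are not given here; a common COCOMO-style approach (an assumption on our part, not necessarily the exact formulas Boehm's method uses in this survey) weights the fractions of design modified (DM), code modified (CM) and integration effort required (IM) into an adaptation adjustment factor:

```python
def equivalent_sloc(new, adapted, dm, cm, im):
    """Normalize reused/modified code into equivalent new source lines.

    dm, cm, im are fractions (0..1) of design modified, code modified and
    integration effort required. The 0.4/0.3/0.3 weights follow the classic
    COCOMO adaptation adjustment factor (an assumed formula, for illustration).
    """
    aaf = 0.4 * dm + 0.3 * cm + 0.3 * im
    return new + adapted * aaf

# 10 KSLOC new plus 20 KSLOC adapted code, lightly modified:
esloc = equivalent_sloc(new=10_000, adapted=20_000, dm=0.10, cm=0.20, im=0.50)
# about 15,000 equivalent SLOC
```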
In other words, just about any new practice that the literature says speeds products/applications to market was considered agile or extreme. Because we received such a wide range of definitions, we decided to limit further investigation to the twelve practices of XP that are outlined in Kent Beck’s book [3]. Several good books have recently been written exploring the use of these practices [4, 5], and the current literature is rich with excellent articles on the topic [6, 7].
3
Project Startup
The hardest thing for most organizations seemed to be startup. Even though they had enthusiastic staff who wanted to try the techniques, they had difficulty convincing management that XP/agile methods were more than a passing fad. Because most of the firms that responded to our survey had embraced processes in tune with the Software Engineering Institute’s Software Capability Maturity Model (CMM) [8], there was hot debate over whether the twelve principal practices of XP were suitable for firms with established processes assessed at level 2 or higher. In addition, many of the line managers seemed content to stick their heads in the sand and argue “Why change? We’re doing all right.” Those who wanted to try XP responded uniformly: “Because it takes too long to get our products to market.” To resolve the debate, most early adopters decided to use XP/agile methods on some form of pilot or pathfinder project. As previously summarized, these projects were relatively small, low risk and done with an in-house workforce. The preliminary results summarized in Table 2, relative to increasing productivity, cutting costs and reducing time-to-market, are encouraging. When queried, only three industries had moved XP/agile methods into production
(e.g., E-business, software and telecommunications). However, one can build a compelling business case for their use from the hard and soft data that we captured [9].

Table 2. Summary of the results of analyzing data supplied by the 14 organizations using XP/agile methods. These results were developed based upon an analysis of both hard and soft data supplied by the participants. Soft data was collected via questionnaire; hard data was collected via a data collection form developed for that purpose. This data was then normalized, validated and statistically analyzed using standard regression techniques. To ensure the integrity of the data collected, standard statistical tests were run to test it for homogeneity and co-linearity
Hard Data
• Productivity improvement: 15 to 23 percent average gain
• Cost reduction: 5 to 7 percent on average
• Time-to-market compression: 25 to 50 percent reduction in time
• Quality improvement: five of the firms had data showing that their defect rates were on par with their other projects when products/applications were released to the field (e.g., post-release defect rates)
Soft Data • Most used some form of survey to capture stakeholder opinions • All used recruitment, morale and other intangibles to build a case for trying and retaining agile methods • All argued passionately for continued use of agile methods based on qualitative factors • All pressed for help in working the issues which revolved around technology transfer Note: The Hawthorne effect may apply because the sample size was relatively small. Further analysis is therefore warranted to ensure that the results are valid
After management gave permission to start, the three biggest startup issues were release planning, requirements specification and architecture. Because these issues are interrelated, we treat them together. XP places less emphasis on requirements and architecture than classical methods. During the elaboration phase [10], XP seeks an effective metaphor for putting a skeleton in place that frames the development. Requirements are captured iteratively as releases are formulated, and plans are finalized after the functions and features for the new release are decided upon. Requirements are captured on 3x5 index cards as stories that users/customers tell to communicate what they want the system to do and what their priorities are. Releases are scheduled frequently, normally in three-month increments. By planning to develop a working version of the system as the first release, teams assume the pieces will come together while the system is still simple. According to early adopters, it takes some time to get used to the XP metaphor, i.e., slim requirements and skinny architectures [11]. It also takes getting used to the concept of developing systems from scratch using only a conceptual idea of what you
think the customer/user wants the system to do. Indeed, some software engineers don’t seem to know how to start this definition process. In response, a coach may be needed to kick off the effort and get the team started on the right track. The coach should start by conducting training sessions to bring the team up to speed on XP practices and to set realizable performance expectations. The two most important lessons learned follow. These lessons are new and should be used by others to shape the metaphor adopted. The first addresses how to structure your stories to get the most out of them; it suggests putting a premium on capturing performance and quality expectations in your stories in addition to functions and features. The second addresses ways to reduce potential mismatches between the architecture and XP/agile methods by focusing the effort on the application layer of the architecture. In all cases, the early adopters surveyed suggested that XP might not be appropriate for unprecedented systems (precedentedness is a measure of similarity with previously developed applications [9]). This statement is controversial because advocates argue that this is exactly where agile methods shine.
• Lesson 1 – When developing stories, focus on capturing performance and quality expectations in addition to the features and functions the user/customer wants. These can be defined in terms of acceptable end-to-end processing times and user tolerance for errors, or desired levels of quality of service, which cover most nonfunctional requirements (maintainability, security, etc.). Recognize that users who are accustomed to higher standards of excellence will not tolerate poor quality in most applications. [New]
• Lesson 2 – Localize the software to be built to the architecture’s application layer. That is where applications can be built quickly using XP techniques.
Revert to more classical development methods if additional services are needed at the infrastructure layer of the architecture to mechanize the application (e.g., XP seems to work best when the services available at the application layer of the architecture are adequate). [New]

After the metaphor has been developed, you can start developing a high-level plan for the first release. When thinking about such a plan, plan on spending a week or two having the engineers estimate what it will take to deliver the capabilities outlined in the stories. Unlike document-driven methods, the emphasis in planning is on having the engineers figure out the time and effort associated with implementing each story. In the XP sense, such stories are testable because tests are typically developed in cooperation with customers/users prior to the start of coding, to drive the definition of the acceptance criteria. As part of planning, the customer/user writes the stories, scopes what functions/features are needed, sets priorities and determines relative business value. The customers/users and software engineers collaborate to develop a way of realizing the high-priority functions/features in a reasonable timeframe [12]. If the customer doesn’t like the schedule, the team changes the content of the release, with the user’s blessing, to reflect what the engineers believe can be done in the time allowable. The following two lessons learned are aimed at getting the most out of these planning activities. Both confirm experience others have had when trying to put these methods into practice. The first recommends that you assign a team leader
to coordinate the development of release plans. The second focuses on using performance expectations to drive release content.
• Lesson 3 – When planning releases, break the work down, using stories, into tasks that can be built by teams of two to ten people collaborating in pairs to get the job done. Recognize that you must assign a team leader to coordinate assignments and be ultimately held accountable for results. Also be extremely sensitive to matching personalities when staffing pairs; otherwise there can be conflicts, and the results can be negatively affected. [Confirming]
• Lesson 4 – Consider performance expectations as you begin work on your release. Haste makes waste, especially when performance considerations have to be factored into your design after the fact. Pin performance expectations down as early in the process as you can (see Lesson 1) and continuously exercise your working version to demonstrate their achievement via your test program. [Confirming]

As these two lessons highlight, questions of scalability dominate the issues that larger teams have in harnessing the power of XP/agile methods. The data we collected shows that XP/agile methods have merit for small applications. The jury is still out for larger projects, because only two of the projects we polled were staffed with more than twenty people.
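Lessons 1 and 4 suggest treating a story's performance expectation as part of its acceptance criteria. One way to make that executable (with hypothetical names and a stand-in for the real workflow) is:

```python
from dataclasses import dataclass
import time

@dataclass
class Story:
    """A story card carrying functional and performance expectations (Lessons 1 and 4)."""
    title: str
    narrative: str
    max_end_to_end_seconds: float  # acceptable processing time stated on the card

def acceptance_check(story, operation):
    """Run the operation and verify both its result and the story's timing budget."""
    start = time.perf_counter()
    result = operation()
    elapsed = time.perf_counter() - start
    return result is not None and elapsed <= story.max_end_to_end_seconds

story = Story(
    title="Book a trip",
    narrative="A traveler books a flight and receives a confirmation.",
    max_end_to_end_seconds=2.0,
)
# A stand-in for the real booking workflow:
ok = acceptance_check(story, lambda: {"confirmation": "ABC123"})
```

Running this kind of check against every release, as Lesson 4 advises, keeps the timing budget from becoming an afterthought.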
4
Project Implementation
You are ready to start coding once you have your stories and your release plan together. While the books say you can typically start coding within two to three weeks of starting the project, the survey indicates that startup takes four to six weeks. That is because you have to staff the team and prepare them to start using the XP/agile practices. Pair programming was singled out by many as the most controversial practice during this project phase. When asked why, most early adopters replied that assigning two people to work a job normally staffed by one was counter-cultural. In addition, several firms suggested that matching pairs carefully was necessary to reduce potential personality conflicts. Two ways around these conflicts appear in our next two lessons learned, which confirm others’ experiences with XP/agile methods. Both address the need to be sensitive to how you establish pairs and whom you assign to staff them.
• Lesson 5 – Early experience with pairs indicates that personnel should be rotated periodically, at least as often as work on new releases commences. This permits tasks to be staffed with pairs that have the prerequisite skills, knowledge and abilities to work on the problem currently at hand. It also facilitates mentoring to increase skill levels. [Confirming]
• Lesson 6 – Other early adopters have recommended that a short stand-up meeting be held daily for the team to review its plans, progress and problems. As a means to reduce conflict, pairs would be selected at this meeting based upon who could best contribute to getting the job done in the most effective manner [13]. [Confirming]
The practice of having the customer/user on-site as a full-time project participant was identified as ideal. While great in theory, the survey suggests that this practice just does not seem to work in practice. The best that could be achieved in most situations by the firms surveyed was having the customer/user on-site for extended periods of time. This gives rise to the question: “Who were the users/customers for the typical applications being developed at the sites surveyed?” Unexpectedly, the users/customers for applications being developed by those in our survey ranged from executives to focus groups representing users of applications like an enterprise-wide travel system on the web. Members of these groups included secretaries, engineers, managers, sales representatives and a variety of people from other specialties. Assigning a person from such a diverse group to work full-time as part of the development team was considered unacceptable because this person couldn’t make decisions for the group at large. Even a person from the travel department could not represent the user at large [14]. As another example, the customer for a web-based supplier management system being developed at one of the software firms was the Chief Technical Officer of the firm. Because he was focused on defining the firm’s next-generation products, getting him to work full-time as part of the team for any prolonged period was simply out of the question. He participated in the development, but his average attendance was one day a week. This inability to staff the project full-time with a user/customer representative gives rise to our next lesson learned, which differs from what others have reported.
Most early adopters agree that user/customer interactions are most valuable when the functionality of new releases is being planned (e.g., as working products are released for review). [New] Some critics believe that XP involves nothing more than hacking out code [15]. While this may be true in some cases, this was not observed in the firms involved in the survey. Many had adapted their existing software processes to encompass XP/agile methods. Most liked the emphasis placed by XP/agile methods on demonstrating working product releases and its disdain for excessive documentation. Many argued convincingly that XP/agile methods were compatible with the CMM process infrastructure endorsed by the Software Engineering Institute [16]. These experiences lead to the next lesson whose aim is getting the most out of these methods. Again, this lesson tends to be controversial especially when firms don’t have established processes. • Lesson 8 – When incorporating XP/agile methods into an existing process infrastructure, put a premium on reviewing actual working product at demonstrations instead of paper reviews. Replace out-of-date practices whenever they conflict. However, keep the infrastructure because it makes sure that you address all of the right things, not a part of them. [New] Everyone surveyed agreed that the concept of frequent small working releases of the product and continuous integration made a lot of sense. The debates that occurred among early adopters revolved around how often these releases were needed and whether or not nightly builds generating daily working versions of the system were appropriate [17]. Independent of the frequency of the builds, all agreed that a
Donald J. Reifer
working version of the code that was under some form of version control should be available for testing the next day. In addition, all argued that flexibility to reprioritize release contents should be preserved by the team (assuming the user/customer is a member). Such priorities were best set by the user/customer a priori based upon some notion of the business value of the functions and/or features involved. Based on these observations, the following additional four lessons learned were developed to get the most out of XP/agile methods.
• Lesson 9 – Code must be put under some form of release control to manage changes being made to working versions. The goal is to preserve the integrity of what is released for review and testing. Early adopters tend to have their teams update code releases at least nightly after the initial working versions are delivered for open review by any team member. Changes to releases should be incorporated nightly after pairs run clean tests. A baselined working version should be released for use during the next day, if possible. [Confirming]
• Lesson 10 – A version of the system should be built nightly, composed of those code units that have cleanly completed testing. This version is needed because it contains those units/objects that others must interface with when they test their units the next day. To expedite testing, this version should be made public each morning and placed under version control (see Lesson 9 for more on configuration management experience). [Confirming]
• Lesson 11 – Releases should be planned every few months based upon a list of prioritized stories (situation-dependent; three months is the average). When push comes to shove, schedules should be preserved by pushing low priority functions and features to future releases. Performance considerations should dominate the demonstration independent of function and feature content.
Acceptance tests used for the release should be devised by customers/users whenever possible because they best know what is desired. [Confirming]
One area that most early adopters felt needed more attention was standards. Many felt that focusing on only coding standards was suboptimal. Because many of the applications cited in the survey were web-based, additional attention was needed on hypermedia design and the use of XML and HTML. In addition, many argued that more attention needed to be placed on reuse considerations, especially when products employing multi-media were being developed iteratively with high degrees of refactoring. Finally, many interviewed felt that it was the software engineer’s responsibility to design quality into the code. As a consequence, having quality assurance personnel check to ensure standards were followed was thought to be counter-productive. As a matter of fact, many pairs felt that XP’s elimination of the many watchers and checkers (external QA personnel, the process police, etc.) typically assigned to a project was a good thing. The value these people added to the project was questioned repeatedly. These experiences gave rise to the next two lessons, whose aim is to enable those using XP to exploit its many virtues.
• Lesson 12 – Standards should embrace design and reuse considerations in addition to what is necessary for pair programming, refactoring, testing and continuous integration. The best way to communicate standards is via some form of context-sensitive help provided on-line with examples of what is and isn’t good. [New]
How to Get the Most out of Extreme Programming/Agile Methods
• Lesson 13 – Make sure that those assigned to the team add value, e.g., watchers and checkers should be put to work developing product instead of offering critical remarks. [Confirming]
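The build rule in Lesson 10 can be sketched in a few lines of code. This is an illustration only; the unit names and the shape of the test results below are hypothetical, not taken from the surveyed firms.

```python
# Sketch of Lesson 10's nightly-build rule: a unit enters the nightly
# version only if every one of its tests passed cleanly. Unit names and
# the test-result structure are hypothetical.

def compose_nightly_build(test_results):
    """Return the sorted list of units eligible for tonight's build.

    test_results maps unit name -> list of pass/fail booleans, one per
    test run. Units with any failure, or no test runs at all, are held
    back so others can safely interface with the published version.
    """
    return sorted(unit for unit, runs in test_results.items()
                  if runs and all(runs))

results = {
    "order_entry": [True, True, True],  # clean: goes into the build
    "billing":     [True, False],       # one failure: held back
    "reporting":   [],                  # never tested: held back
}
print(compose_nightly_build(results))   # ['order_entry']
```

Placing only cleanly tested units in the morning baseline is what lets other pairs test against stable interfaces the next day.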
5
Project Completion
To be judged successful within most organizations, a project must deliver a high quality product to market on schedule and within budget limitations. It must also capture the knowledge gained by the project in a form that others can capitalize on. This means that firms need to put in place a process to take advantage of the lessons learned and any hard data that resulted. One of the practices most firms added that wasn’t in the recommended set was retrospectives [18]. Such reviews permitted these firms to consider past experience, capture it and use it on their next project. Because most of these XP/agile method projects were pilots, it was natural for firms to assess them and their experience as they tried to exploit the knowledge gained. This recommendation gives rise to our next lesson learned.
• Lesson 14 – Plan to conduct a retrospective when you complete your projects to pinpoint your lessons learned and capture hard data. Then, plan to use the results to help shape how you implement XP/agile practices and the coding standards. [New]
Before I close, I must comment on the forty-hour week practice heavily endorsed by the XP/agile community. Results from early adopters indicate that forty-hour weeks are still unachievable. The rationale behind this conclusion is summarized as follows:
• Many software engineers live their work and enjoy learning on the job. Even when they don’t have to, they stay in the office and work on their project or professional capabilities. Taking advantage of spare time is something new and novel for them.
• Many other software engineers are perfectionists. They need to learn when “good enough” is acceptable.
• Finally, experience shows that XP projects start slowly and end with a bang. The reason for this is that more and more hours are needed as the code requires more and more refactoring later in the project. Contrary to others, our experience shows effort levels are not flat.
These points lead to our fifteenth and final lesson learned, which follows:
• Lesson 15 – Plan for more than forty-hour weeks towards the end of the project. Effort tends to increase proportionately with the amount of refactoring done. This is because performance problems that occur as the product is built up tend to become harder and harder to resolve (i.e., the cost to fix curve is linear, not flat) and delivery dates hardly ever change. [New]
This lesson runs counter to the teachings of the methodology developers. Yet, the hard data from our survey and that of others [19] tends to show that the effort associated with refactoring is not flat. Instead, the staff needed goes up as the project progresses. Such an increase in effort late in the project forces staff to expend more than forty hours per week to deliver what’s promised.
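The arithmetic behind Lesson 15 can be made concrete with a toy model. The base hours and growth coefficient below are invented purely for illustration, not survey data; the only point is the shape of the curve when the cost to fix grows linearly rather than staying flat.

```python
# Toy model of Lesson 15: if the cost of fixing/refactoring grows linearly
# as the product builds up, weekly effort cannot stay flat at forty hours.
# base_hours and growth are made-up illustrative coefficients.

def weekly_effort(week, base_hours=40.0, growth=1.5):
    """Hours needed in a given project week under a linear cost-to-fix curve."""
    return base_hours + growth * week

efforts = [weekly_effort(w) for w in range(12)]
print(efforts[0], efforts[-1])  # 40.0 at the start, 56.5 by week 11
```

Under a flat curve (growth of zero) the schedule could hold at forty hours; any positive linear growth pushes late-project weeks past it, which is the pattern the early adopters reported.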
6
A Scorecard for XP/Agile Methods
The scorecard for XP/agile methods based on these thirty-one projects is summarized in Table 3. While seemingly positive, one must remember that the majority of these projects were XP/agile method pilots and pathfinders. The projects are small, short in duration and relatively low risk. The question on the minds of most early adopters we surveyed was “Will XP/agile practices scale appropriately when used on larger, more risky developments?” They were also concerned with the retraining problem. While initial experiences were promising, the jury is still out based on the hard data we analyzed. Therefore, we need to continue to monitor results closely to determine how to get the most out of XP/agile methods. We tried to compare the results/experiences of the larger teams to the general population dominated by small projects to answer these scalability concerns. Our efforts were not fruitful statistically because we had just two projects employing teams in excess of twenty people in our XP database. Scalability therefore remains an issue that we will have to track because we don’t have enough data to develop meaningful conclusions.

Table 3. XP/agile method scorecard for each of our eight industry areas rating relative performance (budget, schedule and quality) based on data collected on thirty-one pilot and pathfinder projects

Industry      # Agile Projects Completed   Budget Performance*    Schedule Achievement*   Product Quality+
Aerospace     1                            Better than Average    Better than Average     Below Par
Computer      3                            Average                Better than Average     No data
Consultants   2                            Average                Average                 No data
E-business    15                           Better than Average    Better than Average     Above Par
Researchers   1                            Average                Average                 No data
Scientific    0                            N/A                    N/A                     N/A
Software      4                            Average                Average                 Par
Telecom       5                            Better than Average    Better than Average     Par
Totals        31

* Average represents what is normal performance for similar projects (using existing metrics)
+ Par represents nominal quality rating for their projects (using metrics that are already in place)
• Proper Domain of Fit – Recognize that XP/agile methods have so far been shown to work best on relatively small projects (less than ten people) where the systems being developed are precedented, requirements are stable and the architecture is well established. This conclusion runs counter to handling rapid change, which is one of the big selling points for XP/agile methods. But scalability and application of the methods to high-risk situations tend to be the two issues that continue to concern those considering embracing the methods.
• Process Focus – Adapt and refine rather than throw away your existing processes when you adopt XP/agile methods. Experience shows that XP/agile methods work best when they are integrated into and become part of an existing process framework that establishes the way your firm develops software. Recognize that by using this approach you can adapt and exploit supportive processes (version control, design practices, etc.) whenever they make sense and are appropriate to the task at hand.
• Suitable State of Organizational Readiness – Realize that the move to XP/agile methods represents a culture change in most firms. In order to use these methods successfully, you must prepare for and foster change to this new way of doing business. Preparation is best achieved by educating and training the workforce so that they become equipped with the skills, knowledge and abilities to get the job done on schedule and within budget constraints. In addition, you must also convince your customers/users to adopt new business practices. Then, you must work with your customers/users to set realizable expectations.
• Appropriate Practice Set – Don’t be afraid to put additional practices into place when they are needed to get the job done (daily standup meeting, project retrospective, etc.). Being overly zealous implementing new concepts often interferes with your organization’s ability to perform. Make sure that what results as you extend the practice set stays aligned with the guiding principles of the agile manifesto [20]. This manifesto should provide the overarching concepts to frame your implementation.
To answer the questions raised relative to whether XP/agile methods will scale to larger projects, we plan to continue our data collection and analysis activities during the next few months. We also plan to analyze several of the larger projects being done in depth to understand more fully what it takes to exploit these new methods to the maximum extent possible.
Acknowledgments I would like to acknowledge and thank those individuals from the fourteen firms surveyed who supplied me the information upon which this paper is based. I appreciate their time and insights. I would also like to thank Dr. Barry Boehm, Dr. Sunita Chulani, Dr. Steven Fraser and my other reviewers for making some very positive suggestions.
References
1. Fowler, Martin: Refactoring: Improving the Design of Existing Code, Addison-Wesley, New York (1999).
2. Reifer, Donald J.: How Good Are Agile Methods? Software, IEEE, New York, Issue 4 (2002).
3. Beck, Kent: Extreme Programming Explained, Addison-Wesley, New York (2000).
4. Auer, Ken and Miller, Roy: Extreme Programming Applied, Addison-Wesley, New York (2002).
5. Wake, William C.: Extreme Programming Explored, Addison-Wesley, New York (2002).
6. Highsmith, Jim: Does Agility Work? Software Development magazine, San Francisco, CA, Vol. 10, No. 6 (2002) 30-36.
7. Rising, Linda: Agile Meetings, The Software Testing & Quality Engineering (STQE) magazine, Orange Park, FL, Vol. 4, Issue 3 (2002) 42-46.
8. Paulk, Mark C., Weber, Charles V., Curtis, Bill and Chrissis, Mary Beth: The Capability Maturity Model: Guidelines for Improving the Software Process, Addison-Wesley, New York (1995).
9. Reifer, Donald J.: Making the Software Business Case: Improvement by the Numbers, Addison-Wesley, New York (2001).
10. Kruchten, Philippe: The Rational Unified Process: An Introduction, Addison-Wesley, New York (1998).
11. Boehm, Barry W., Abts, Chris, Brown, A. Winsor, Chulani, Sunita, Clark, Bradford K., Horowitz, Ellis, Madachy, Ray, Reifer, Donald and Steece, Bert: Software Cost Estimation with COCOMO II, Prentice Hall, New York (2000) 31-33.
12. Highsmith, James A. III: Adaptive Software Development: A Collaborative Approach to Managing Complex Systems, Dorset House Publishing, New York (2000).
13. Williams, Laurie: Extreme Programming (and Pair Programming), In: Proceedings of the University of Southern California-Center for Software Engineering Annual Research Review, Los Angeles, CA (2002).
14. Constantine, Larry L.: The Peopleware Papers, Yourdon Press, New York (2001) (see thoughts on consensus building on 13-16).
15. Rakitin, Steven R.: Manifesto Elicits Cynicism, Letter to the Editor, Computer, IEEE, New York, Issue 12 (2001) 4.
16. Paulk, Mark C.: Extreme Programming from a CMM Perspective, Software, IEEE, New York, Issue 6 (2001) 19-26.
17. Cusumano, Michael A. and Selby, Richard W.: Microsoft Secrets, Simon & Schuster: A Touchstone Book, New York (1998) (see notes on builds on pp. 263-271).
18. Kerth, Norman L.: Project Retrospectives: A Handbook for Team Reviews, Dorset House Publishing, New York (2001).
19. Boehm, Barry: Private Communications (2002).
20. Cockburn, Alistair: Agile Software Development, Addison-Wesley, New York (2002).
Empirical Findings in Agile Methods
Mikael Lindvall1, Vic Basili1,4, Barry Boehm3, Patricia Costa1, Kathleen Dangle1, Forrest Shull1, Roseanne Tesoriero1, Laurie Williams2, and Marvin Zelkowitz1,4
1 Fraunhofer Center for Experimental Software Engineering, Maryland
{mlindvall, vbasili, pcosta, kdangle, fshull, rtesoriero, mzelkowitz}@fc-md.umd.edu
2 North Carolina State University
williams@csc.ncsu.edu
3 University of Southern California Center for Software Engineering
boehm@sunset.usc.edu
4 University of Maryland Empirical Software Engineering Group
{basili, mvz}@cs.umd.edu
Abstract. In recent years, the use of, interest in, and controversy about Agile methodologies have realized dramatic growth. Anecdotal evidence is rising regarding the effectiveness of agile methodologies in certain environments and for specified projects. However, collection and analysis of empirical evidence of this effectiveness, and classification of appropriate environments for Agile projects, have not been conducted. Researchers from four institutions organized an eWorkshop to synchronously and virtually discuss and gather experiences and knowledge from eighteen Agile experts spread across the globe. These experts characterized Agile Methods and communicated experiences using these methods on small to very large teams. They discussed the importance of staffing Agile teams with highly skilled developers. They shared common success factors and identified warning signs of problems in Agile projects. These and other findings and heuristics gathered through this valuable exchange can be useful to researchers and to practitioners as they establish an experience base for better decision making.
1
The Rise of Agile Methods
Plan-driven methods are those in which work begins with the elicitation and documentation of a “complete” set of requirements, followed by architectural and high-level design development and inspection. Examples of plan-driven methods include various waterfall and iterative approaches, such as the Personal Software Process (PSP) [1]. Beginning in the mid-1990’s, some practitioners found these initial steps of requirements documentation and architecture and design development frustrating and, perhaps, impossible [2]. As Barry Boehm [3] suggests, these plan-driven methods may well start to pose difficulties when change rates are still relatively low. The industry and the technology move too fast, and customers have become increasingly unable to definitively state their needs up front. As a result, several consultants have independently developed methodologies and practices to embrace and respond to the inevitable change they were experiencing. These methodologies and practices are
D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 197–207, 2002. © Springer-Verlag Berlin Heidelberg 2002
Mikael Lindvall et al.
based on iterative enhancement, a technique which was introduced in 1975 [4], and have come to be known as Agile Methodologies [2, 5]. Agile Methodologies are gaining popularity in industry although they comprise a mix of accepted and controversial software engineering practices. It is quite likely that the software industry will find that specific project characteristics will determine the prudence of using an agile or a plan-driven methodology – or a hybrid of the two. In recent years, there have been many stories and anecdotes [6-8] of industrial teams experiencing success with Agile methodologies. There is, however, an urgent need to empirically assess the applicability of these methods, in a structured manner, in order to build an experience base for better decision-making. This paper contributes to the experience base and discusses the findings of a synchronous, virtual eWorkshop in which experiences and knowledge were gathered from and shared between Agile experts located throughout the world.
2
An Experience Base for Software Engineering
In order to reach their goals, software development teams need to understand and choose the right models and techniques to support their projects. They must consider key questions such as: What is the best life-cycle model to choose for a particular project? What is an appropriate balance of effort between documenting the work and getting the product implemented? When does it pay off to spend major efforts on planning in advance and avoid change, and when is it more beneficial to plan less rigorously and embrace change? The goal of the NSF-sponsored Center for Empirically-Based Software Engineering (CeBASE)1 is to collect, analyze, document, and disseminate knowledge on software engineering gained from experiments, case studies, observations, interviews, expert discussions and real world projects. A central activity toward achieving this goal has been the running of “eWorkshops” (or on-line meetings) that capture expert knowledge to formulate heuristics on a particular software engineering topic. The CeBASE project defined the eWorkshop and has used the technology to collect valuable empirical evidence on defect reduction and COTS [9]. The rise of Agile Methods provides a fruitful area for such empirical research. This paper discusses the results of the first eWorkshop on Agile Methods sponsored by the Fraunhofer Center Maryland and North Carolina State University using the CeBASE eWorkshop technology. The discussion items are presented along with an encapsulated summary of the expert discussion. The heuristics can be useful both to researchers (for pointing out gaps in the current state of the knowledge requiring further investigation) and to practitioners (for benchmarking or setting expectations about development practices).
3
Collecting Expert Knowledge on Agile Methods
Workshops in which experts discuss their findings and record their discussions are a classic method for creating and disseminating knowledge. Workshops, however, possess limitations: 1) experts are spread all over the world and would have to travel to
1 http://www.CeBASE.org
participate, and 2) workshops are usually oral presentations and discussions, which are generally not captured for further analysis. The eWorkshops are designed to help overcome these problems. The eWorkshop is an on-line meeting that replaces the usual face-to-face workshop. While it uses a Web-based chat-application, the session is structured to accommodate the needs of a workshop without becoming an unconstrained on-line chat discussion. The goal of the eWorkshop is to synthesize new knowledge from a group of experts in an efficient and inexpensive manner in order to populate an experience base. More details about the eWorkshop tool and process can be found in [10]. The goal of the Agile workshop, held in April 2002, was to create a set of heuristics that represent what experts in the field consider to be the current state of understanding about Agile Methods. The participants in this event were experts in Agile Methods. Our lead discussants (the workshop leaders and authors of this paper) formed part of the team that interacted with an international group of invited participant experts. The names of these 18 participants are listed in the acknowledgements at the end of the paper.
4
Seeding the eDiscussion
Participants of eWorkshops prepare for the discussion by reading relevant material, preparing a position statement reacting to proposed discussion points, and reading the position statements of the other discussants. For this eWorkshop, Barry Boehm’s January 2002 IEEE Computer article [3] and the Agile Manifesto [11-13] served as background material. The Agile Manifesto documents the priorities of its signers. They value:
• Individuals and interactions over processes and tools
• Working software over comprehensive documentation
• Customer collaboration over contract negotiation
• Responding to change over following a plan
Many in industry debate the prudence of these values. Steven Rakitin comments [14] that the items on the right are essential, while those on the left only serve as excuses for hackers to keep on irresponsibly throwing code together with no regard for engineering discipline. Another example is Matt Stephens’ critical analysis of XP and its applicability2. It is important to remember that Agile includes many different methodologies, of which the best known include:
• Extreme Programming (XP) [15-17]
• Scrum [18]
• Feature Driven Development (FDD) [19]
• Dynamic Systems Development Method (DSDM) [20]
• Crystal [5]
• Agile modeling [21]
In his article, Boehm brings up a number of different characteristics regarding Agile Methods compared to what he calls “Plan-Driven,” the more traditional waterfall,
2 http://www.softwarereality.com/lifecycle/xp/case_against_xp.jsp
incremental or spiral methods. Boehm contends that Agile, as described by Highsmith and Cockburn, emphasizes several critical people-factors, such as amicability, talent, skill, and communication, at the same time noting that 49.99% of the world’s software developers are below average in these areas. While Agile does not require uniformly high-capability people, it relies on tacit knowledge to a higher degree than plan-driven projects that emphasize documentation. Boehm’s point is that there is a risk that this situation leads to architectural mistakes that cannot be easily detected by external reviewers due to the lack of documentation. Boehm also notes that Cockburn and Highsmith conclude that “Agile development is more difficult for larger teams” and that plan-driven organizations scale up better. At the same time, the bureaucracy created by plan-driven processes does not fit small projects either. This, again, ties back to the question of selecting the right practices for the task at hand. Boehm questions the applicability of the Agile emphasis on simplicity. XP’s philosophy of YAGNI (You Aren’t Going to Need It) [15] is a symbol of the recommended simplicity that emphasizes getting rid of architectural features that do not support the current version. Boehm feels this approach fits situations where future requirements are unknown. In cases where future requirements are known, the risk is, however, that the lack of architectural support could cause severe architectural problems later. This raises questions like: What is the right balance between creating a grandiose architecture up-front and adding features as they are needed? Boehm contends that plan-driven processes are most needed in high-assurance software. Traditional goals of plan-driven processes, such as predictability, repeatability, and optimization, are often characteristics of reliable safety critical software development.
Knowing for what kinds of applications different practices (traditional or agile) are most beneficial is crucial, especially for safety critical applications where human lives can be at stake if the software fails. The eWorkshop organizers planned to discuss each of these issues (people, team size, design simplicity, applicability for high assurance systems) outlined in Boehm’s article in relation to the Agile Manifesto. However, the discussion took on its own shape based on the interests and desires of the discussants. Ultimately, the following issues were discussed:
1. The definition of agile
2. Selecting projects suitable for agile
   a. Size requirements (and scale-up strategies)
   b. Personnel requirements
   c. Use with critical, reliable, safe systems
3. Introducing the methodology
   a. Ideas for training
4. Managing the project
   a. Success factors
   b. Warning signs
   c. Refactoring
   d. Documentation
Each of these will be discussed in the following section.
5
Findings
During the eWorkshop on Agile Methods, participants contributed their own data and experiences on various topics. Excerpts of the discussions are presented below, along with the resulting statements about Agile Methods. The full discussion summary can be found on the FC-MD web site.3
5.1 Definition
The eWorkshop began with a discussion regarding the definition of Agile and its characteristics, resulting in the following working definition. Agile Methods are:
• Iterative (Delivers a full system at the very beginning and then changes the functionality of each subsystem with each new release. [22])
• Incremental (The system as specified in the requirements is partitioned into small subsystems by functionality. New functionality is added with each new release. [22])
• Self-organizing (The team has the autonomy to organize itself to best complete the work items.)
• Emergent (Technology and requirements are “allowed” to emerge through the product development cycle.)
All Agile methods follow the four values and 12 principles of the Agile Manifesto4.
5.2 Selecting Projects Suitable for Agile Methods
5.2.1 Project Size
The most important factor that determines when Agile is applicable is probably project size. The goal of the first topic was to collect experience regarding the size of projects that have been using Agile in order to determine when it is applicable. From the discussion it became clear that there is:
• Plenty of experience of teams of up to 12 people
• Some descriptions of teams around size 25
• A few data points of teams of up to 100 people, e.g. 45- and 90-person teams, described in Agile Software Development [5]
• Isolated descriptions of teams larger than 100 people (e.g. teams of 150 and 800 people were mentioned and documented in [2]).
Many participants felt that any team could be agile, regardless of the team size. Alistair Cockburn argued that size is an issue. As size grows, coordinating interfaces becomes a dominant issue. Agile with face-to-face communication breaks down and becomes more difficult and complex past 20-40 people. Most participants agreed, but think that this statement is true for any development process. Past 20-40 people, some kind of scale-up strategies must be applied.
3 http://fc-md.umd.edu/projects/Agile/main.htm
4 http://www.agilemanifesto.org/
One scale-up strategy that was mentioned was the organization of large projects into teams of teams. On one occasion, an 800-person team was, for example, organized using “scrums of scrums” [18]. Each team was staffed with members from multiple product lines in order to create a widespread understanding of the project as a whole. Regular but short meetings of cross-project sub-teams (senior people or common technical areas) were held to coordinate the project and its many teams of teams. It was pointed out that a core team responsible for architecture and standards (also referred to as glue) is needed in order for this configuration to work. These people work actively with the sub-teams and coordinate the work. Effective ways of coordinating multiple teams include yearly conferences to align interfaces, rotation of people between teams in 3-month internships, and shared test case results. Examples of strategies for coping with larger teams are documented in Jim Highsmith’s Agile Software Development Ecosystems [2], in which the 800-person team is described.
5.2.2 Personnel
There is an ongoing debate about whether or not agile requires “good people” to be effective. This is an important argument to counter, as many believe that “good people” can make just about anything happen and that specific practices are not important when you work with good people. This suggests that perhaps the success of Agile methods could be attributed to the teams of good folks, rather than the practices and principles. On the other hand, participants argued that Agile Methods are intrinsically valuable. Participants agreed that a certain percentage of experienced people are needed for a successful Agile project.
There was some consensus that 25%-33% of the project personnel must be “competent and experienced.” “Competent” in this context means:
• Possess real-world experience in the technology domain
• Have built similar systems in the past
• Possess good people & communication skills
It was noted that experience with actually building systems is much more important than experience with Agile development methods. The level of experience might even be as low as 10% if the teams practice pair programming [23] and if the makeup of the specific programmers in each pair is fairly dynamic over the project cycle (termed “pair rotation”). Programmers on teams that practice pair rotation have an enhanced environment for mentoring and for learning from each other.
5.2.3 Criticality, Reliability, Safety Issues
One of the most widespread criticisms of Agile methods is that they do not work for systems that have criticality, reliability and safety requirements. There was some disagreement about suitability for these types of projects. Some participants felt that Agile Methods work if performance requirements are made explicit early, and if proper levels of testing can be planned for. Others argue that Agile best fits applications that can be built “bare bones” very quickly, especially applications that spend most of their lifetime in maintenance.
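Making a performance requirement explicit early, as suggested above, usually means encoding it as an automated acceptance test. The sketch below uses Python's unittest purely as a stand-in for the xUnit-family harnesses of the period; the lookup function and the 200 ms budget are invented for the example.

```python
# Hedged sketch: an acceptance test for an explicit nonfunctional
# requirement ("a customer lookup must answer within 200 ms"). The
# customer_lookup function and the budget are hypothetical.
import time
import unittest

def customer_lookup(customer_id, _db={42: "Acme Corp"}):
    time.sleep(0.01)            # stand-in for real query latency
    return _db.get(customer_id)

class AcceptanceTest(unittest.TestCase):
    BUDGET_SECONDS = 0.2        # the explicit performance requirement

    def test_lookup_answers_within_budget(self):
        start = time.perf_counter()
        result = customer_lookup(42)
        elapsed = time.perf_counter() - start
        self.assertEqual(result, "Acme Corp")
        self.assertLess(elapsed, self.BUDGET_SECONDS)

# Run without exiting the interpreter so the sketch stays embeddable.
unittest.main(argv=["acceptance"], exit=False, verbosity=0)
```

The structure (one explicit budget constant, one assertion against it) carries over directly to the JUnit-style tests XP teams actually wrote, which is what makes such requirements checkable on every release.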
There was also some disagreement about the best Agile Methods for critical projects. A consensus seemed to form that the Agile emphasis on testing, particularly the test-driven development practice of XP, is the key to working with these projects. Because all of the tests have to be passed before release, projects developed with XP can adhere to strict (or safety) requirements. Customers can write acceptance tests that measure nonfunctional requirements, but they are more difficult and may require more sophisticated environments than JUnit tests. Many participants felt that it is easier to address critical issues since the customer gives requirements, makes important issues explicit early and provides continual input. The phrase “responsibly responding to change” implies that there is a need to investigate the source of the change and adjust the solution accordingly, not just respond and move on. When applied right, “test first” satisfies this requirement.
5.3 Introducing Agile Methods: Training Requirements
An important issue is how to introduce Agile Methods in an organization and how much formal training is required before a team can start using it. A majority (though not all) of the participants felt that Agile Methods require less formal training than traditional methods. For example, pair programming helps minimize what is needed in terms of training, because people mentor each other [24]. This kind of mentoring (by some referred to as tacit knowledge transfer) is argued to be more important than explicit training. The emphasis is rather on skill development, not on learning Agile Methods. Training in how to apply Agile Methods can many times be done as self-training. Some participants have seen teams train themselves successfully. The conclusion was that there should be enough training material available for XP, Crystal, Scrum, and FDD.
5.4 Project Management
5.4.1 Success Factors
One of the most effective ways to learn from previous experience is to analyze past projects from the perspective of success factors. The three most important success factors identified among the participants were culture, people, and communication. To be Agile is a cultural thing. If the culture is not right, then the organization cannot be Agile. In addition, teams need some amount of local control; they must have the ability to adapt working practices as they feel appropriate. The culture must also be supportive of negotiation, as negotiation is a big part of the Agile culture. As discussed above, it is important to have competent team members. Organizations using Agile use fewer, but more competent, people. These people must be trusted, and the organization must be willing to live with the decisions developers make, not consistently second-guess them. Organizations that want to be Agile need an environment that facilitates rapid communication between team members. Examples are physically co-located teams and pair programming. It was pointed out that success factors are not free and that organizations need to carefully implement these success factors in order for them to happen. The participants concluded that Agile Methods are more appropriate when requirements are emergent and rapidly changing (and there is always some technical uncertainty!). Another factor that is critical for success is fast feedback from the customer. In fact, Agile is based on close interaction with the customer and expects the customer to be on site for the quickest possible feedback, because customer feedback is viewed as such a critical success factor.
204
Mikael Lindvall et al.
5.4.2 Warning Signs
A critical part of project management is recognizing early warning signs that indicate that something has gone wrong. The question posed to participants was: how can management know when to take corrective action to minimize risks? Participants concluded that the daily meetings provide a useful way of detecting problems. Because of the general openness of the project, and because discussion of these issues is encouraged during the daily meeting, people will bring up problems. Low morale expressed by the people in the daily meeting will also reveal that something has gone wrong and that the project manager has to deal with it. Another indicator is when “useless documentation” is being produced, even though it can be hard to determine what useless documentation is. Probably the most important warning sign is when the team is getting behind on planned iterations. As a result, having frequent iterations is very important for frequent monitoring of this warning sign.
5.4.3 Refactoring
A key tenet of Agile methodologies (especially XP) is refactoring [25]. Refactoring means improving the design of existing code without changing the functionality of the system. The different forms of refactoring involve simplifying complex statements, abstracting common solutions into reusable code, and removing duplicate code. Not all participants were comfortable with refactoring the architecture of a system, because such refactoring would affect all internal and external stakeholders. Instead, the approach should be frequent refactoring of reasonably sized code, keeping the scope down so that changes remain more local.
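The forms of refactoring listed above can be illustrated with a small, hypothetical Java sketch (the class and method names are invented for illustration). The “before” version duplicates a computation; the “after” version abstracts it into one reusable method without changing behavior, which is the defining property of a refactoring:

```java
// Before: the discount calculation is duplicated in both methods.
class OrderReportBefore {
    double memberTotal(double price, int qty)   { double t = price * qty; return t - t * 0.10; }
    double employeeTotal(double price, int qty) { double t = price * qty; return t - t * 0.25; }
}

// After: the common solution is abstracted into one private method,
// removing the duplication while preserving the observable behavior.
class OrderReportAfter {
    private double discountedTotal(double price, int qty, double rate) {
        double total = price * qty;
        return total - total * rate;
    }
    double memberTotal(double price, int qty)   { return discountedTotal(price, qty, 0.10); }
    double employeeTotal(double price, int qty) { return discountedTotal(price, qty, 0.25); }
}
```

An automated test asserting that both versions return identical totals is exactly the kind of “safety net” that makes such changes low-risk.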
Most participants felt that large-scale refactoring is not a problem, because such refactorings are frequently necessary anyway and, as a matter of fact, are more feasible using Agile Methods. There was a strong feeling among participants that traditional “BDUF”5 is rarely on target, but the lack of applicability is not fed back to the team that created the BDUF, so they do not learn from experience. It was again emphasized that testing is the major issue in Agile. Big architectural changes do not need to be risky, for example, if a set of automated tests is provided as a “safety net.”
5.4.4 Documentation
Product and project documentation is a topic that has drawn much attention in discussions about Agile. Is any documentation necessary at all? If so, how do you know how much? Scott Ambler commented that documentation becomes out of date and should be updated only “when it hurts.” Documentation is a poor form of communication, but sometimes it is necessary in order to retain critical information over time. Many organizations demand more documentation than is needed. Organizations’ goal should be to communicate effectively, and documentation should be one of the last options to fulfill that goal. Barry Boehm mentioned that a documented project makes it easier for an outside expert to diagnose problems. Kent Beck disagreed, saying that, as an outside expert who spends a large percentage of his time diagnosing projects, he is looking for people “stuff” (like quiet asides) and not technical details. Bil Kleb said that with Agile Methods, documentation is assigned a cost and its extent is determined by the customer (excepting internal documentation). Scott Ambler suggested his Agile Documentation essay6 as a good reference for this topic.

5 Big Design Up Front
6 Conclusions
The decision whether to use a certain software development methodology is not trivial and depends on many factors. Our approach to supporting the selection of methodologies is based on collecting and analyzing experience from the application of methodologies, as well as the context in which that experience was gained. This experience forms an experience base, and as new experience is gained, the previous experience is refined and the experience base grows. The experience base evolves over time into an asset that can support and guide future projects in selecting the most appropriate methodology for the task at hand. This expert discussion attempted to collect experience from applying Agile Methods. It was conducted by identifying and analyzing some of the most important factors related to Agile Methods and their characteristics. A post-analysis of the discussion refined and structured the results. Several lessons can be learned from this discussion; lessons that seed the experience base and that can be useful to those considering Agile Methods in their organization. These lessons should be carefully examined and challenged by future projects, and the circumstances under which they do and do not hold should be captured. The lessons gained were discussed in the paper. A summary is provided below:
• Any team could be Agile, regardless of team size, but size is an issue because more people make communication harder. There is much experience from small teams. There is less for larger teams, for which scale-up strategies need to be applied.
• Experience is important for an Agile project to succeed, but experience with actually building systems is much more important than experience with Agile Methods. It was estimated that 25%-33% of the project personnel must be “competent and experienced”, but the necessary percentage might even be as low as 10% if the teams practice pair programming, because pair programmers mentor each other.
• Reliable and safety-critical projects can be conducted using Agile Methods. The key is that performance requirements are made explicit early and proper levels of testing are planned. It is easier to address critical issues using Agile Methods, since the customer gives requirements, makes important things explicit early, and provides continual input.
• Agile Methods require less formal training than traditional methods. Pair programming helps minimize what is needed in terms of training, because people mentor each other. This mentoring is more important than regular training, which can often be completed as self-training. Training material is available in particular for XP, Crystal, Scrum, and FDD.

6 http://www.agilemodeling.com/essays/agileArchitecture.htm
• The three most important success factors are culture, people, and communication. Agile Methods need cultural support; otherwise they will not succeed. Competent team members are crucial. Agile Methods use fewer, but more competent, people. Physically co-located teams and pair programming support rapid communication. Close interaction with the customer and frequent customer feedback are critical success factors.
• Early warning signs can be spotted in Agile projects, e.g., low morale expressed during the daily meeting. Other signs are the production of “useless documentation” and delays of planned iterations.
• Refactoring should be done frequently and on reasonably sized code, keeping the scope down and local. Large-scale refactoring is not a problem, and is more feasible using Agile Methods. Traditional “BDUF” is a waste of time and does not lead to a learning experience. Big architectural changes do not need to be risky if a set of automated tests is maintained.
• Documentation should be assigned a cost, and its extent should be determined by the customer. Many organizations demand more than is needed. The goal should be to communicate effectively, and documentation should be the last option.
We have an ambitious goal of collecting relevant, empirically based software engineering knowledge. Based on our experiences on the topic of Agile Methods, the eWorkshop has been shown to be a mechanism for inexpensively and efficiently capturing this information. It has been useful for discussing important Agile topics, and we have obtained critical information regarding experience from real-world projects using Agile Methods. To continue this activity, we will run a second eWorkshop on Agile Methods in 2002. It will be a more detailed discussion focusing on a more specific set of topics, in order to collect even more detailed information about Agile Methods and their characteristics.
We believe this is an important activity as practitioners need to understand when and under what circumstances a certain method or process is optimal and how it should be tailored to fit the local context.
Acknowledgements
We would like to recognize our expert contributors: Scott Ambler (Ronin International, Inc.), Ken Auer (RoleModel Software, Inc.), Kent Beck (founder and director of the Three Rivers Institute), Winsor Brown (University of Southern California), Alistair Cockburn (Humans and Technology), Hakan Erdogmus (National Research Council of Canada), Peter Hantos (Xerox), Philip Johnson (University of Hawaii), Bil Kleb (NASA Langley Research Center), Tim Mackinnon (Connextra Ltd.), Joel Martin (National Research Council of Canada), Frank Maurer (University of Calgary), Atif Memon (University of Maryland and Fraunhofer Center for Experimental Software Engineering), Granville (Randy) Miller (TogetherSoft), Gary Pollice (Rational Software), Ken Schwaber (Advanced Development Methods, Inc., and one of the developers of Scrum), Don Wells (ExtremeProgramming.org), and William Wood (NASA Langley Research Center). This work is partially sponsored by NSF grant CCR0086078, establishing the Center for Empirically Based Software Engineering (CeBASE).
References
1. Humphrey, W.S., A Discipline for Software Engineering. SEI Series in Software Engineering, ed. P. Freeman and J. Musa. 1995, Reading, Massachusetts: Addison Wesley Longman, Inc.
2. Highsmith, J., Agile Software Development Ecosystems. The Agile Software Development Series, ed. A. Cockburn and J. Highsmith. 2002, Boston, MA: Addison-Wesley.
3. Boehm, B., Get Ready for Agile Methods, with Care. IEEE Computer, 2002. 35(1): p. 64-69.
4. Basili, V.R. and A.J. Turner, Iterative Enhancement: A Practical Technique for Software Development. IEEE Transactions on Software Engineering, 1975. 1(4).
5. Cockburn, A., Agile Software Development. The Agile Software Development Series, ed. A. Cockburn and J. Highsmith. 2001, Reading, Massachusetts: Addison Wesley Longman.
6. Marchesi, M., et al., eds. Extreme Programming Perspectives. XP Series, ed. K. Beck. 2002, Addison Wesley: Boston.
7. Marchesi, M. and G. Succi, eds. Extreme Programming Examined. XP Series, ed. K. Beck. 2001, Addison Wesley: Boston.
8. Highsmith, J., Does Agility Work? Software Development, 2002. 10(6): p. 30-37.
9. Shull, F., et al., What We Have Learned about Fighting Defects. In International Software Metrics Symposium. 2002. Ottawa, Canada.
10. Basili, V.R., et al., Building an Experience Base for Software Engineering: A Report on the First CeBASE eWorkshop. In PROFES (Product Focused Software Process Improvement). 2001.
11. Highsmith, J. and A. Cockburn, Agile Software Development: The Business of Innovation. IEEE Computer, 2001. 34(12).
12. Cockburn, A. and J. Highsmith, Agile Software Development: The People Factor. IEEE Computer, 2001. 34(11).
13. Beck, K., et al., The Agile Manifesto. 2001: http://www.agileAlliance.org.
14. Rakitin, S., Manifesto Elicits Cynicism. IEEE Computer, 2001. 34(12).
15. Beck, K., Extreme Programming Explained: Embrace Change. 2000, Reading, Massachusetts: Addison-Wesley.
16. Auer, K. and R. Miller, XP Applied. 2001, Reading, Massachusetts: Addison Wesley.
17. Jeffries, R., A. Anderson, and C. Hendrickson, Extreme Programming Installed. The XP Series, ed. K. Beck. 2001, Upper Saddle River, NJ: Addison Wesley.
18. Schwaber, K. and M. Beedle, Agile Software Development with SCRUM. 2002: Prentice Hall.
19. Coad, P., J. deLuca, and E. Lefebvre, Java Modeling in Color with UML. 1999: Prentice Hall.
20. Stapleton, J., DSDM: The Method in Practice. 1997: Addison Wesley Longman.
21. Ambler, S.W., Agile Modeling. 2002: John Wiley and Sons.
22. Pfleeger, S.L., Software Engineering: Theory and Practice. 1998, Upper Saddle River, NJ: Prentice Hall. 1-44.
23. Williams, L., et al., Strengthening the Case for Pair-Programming. IEEE Software, 2000. p. 19-25.
24. Palmieri, D., Knowledge Management through Pair Programming, in Computer Science. 2002, North Carolina State University: Raleigh, NC.
25. Fowler, M., et al., Refactoring: Improving the Design of Existing Code. 1999, Reading, Massachusetts: Addison Wesley.
Exploring the Efficacy of Distributed Pair Programming

Prashant Baheti1, Edward Gehringer2, and David Stotts3

1 Dept. of Computer Science, North Carolina State University, Raleigh, NC 27695
ppbaheti@unity.ncsu.edu
2 Dept. of Computer Science, Dept. of ECE, North Carolina State University, Raleigh, NC 27695
efg@ncsu.edu
3 Dept. of Computer Science, University of North Carolina, Chapel Hill, NC 27599
stotts@cs.unc.edu
Abstract. Pair programming is one of the twelve practices of Extreme Programming (XP). Pair programming is usually performed by programmers who are collocated – working in front of the same monitor. But the inevitability of distributed development of software gives rise to important questions: How effective is pair programming if the pairs are not physically next to each other? What if the programmers are geographically distributed? An experiment was conducted at North Carolina State University to compare different working arrangements of student teams developing object-oriented software. Teams worked in both collocated and distributed environments; some teams practiced pair programming while others did not. In particular, we compared the software developed by virtual teams using distributed pair programming against that of collocated teams using pair programming and of virtual teams that did not employ distributed pair programming. The results of the experiment indicate that it is feasible to develop software using distributed pair programming, and that the resulting software is comparable in productivity and quality to software developed in collocated teams or in virtual teams without pair programming.
1 Introduction
Increasingly, programmers are working in geographically distributed teams. The trends toward teleworking, distance education, and globally distributed organizations are making these distributed teams an absolute necessity. These trends are beneficial in many ways, particularly for those in geographically disadvantaged areas. However, it is not believed that any of these arrangements makes programming more effective than if all the programmers were, indeed, collocated. Therefore, organizations must strive to maximize the efficiency and effectiveness of these unavoidably distributed programmers and teams. This paper describes the development and study of a technique tailored for distributed programming teams.
Pair programming is a style of programming in which two programmers work side by side at one computer, continuously collaborating on the same design, algorithm, code, or test. One of the pair, called the driver, is typing at the computer or writing down a design. The other partner, called the navigator, has many jobs. One of the roles of the navigator is to observe the work of the driver, looking for tactical and strategic defects in the work of the driver. Tactical defects are syntax errors, typos, calls to the wrong method, etc. Strategic defects are said to occur when the team is

D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 208–220, 2002. © Springer-Verlag Berlin Heidelberg 2002
headed down the wrong path – what they are implementing won’t accomplish what it needs to accomplish. Any of us can be guilty of straying off the path. A simple, “Can you explain what you’re doing?” from the navigator can serve to bring the driver back onto the right track. The navigator has a much more objective point of view and can better think strategically about the direction of the work. The driver and navigator can brainstorm on demand at any time. An effective pair-programming relationship is very active. The driver and the navigator communicate at least every 45 seconds to a minute. It is also very important for them to switch roles periodically. Note that pair programming includes all phases of the development process – design, debugging, testing, etc. – not just coding. Experience shows that programmers can pair at any time during development, in particular when they are working on something that is complex. The more complex the task, the greater the need for two brains [4, 6]. Pair programming is one of the twelve practices of Extreme Programming (XP) [1]. It is usually assumed that the pairs will be working in front of the same workstation. If Extreme Programming is to be used for distributed development of software, collocation becomes a limitation. Indeed, it would require a variant of Extreme Programming in which distributed pair programming or virtual teaming can be used. By distributed pair programming, we mean that two members of the team (which may consist of only two people) synchronously collaborate on the same design or code, but from different locations. This means that both must view a copy of the same screen, and at least one of them should have the capability to change the contents on the screen. To do this, they require technological support for desktop sharing and verbal conversation, or even videoconferencing with Web cams if required.
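As a toy sketch of this arrangement (the class and method names are invented for illustration; this is not the tooling used in the study), the essential rule – both partners view the same screen, but only the driver can change it – can be expressed in a few lines of Java:

```java
// Toy model of a distributed pairing session: one shared buffer, two roles.
// Only the driver's edits are applied; the navigator's access is read-only.
final class SharedSession {
    enum Role { DRIVER, NAVIGATOR }

    private final StringBuilder screen = new StringBuilder();

    // Returns true if the edit was accepted, i.e., the editor holds the driver role.
    boolean edit(Role who, String text) {
        if (who != Role.DRIVER) return false;  // navigator cannot change the screen
        screen.append(text);
        return true;
    }

    // Both partners see a copy of the same screen contents.
    String view() { return screen.toString(); }
}
```

Switching roles then simply means swapping which partner holds `Role.DRIVER`, mirroring the periodic role switches recommended above.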
This paper is based on an experiment in which students developing distributed team projects had this capability; we study how they fared against students who developed software in a collocated manner or in distributed teams without pair programming. The rest of the paper is organized as follows. Section 2 describes previous work on pair programming and virtual teams. Section 3 gives the hypotheses against which we test our results. Section 4 describes an initial experiment conducted in fall 2001 between NCSU and UNC to determine an effective technical platform for distributed pair programming. Section 5 outlines the comprehensive experiment that was conducted in a graduate class at NC State University. Section 6 presents the results, followed by an outline of future work in Section 7. The conclusions are presented in Section 8.
2 Previous Work
2.1 Pair Programming
Previous research [4, 5] has indicated that pair programming is better than individual programming in a collocated environment. Research has shown that pairs finish in about half the time of individuals and produce higher-quality code. The technique has also been shown to assist programmers in enhancing their technical skills, to improve team communication, and to be more enjoyable [4, 6, 7, 8]. Do these results also apply to distributed pairs? It has been established that distance matters [3]; face-to-face pair programmers will most likely outperform
distributed pair programmers in terms of sheer productivity. But the inevitability of distributed work has made it important to discover the most effective way to develop software in a distributed environment. The traditional way to develop distributed software is with virtual teams, where the collaboration between the team members is asynchronous, as opposed to using distributed pair programming.

2.2 Virtual Teaming
A virtual team can be defined as a group of people who work together towards a common goal, but across time, distance, culture, and organizational boundaries [2]. In our context, the goal is the development of software. The members of a virtual team may be located at different work sites, or they may travel frequently, and they need to rely upon communication technologies to share information, collaborate, and coordinate their work efforts. As the business environment becomes more global and businesses are increasingly in search of more creative ways to reduce operating costs, the concept of virtual teams is of paramount importance [9]. In the past, there was no support for collaborative programming in virtual teams. Advancements in technology and the invention of groupware have changed this situation. “Students can now work collaboratively and interact with each other and with their teacher on a regular basis. Students develop interpersonal and communication skills that were unavailable when working in isolation” [11]. In comparison to collocated teams, virtual teams have three disadvantages: “Communication within the team is hindered, team members are less aware of each other and common access to physical objects and places (like a printer or the cafeteria) is difficult” [10]. The first two of these can be addressed using the techniques of Computer Supported Cooperative Work (CSCW). As for the third, physical objects can be simulated if they have meaningful electronic representations, like virtual rooms or electronic blackboards. A primary consideration in virtual teaming is that of communication [12]. Poor communication can cause problems like inadequate project visibility, wherein everyone works individually, but no one knows if the pieces can be integrated into a complete solution. The communication problem is alleviated by the use of groupware applications like e-mail systems, messaging systems, and videoconferencing.
According to Dourish and Bellotti [13], “Awareness is an understanding of the activities of others, which provides a context for your own activity. This context is used to ensure that individual contributions are relevant to the group’s activity as a whole, and to evaluate actions with respect to group goals and progress”. Software like TUKAN [10] provides a synchronous distributed team programming environment to deal with these awareness issues.
3 Hypotheses
In the fall of 2001, we ran an experiment at North Carolina State University to assess whether geographically distributed programmers benefit from using technology to collaborate synchronously with each other. Specifically, we examined the following hypotheses:
Exploring the Efficacy of Distributed Pair Programming
211
• Distributed teams whose members pair synchronously with each other will produce higher-quality code than distributed teams that do not pair synchronously. In this academic setting, quality can be assessed by the grades the students obtained for their project. A statistical t-test can be performed to determine whether one of the groups obtains statistically significantly better results at different levels of significance (p < 0.01, 0.05, 0.1, etc.).
• Distributed teams whose members pair synchronously will be more productive (in terms of LOC/hr) than distributed teams that do not pair synchronously.
• Distributed teams who pair synchronously will have productivity and quality comparable to collocated teams.
• Distributed teams who pair synchronously will have better communication and teamwork within the team than distributed teams that do not pair synchronously.
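The two-sample t-test mentioned for the first hypothesis can be sketched as follows. The grade arrays are hypothetical, and the sketch computes only Welch's t statistic; the resulting value would still be compared against the t distribution at the chosen significance level:

```java
import java.util.Arrays;

public class GradeTTest {
    static double mean(double[] x) {
        return Arrays.stream(x).average().orElse(0.0);
    }

    // Unbiased sample variance (divides by n - 1).
    static double variance(double[] x) {
        double m = mean(x), ss = 0.0;
        for (double v : x) ss += (v - m) * (v - m);
        return ss / (x.length - 1);
    }

    // Welch's t statistic for two independent samples with unequal variances:
    // t = (m1 - m2) / sqrt(s1^2/n1 + s2^2/n2)
    static double welchT(double[] a, double[] b) {
        return (mean(a) - mean(b))
                / Math.sqrt(variance(a) / a.length + variance(b) / b.length);
    }

    public static void main(String[] args) {
        double[] pairedGrades    = {92, 88, 95, 90, 91};  // hypothetical grades
        double[] nonPairedGrades = {85, 89, 84, 90, 82};
        System.out.printf("t = %.3f%n", welchT(pairedGrades, nonPairedGrades));
    }
}
```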
4 Initial Platform Experiment
An initial experiment was conducted in early fall 2001 between NCSU and UNC to determine an effective technical platform to allow distributed pair programming. Two pairs of programmers worked as a 4-person team over the Internet to develop a modest Java gaming application; each pair was composed of one programmer from each remote site. The team developed a Mancala game, with GUI, in 8 sessions that varied from 1 to 2 hours in length. In addition to the actual pair-programming sessions, the project was initiated with a face-to-face meeting in which the team members agreed on requirements and an overall system metaphor. Thus the experiment mainly tested the effectiveness of the technology for pair coding and not the entire software development process. The members of a pair viewed a common PC display using desktop sharing software; we tried Microsoft NetMeeting, Symantec’s PCAnywhere, and VNC. They used headsets and microphones to speak to each other, and text chat for communications as well. We tried several instant-messaging programs (Yahoo Messenger, PalTalk, AOL Messenger) before implementing the project. The final experiment was run with NetMeeting, as this program provided PC sharing, text, audio, and video in one platform. A typical pairing session involved two programmers sharing desktops, with one of the pair (the navigator) having read-only access while the other (the driver) actually edited the code. The changes made by the driver were seen in real time by the navigator, who was constantly monitoring the driver’s work. The navigator could communicate with the driver by speaking over the microphone, or via instant messaging. The students were furnished Intel digital cameras to use as Web cams for videoconferencing, to allow them, for example, to show paper design documents to each other. However, as the sessions progressed, none of these teams found the need to use the Web cams. 
One of the students described his experience as follows: “While we programmed, I kept implementing the methods described in the class diagram and he kept guiding me wherever he felt I was straying in logic or when I made typing errors. His style of programming was a bit different
than mine. I have done more of ad hoc programming where I didn’t need to make the code readable. But for this project, I realized that we would have to put proper names for the variables so that by seeing the code itself, one could understand its function. So, that changed my way of coding and I got more organized.”
“I did not meet with any difficulties while programming distributedly, as he was constantly giving me input. Whenever we got stuck at a point, we discussed how to get over the problem, either by modifying the class diagrams or introducing new methods for functionality that we had missed out on.”
“All in all, once the technology falls into place, it’s not difficult to program in pairs in a distributed environment. In fact, things go faster than usual because two people are thinking on the same problem. Also, you try to program better because you know that someone is supervising your work. It has been a unique and pleasant experience.”
Our goal was not to test whether a remote pair could be as efficient as a collocated one, but simply to see if the programming pairs could work well enough to make functional software in reasonable time. The pairs reported that after a few early sessions in which they were learning the platform behavior, they felt comfortable and natural coding with this technology. The final game works correctly. From this experiment we found that effective remote teaming could be done with the PC sharing software and audio support. This platform was then used in the more comprehensive controlled experiment described next.
5 The Main Experiment
The experiment was conducted in a graduate class, Object-Oriented Languages and Systems,1 taught by Dr. Edward Gehringer at North Carolina State University. The course introduces students to object technology and covers OOA/OOD, Smalltalk, and Java. At the end of the semester, all students participate in a 5-week-long team project. The team projects consisted of programming projects like updating a GUI to use JSP, implementing a dynamic reviewer-mapping algorithm for peer review, simulating the LC-2 architecture, or building a GUI for DC motor control. Some of the projects were specified by the instructor, and some were specified by a sponsor on the NCSU ECE faculty. We chose this class for our experiment for the following reasons:
1. The projects were developed using an object-oriented language.
2. The experiment had to be performed on a class that had enough students to partition into four categories and still have enough teams in each category to draw conclusions.
3. We needed some distance-education participants in the class to make distributed development feasible and attractive.
The aforementioned class had 132 students, 34 of whom were distance-learning (Video-Based Engineering Education, “VBEE”) students. The VBEE students were

1 http://courses.ncsu.edu/csc517/common
located throughout the US, often too far apart for collocated programming or even face-to-face meetings. The team project counted for 20% of their final grade. The on-campus students were given 30 days to complete the project, while the VBEE students had 37. VBEE students’ deadlines are typically one week later than on-campus students’, because the VBEE students view videotapes of the lectures, which are mailed to them once a week (a change to video servers will occur soon). Teams composed of some on-campus and some VBEE students were allowed to observe the VBEE deadline, as an inducement to form distributed teams. Teams were composed of 2–4 students. The students self-selected their teammates, either in person or using a message board associated with the course, and chose one of the four work environments listed below.
Collocated team without pairs (9 groups). The first set of teams developed their project in the traditional way: group members divided the tasks among themselves and each one completed his or her part. An integration phase followed, to bring all the pieces together.
Collocated team with pairs (16 groups). The next set of groups worked in pairs. Pair programming was used in the analysis, design, coding, and testing phases. A team consisted of one or two pairs. If there were two pairs, an integration phase followed.
The next two environments consisted of teams that were geographically separated – “virtual teams.” These groups were either composed entirely of VBEE students, or a combination of VBEE and on-campus students.
Distributed team without pairs (8 groups). The third set of teams worked individually on different modules of the project at different locations. The contributions were combined in an integration phase.
Distributed team with pairs (5 groups). This fourth set of teams developed the project by working in pairs over the Internet. At the end, they integrated the various modules.
The platform experiment conducted in early fall 2001 between NCSU and UNC helped determine an effective technical platform for remote teaming. The pairs in our full controlled experiment used headsets and microphones to speak to each other. They viewed a common display using desktop sharing software, such as NetMeeting, PCAnywhere, or VNC. They also used instant-messaging software like Yahoo Messenger while implementing the project. A typical session involved two programmers sharing desktops, with one of the pair (the navigator) having read-only access while the other (the driver) actually edited the code. The changes made by the driver were seen in real time by the navigator, who was constantly monitoring the driver’s work. The navigator could communicate with the driver by speaking over the microphone, or via instant messaging. As in the initial platform experiment, the students were furnished Intel digital cameras to use as Web cams for videoconferencing, to allow them, for example, to show paper design documents to each other. However, as earlier, none of these teams found the need to use the Web cams. In order to record their progress, the students utilized an online tool called Bryce [14], a Web-based software-process analysis system used to record metrics for software development. Bryce was developed at N.C. State. Using the tool, the
Prashant Baheti, Edward Gehringer, and David Stotts
students recorded data including their development time, lines of code and defects. Development time and defects were recorded for each phase of the software development cycle, namely, planning, design, design review, code, code review, compile and test. Using these inputs, Bryce calculated values for the metrics used to compare the four categories of group projects. Over the course of the project, the metrics recorded by the students were monitored by the research team so as to make sure that they were recorded on time and were credible. It was found that defects had not been recorded properly by many of the groups, and hence, defects recorded were not considered in this analysis. Two groups (one in category 2 and one in category 3) that had not recorded metrics properly were excluded from the analysis. The two metrics used for the analysis were productivity, in terms of lines of code per hour (measured using the Bryce tool); and quality, in terms of the grades obtained by the students for the project. Lines of code was a good metric for the project as all projects had more or less the same complexity and Java was used for implementation in all projects. Two to four teams implemented each project independently. Additionally, after the students had completed their projects, they filled out a survey regarding their experiences while working in a particular category, the difficulties they faced, and the things they liked about their work arrangement.
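To make the productivity metric concrete, the per-phase times recorded in a Bryce-style log reduce to a single LOC/hr figure per team. The sketch below is only an illustration: the phase names follow the text, but the hours and line count are invented, not data from the study.

```python
# Hypothetical sketch of the productivity metric described above:
# total lines of code divided by total development time, summed over
# the seven phases recorded in Bryce. All numbers are invented.
phase_hours = {
    "planning": 2.0,
    "design": 3.5,
    "design review": 1.0,
    "code": 8.0,
    "code review": 1.5,
    "compile": 0.5,
    "test": 4.0,
}
lines_of_code = 420  # hypothetical project total

total_hours = sum(phase_hours.values())     # 20.5 hours
productivity = lines_of_code / total_hours  # LOC per hour
print(round(productivity, 2))
```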
6 Results
Data were analyzed in terms of productivity and quality, as defined above. Also, student feedback formed an important third input for the experiment. Our goal was not to show that distributed pair programming is superior to collocated pair programming for student teams. Our goal was to demonstrate that distributed pairing is a viable and desirable alternative for use with student teams, particularly for distance education students. We plan to repeat this experiment in fall 2002.

6.1 Productivity
Productivity was measured in terms of lines of code per hour. Average lines of code per hour for the four environments are shown in Fig. 1. The results show that the distributed teams had slightly greater productivity than the collocated teams; however, the f-test for the four categories shows that the results are not statistically significant at the p < 0.1 level, due to the high variance in the data for the distributed groups. This is better depicted by the box plot (Fig. 2) for the four categories, which illustrates the distribution of the metric for the four environments. A box plot shows the distribution of data around the median. The vertical rectangle for each category shows the distribution of the middle 50% of the readings. The horizontal line inside each rectangle shows the median value for that particular category. The line segment from the top of the rectangle shows the range in which the top 25% of the values lie. Similarly, the line segment below the rectangle shows the range in which the bottom 25% of the values lie. Thus, the end points of the two line segments indicate the total range that the values for a particular category fall into. For example, the median for the non-pair collocated category is around 10 LOC/hr, with the middle 50% of the values lying between approximately 9 and 13 LOC/hr, while the entire range is between 5 and 35 LOC/hr, approximately.
Exploring the Efficacy of Distributed Pair Programming
Fig. 1. Average productivity in lines of code per hour for the four environments (bar values: 14.756, 15.119, 18.518, and 21.074 LOC/hr).

Fig. 2. Box plot of lines of code per hour for the four environments.
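The f-test used to compare the four categories is a one-way analysis of variance. As a minimal sketch of how such an F statistic is computed (the LOC/hr samples below are invented placeholders, not the study’s data; note the deliberately wider spread in the distributed groups, mirroring the high variance reported above):

```python
# Sketch of a one-way ANOVA (f-test) across the four environments.
# Pure Python; the sample values are invented, not the study's data.

def f_oneway(*groups):
    """Return the one-way ANOVA F statistic for k independent groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group variability (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    # Within-group variability (n - k degrees of freedom)
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented LOC/hr samples for the four categories
non_pair_collocated  = [9.5, 10.2, 11.1, 12.8, 10.0, 9.1, 13.0, 10.5, 14.2]
pair_collocated      = [13.5, 14.8, 16.0, 15.2, 14.1, 13.0, 16.5, 15.8]
non_pair_distributed = [7.0, 35.0, 12.5, 28.0, 15.5, 22.0, 9.5, 36.0]
pair_distributed     = [6.5, 30.5, 14.0, 25.5, 18.0]

F = f_oneway(non_pair_collocated, pair_collocated,
             non_pair_distributed, pair_distributed)
```

A large within-group variance inflates the denominator, which is exactly how high variance in the distributed groups can keep an observed difference in means from reaching significance.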
If the comparison is restricted to the two distributed categories, a statistical t-test on the two categories shows that this difference is not statistically significant. In terms of productivity, the groups involved in virtual teaming (without pairs) were not statistically significantly better than those involved in distributed pair programming.
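The pairwise comparisons here can be sketched with the standard pooled-variance (Student’s) two-sample t statistic; the data below are invented stand-ins for the two distributed categories, not the study’s measurements.

```python
import math

# Sketch of the pooled-variance two-sample t statistic used for the
# pairwise comparisons in the text. Input data are invented.

def two_sample_t(a, b):
    """Student's t statistic for two independent samples (pooled variance)."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    # Pooled sample variance with na + nb - 2 degrees of freedom
    pooled_var = (sum((x - mean_a) ** 2 for x in a) +
                  sum((x - mean_b) ** 2 for x in b)) / (na + nb - 2)
    return (mean_a - mean_b) / math.sqrt(pooled_var * (1 / na + 1 / nb))

# Invented LOC/hr samples for the two distributed categories
non_pair_distributed = [7.0, 35.0, 12.5, 28.0, 15.5, 22.0, 9.5, 36.0]
pair_distributed = [6.5, 30.5, 14.0, 25.5, 18.0]
t = two_sample_t(non_pair_distributed, pair_distributed)
# |t| below the critical value for the degrees of freedom
# means the difference is not statistically significant.
```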
In other words, teams involved in distributed pair programming were not shown to be worse in terms of productivity than those forming virtual teams without distributed pair programming. To determine whether the collocated pairs fared any better than the distributed pairs, a t-test was also conducted for these two categories, and again, neither category was found to be statistically significantly better than the other. Hence, it can be concluded that the collocated pairs in this experiment were not statistically more productive than the distributed pairs.

6.2 Quality
The quality of the software developed by the groups was measured in terms of the average grade obtained by the group out of a maximum of 110. We consider grade to be a good metric of quality because the grades were given after half-hour-long demos to the teaching assistant assigned to a particular category of projects, sometimes in the presence of the project sponsor. The graph (Fig. 3) below indicates that the performance of students did not vary much from one category to another.

Fig. 3. Average project grade (out of a maximum of 110) for the four environments: non-pair collocated, 92.398; pair collocated, 93.641; non-pair distributed, 97.384; pair distributed, 101.35.
A box plot (Fig. 4) for the grades corroborates the claim made above. Although nothing statistically significant can be said about the grades for the four categories, it is interesting to see that the teams performing distributed pair programming were very successful in comparison to the other groups. The results of the statistical tests indicate that in terms of grade, teams involved in virtual teaming but without pair programming were not significantly better than the distributed teams using pair programming. As in the previous section, a statistical t-test was performed between distributed pairs and distributed non-pairs, and between collocated pairs and distributed pairs. In neither case was the difference between the two groups statistically significant. This indicates that distributed pairs are comparable to collocated pairs and distributed non-pairs in terms of quality.
Fig. 4. Box plot of grades for the four environments.
6.3 Student Feedback
Productivity and product quality are important. However, as educators we strive to provide positive learning experiences for our students. We ran a survey to assess students’ satisfaction with their working arrangement. One of the questions was about cooperation within the team. Table 1 shows the responses of the students in the different environments. Communication among team members is an important issue in team projects. Table 2 shows the responses of students regarding communication among team members.

Table 1. Cooperation within the team. Responses to the question “How was the cooperation within your team?”

                       Very Good   Good   Fair   Poor
Non-pair collocated       46%      40%    11%     3%
Pair collocated           62%      28%    10%     0%
Non-pair distributed      45%      37%    18%     0%
Pair distributed          83%      17%     0%     0%
Table 2. Communication among team members. Responses to the question, “How was the communication with your team?”

                       Very Good   Good   Fair   Poor
Non-pair collocated       57%      26%    11%     6%
Pair collocated           58%      28%    12%     2%
Non-pair distributed      41%      41%    14%     4%
Pair distributed          67%      33%     0%     0%
The survey also indicates that five out of six students felt that coding and testing are the most suitable phases for distributed pair programming. When asked to identify the greatest obstacle to distributed pair programming, students commented as follows:

“Initially exchanging code/docs via e-mail was a problem. Later on we used Yahoo briefcases to upload code to others to read it from there. From then on things went very smooth.”

“Finding common time available for all.”

The students were asked to identify the biggest benefits of distributed pair programming, and responded:

“If each person understands their role and fulfills their commitment, completing the project becomes a piece of cake. It is like Extreme Programming with no hassles. If we do not know one area we can quickly consult others in the team. It was great.”

“There is more than one brain to work on the problem.”

“It makes the distance between two people very short.”

Five out of six students involved in distributed pair programming thought that technology was not much of a hindrance in collaborative programming. Also, about the same fraction (82%) of students involved in virtual teaming with or without pair programming felt that there was proper cooperation among team members.
7 Future Work
The experiment we conducted was a classroom experiment among 132 students, including 34 distance-learning students. To be able to draw statistically significant conclusions, such experiments have to be repeated, on a larger scale if possible. However, this experiment has given initial indications of the viability of distributed pair programming. We intend to conduct more experiments like this so that we can draw conclusions about distributed pair programming, and whether virtual teams should be a standard practice in the classroom as well as in industry.
8 Conclusions
The results of our experiment indicate the following:
• Distributed pair programming in virtual teams is a feasible way of developing object-oriented software.
• Software developed using distributed pair programming is comparable to software developed using collocated pair programming or by virtual teams without distributed pair programming. The two metrics used for this comparison were productivity (in terms of lines of code per hour) and quality (in terms of the grades obtained).
• Collocated teams did not achieve statistically significantly better results than the distributed teams.
• Feedback from the students indicates that distributed pair programming fosters teamwork and communication within a virtual team.

Thus, the experiment conducted at NC State University is a first indication that distributed pair programming is a feasible and efficient method for dealing with team projects.
Acknowledgments

We would like to thank NCSU undergraduate student Matt Senter for his help in administering this experiment. The support of Intel in providing equipment is gratefully acknowledged. We would also like to thank NCSU graduate student Vinay Ramachandran for developing the tool called Bryce to record project metrics.
References

1. K. Beck, Extreme Programming Explained: Embrace Change. Reading, MA: Addison-Wesley, 2000.
2. B. George and Y. M. Mansour, “A Multidisciplinary Virtual Team”, accepted at Systemics, Cybernetics and Informatics (SCI), 2002.
3. G. M. Olson and J. S. Olson, “Distance Matters”, Human-Computer Interaction, vol. 15, 2000, pp. 139–179.
4. L. A. Williams, “The Collaborative Software Process”, PhD dissertation, Department of Computer Science, University of Utah, Salt Lake City, 2000.
5. J. T. Nosek, “The case for collaborative programming”, Communications of the ACM 41:3, March 1998, pp. 105–108.
6. L. A. Williams and R. Kessler, Pair Programming Illuminated, Boston, MA: Addison-Wesley, 2002.
7. L. Williams, R. Kessler, W. Cunningham, and R. Jeffries, “Strengthening the case for pair programming”, IEEE Software 17:4, July/Aug. 2000, pp. 19–25.
8. A. Cockburn and L. Williams, “The costs and benefits of pair programming”, in Extreme Programming Examined, G. Succi and M. Marchesi, eds., pp. 223–248, Boston, MA: Addison-Wesley, 2001.
9. S. P. Foley, “The Boundless Team: Virtual Teaming”, http://esecuritylib.virtualave.net/virtualteams.pdf, report for MST 660, Seminar in Industrial and Engineering Systems, Master of Science in Technology (MST) Graduate Program, Northern Kentucky University, July 24, 2000.
10. T. Schummer and J. Schummer, “Support for Distributed Teams in Extreme Programming”, in Extreme Programming Examined, G. Succi and M. Marchesi, eds., pp. 355–377, Boston, MA: Addison-Wesley, 2001.
11. M. Z. Last, “Virtual Teams in Computing Education”, SIGCSE 1999: The Thirtieth SIGCSE Technical Symposium on Computer Science Education, New Orleans, LA, 1999, doctoral consortium. See page v of the proceedings.
12. D. Gould, “Leading Virtual Teams”, Leader Values, http://www.leader-values.com/Guests/Gould.htm, July 9, 2000.
13. P. Dourish and V. Bellotti, “Awareness and Coordination in Shared Workspaces”, CSCW ’92, Conference Proceedings on Computer-Supported Cooperative Work, 1992.
14. Bryce, http://bryce.csc.ncsu.edu/tool/default.jsp
Biographies

Prashant Baheti has been a student at North Carolina State University since fall 2001 and is pursuing his Master’s with thesis option in Computer Science.

Dr. Edward F. Gehringer has been on the faculty of North Carolina State University since 1984, and has taught object-oriented technology to more than 1000 students since 1987. He has published approximately two dozen refereed papers on various aspects of object technology, and has been a frequent contributor to OOPSLA Educators’ Symposia.

David Stotts is an associate professor in the Department of Computer Science at the University of North Carolina at Chapel Hill and currently is serving as associate chair for Academic Affairs. Dr. Stotts’s research interests include formal methods in software engineering, concurrent computation models, hypermedia, and collaborative distributed systems. From 1990 through 1995, Dr. Stotts served as an ACM distinguished lecturer. He served as general chair for the 1996 ACM Hypertext Conference, and is on the editorial boards of the Journal of Digital Information and World Wide Web.
Pair Programming: Addressing Key Process Areas of the People-CMM

Gopal Srinivasa1 and Prasanth Ganesan2

1 Department of Computer Science, North Carolina State University, Raleigh, NC 27695, USA
gsriniv@unity.ncsu.edu
2 Department of Electrical and Computer Engineering, North Carolina State University, Raleigh, NC 27695, USA
pganesa@unity.ncsu.edu
Abstract. It has long been recognized that the quality of the people employed by a software organization is a major determinant of the quality of its products. Acknowledging the pivotal role played by people in software development, the Software Engineering Institute (SEI) devised the People Capability Maturity Model. Like its software counterpart, the People CMM (P-CMM) defines five levels of maturity. An organization can achieve a level by institutionalizing the “best practices” for that level. The best practices are grouped together as Key Process Areas (KPAs). Pair programming is a practice in which two programmers work together at one computer on a single module of code, designing, coding, and testing it together. Evidence indicates that pair programming improves teamwork, communication, and knowledge levels – all KPAs of the P-CMM. This paper establishes a link between pair programming and the KPAs defined in the P-CMM. Specifically, the paper outlines the advantages and effects of adopting pair programming when an organization wants to achieve a higher P-CMM level.
1 Introduction
D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 221–230, 2002. © Springer-Verlag Berlin Heidelberg 2002

“Personnel attributes and human resource activities provide by far the largest source of opportunity for improving software development productivity.” [1]

The importance of personnel as a resource in software development has been recognized for quite some time [2]. With the rapid growth in the knowledge economy, companies are competing in two markets, one for their products and services and one for the talent required to develop and deliver them. Recruiting and retaining top talent are now as important as production and distribution in the corporate business strategies of knowledge-intensive companies. Practices required to attract, develop, and retain outstanding talent have been understood for decades [2]. However, these practices have generally been applied in a piecemeal and half-hearted manner [1]. In view of this, the SEI came up with the People-CMM (P-CMM) model to provide a roadmap for companies to define and adopt best practices to guide workforce development. Developed on the lines of the Software-CMM, the primary objective of the P-CMM is to improve the capability of the workforce [1]. Workforce capability can be defined as the level of knowledge, skills, and process abilities available for performing an organization’s business activities. Workforce capability indicates an organization’s:
• Readiness for performing its critical business activities,
• Likely results from performing these business activities, and
• Potential for benefiting from investments in process improvement or advanced technology.

In short, the P-CMM provides a roadmap for a company to manage its workforce better. Companies that intend to climb the P-CMM maturity ladder have to invest time and money in the process. Usually the climb takes anything from a few months to a few years, depending on the initial level of the company and the level it wants to achieve. Practices that satisfy the requirements of the various KPAs of the P-CMM with less cost, less developer distraction, and less time off become important. One practice that helps improve an organization’s maturity is pair programming. As mentioned earlier, pair programming is two programmers working together on one computer, developing a single piece of code. They design, code and test the software together. Though many people have practiced pair programming informally over the years, it is only recently that a sufficiently large body of work on pair programming has evolved. Most of the research supports anecdotal evidence that pair programming improves code quality, makes learning fun, and cuts down development time [3]. Researchers have also tried to apply pair programming to education and have found that students learnt new programming concepts faster when they worked in pairs [4]. This paper formalizes the link between pair programming and the KPAs of the P-CMM. We look at the benefits of adopting pair programming to manage some of the KPAs of the P-CMM. We also examine the impact that the transition to pair programming will have on KPAs other than those that will be directly benefited. Section 2 presents an overview of the P-CMM, along with an introduction to pair programming. It also presents a brief overview of research being done in both areas and indicates the approach taken for this paper.
Section 3 identifies certain KPAs and discusses how they would benefit from pair programming. Section 4 discusses policies that would need to be adopted in some KPAs other than those in Section 3, so that pair programming as a practice can be introduced in the organization. Section 5 concludes with a general view of the whole topic.
2 Background
This section presents a brief overview of the P-CMM and pair programming.

2.1 People CMM
The People CMM was developed by the SEI to evaluate organizations according to their maturity in people management. The People CMM defines five maturity levels.

At the Initial level, workforce practices tend to be inconsistent and ritualistic. Organizations at this level have an incoherent policy on many human resource issues.
At the Managed level, managers begin performing basic people management practices such as staffing, managing performance, and making adjustments to compensation as a repeatable practice. Organizations at this level develop the capability to manage skills and performance at the unit level.

At the Defined level, the organization identifies and develops the knowledge, skills, and process abilities that are required to perform its business activities. The organization develops a culture of professionalism based on well-understood workforce competencies. In achieving Maturity Level 3, the organization develops the capability to manage its workforce as a strategic asset.
Fig. 1. Levels of the People-CMM [1]
The Predictable level focuses on exploiting the knowledge and experience of the workforce framework developed at Level 3. The competency-based processes at the workgroup level are interwoven to create integrated, multidisciplinary processes at the organizational level. Workgroups are empowered to manage their own work processes and conduct some of their internal workforce activities. Individuals and workgroups quantitatively manage the competency-based processes that are important for achieving their performance objectives. The organization manages the capability of its workforce and of the competency-based processes they perform. The effect of workforce practices on these capabilities is monitored and corrective actions taken if necessary. Mentors assist individuals and workgroups in developing their capability.

At the highest maturity level, called the Optimizing level, Process Areas focus on continually improving the organization’s capability and workforce practices. Individuals continually improve the personal work processes they use in performing competency-based processes. Workgroups continuously improve their operating processes through improved integration of the personal work processes of their members. The organization evaluates and improves the alignment of performance among its individuals, workgroups, and units both with each other and with the
organization’s business objectives. The organization continually evaluates opportunities for improving its workforce practices through incremental adjustments or by adopting innovative workforce practices and technologies.

2.2 Pair Programming
Pair programming is professed to have many benefits, including higher productivity, lower test effort, an enjoyable work environment, and higher quality [3]. Several studies have been carried out to validate the benefits of pair programming. In 1998, Nosek studied 15 experienced programmers working up to 45 minutes on a problem related to their work. Five programmers worked individually, and ten in pairs. He found that though the pairs spent 60% more effort on the task, they completed the task in 40% less time than the individual programmers [5]. A survey conducted by Williams of professional programmers showed that nearly 94% of those surveyed enjoyed programming in pairs [6]. In 1999, Williams conducted an experiment [7], studying the performance of 41 senior software engineering students on four programming assignments. The experiment found that the pairs took approximately 15% more time to complete the assignments, but passed 15% more test cases than the individuals. Statistical evidence from previous work mostly corroborates anecdotal evidence about the benefits of pair programming.

2.3 Approach
The main purpose of this paper is to identify the goals of the P-CMM KPAs that may be satisfied by adopting pair programming. The following sections describe how some goals of chosen KPAs are affected by pair programming. We present a brief discussion of the benefits provided by pair programming in achieving the goals, using statistical and/or anecdotal evidence. The anecdotes used in the paper were obtained from a web survey conducted to gather information about pair programming. The paper recognizes that adopting pair programming in an organization might necessitate some changes to its policies for achieving other KPAs. We therefore suggest methods for introducing the practices of such KPAs in Section 4.
3 Benefitted KPAs
Previous research indicates that pair programming improves code quality, makes work enjoyable, and enhances the knowledge level of the people involved [3,4]. Some KPAs of the P-CMM are directly impacted by these types of improvements. For example, evidence indicates that pair programming contributes to improving the work atmosphere [3,4], thus benefiting “Work Environment”, a KPA in level two of the P-CMM. Pair programming is also said to improve communication and coordination between team members, another KPA in level two. In the following sections, we examine, in some detail, the beneficial impact of pair programming on some of the P-CMM KPAs.
3.1 Work Environment
“The purpose of this KPA is to establish and maintain physical working conditions and to provide resources that allow individuals and workgroups to perform their tasks efficiently without unnecessary distractions” [1]. The goals of this KPA are:
• The physical environment and resources needed by the workforce to perform their assignments are made available.
• Distractions in the work environment are minimized.
• Work environment practices are institutionalized to ensure that they are performed as managed processes.

An anonymous survey of professional programmers [3] showed that 95% of those surveyed enjoyed work more while pair programming. In the same year, the students of the summer and fall Software Engineering classes at the University of Utah were surveyed three times. Consistently, over 90% agreed that they enjoyed their job more when pair programming. Interruptions and distractions are reduced because (a) people have an opportunity to talk to a peer without having to distract him from his work and (b) the presence of the partner creates “peer pressure” on either programmer to stick to the task at hand – partially satisfying the second goal. In the words of Jeff Canna of Rolemodel Software, “…It [pair programming] keeps me focused and on task. While working on a particular task I tend to jump around a lot. Quite often doing things that have nothing to do with the project I am working on. Being paired keeps me focused not only the task but the project as well.”

3.2 Communication and Co-ordination
“Pair programming gets people on the same page faster than any other technique I have used. Especially in the early phases of a development project by pairing and rotating programmers everyone develops a shared mental model (metaphor?) of the project that accelerates development. …pairing is the best way I have found to synchronize a team, either at the beginning or periodically over the development cycle”. - Jim Murphy The Communication and Coordination KPA indicates the need for attempts to establish timely communication across the organization and to ensure that the workforce has the skills to share information and coordinate their activities efficiently [1]. The goals of this KPA are:
• Information is shared across the organization.
• Individuals or groups are able to raise concerns and have them addressed by management.
• Individuals and workgroups coordinate their activities to accomplish committed work.
• Communication and co-ordination practices are institutionalized to ensure they are performed as managed processes.
Pair programming helps individuals and workgroups coordinate their work, through pair rotation. Since pairs are rotated amongst all team members and not only amongst members of a work unit, the team members are aware of the activities and technical details within all work units. All the team members are also aware of the problems in other work units, and many of these problems may be solved simply because a developer from another work unit (with relevant knowledge) pairs with a developer from the “problem” unit. This minimizes the need for regular (technical) team meetings and saves developer time. Policies and procedures for pair rotation can also be drawn up, partially satisfying the goal that communication channels be institutionalized.

3.3 Training and Development
“… By virtue of dynamic pairs (pair rotation), knowledge is shared throughout the group. … new hires are able to come up to speed more rapidly. Since pairing is a part of daily life, no one has to take downtime to help out the new person. ... Much of the mundane technical training can be assimilated as part of the job.” - Jeff Langr (Object Mentor Inc.)

“When an important new bit of information is learned by someone on the team, it is like putting a drop of dye in the water. Because of the pairs switching around all the time, the information rapidly diffuses throughout the team just as the dye spreads throughout the pool. Unlike the dye, however, the information becomes richer and more intense as it spreads and is enriched by the experience and insight of everyone on the team.” - Kent Beck [9]

“The purpose of this KPA is to bridge the gap between the current skills of individuals and the skills they require to perform their assignments” [1]. The goals of this KPA are:
• Individuals receive timely training that is needed to perform their assignments in accordance with the unit’s training plan.
• Individuals capable of performing their assignments pursue development opportunities that support their development objectives.
• Training and development practices are institutionalized to ensure that they are performed as managed processes.

Usually there are limited opportunities for company-sponsored training programs, and not every developer can be accommodated in them. Training also means loss of valuable developer time. Further, many concepts specific to a project can only be passed on from senior programmers to junior ones. Many of these problems can be addressed through pair programming. Project-related training can be provided by pairing experts with novices, thus allowing a “mentor-mentee” relationship. Evidence suggests that this is an effective way of training people, similar to the “workman-apprentice” model in sculpture [9]. Pair programming is also known to be effective in transferring knowledge related to good coding practices, design methodologies, and even small tool tricks, things that usually cannot explicitly be taught through formal training methods. A survey of senior Software Engineering students conducted by Dr. Williams showed that nearly 84% of the class learnt a technology faster because of pair programming [8]. Thus pair programming can be a useful training and development methodology, which partially satisfies the first goal of this KPA.

3.4 Competency Development
“The purpose of competency development is to continuously enhance the capability of the workforce to perform their assigned tasks and responsibilities” [1]. The goals of this KPA are:
• The organization provides opportunities for individuals to develop their capabilities in its workforce competencies.
• Individuals develop their knowledge, skills, and process abilities in the organization’s workforce competencies.
• The organization uses the capabilities of its workforce as resources for developing the workforce competencies of others.
• Competency development practices are institutionalized to ensure they are performed as defined organizational processes.

At the developer level, many competencies are technical in nature. These may be competencies related to tool usage, project-related knowledge, and methodologies. Pair programming is known to have a positive impact on all of these. Many “experts” have pointed out, time and again, that pair programming even with “novices” can help them gain knowledge [3]. Pairing experts (particularly when they are compatible) allows both of them to gain from the experience of the other [9]. As a professional programmer observed, “Instead of fighting ‘religious wars’, we found ourselves sharing the experiences that gave us our ‘religion’, with the result that we resolved differences in style and each learned techniques based on the other’s experience.” Thus, pair programming can be a suitable technique to satisfy some of the conditions required by the goals of the Competency Development KPA.

3.5 Mentoring
“The purpose of mentoring is to transfer the lessons of greater experience in a workforce competency to improve the capability of other individuals or workgroups” [1]. The goals of this KPA are:
• Mentoring programs are established and maintained to accomplish defined objectives.
• Mentors provide guidance and support to individuals or workgroups.
• Mentoring practices are institutionalized to ensure they are performed as defined organizational processes.
Conventionally, mentoring has been a separate task for the senior programmer in the workgroup, forcing him to devote time to training the novices. This means some downtime for the senior programmer. Also, these sessions are usually one-way, as the senior programmer usually has no feedback about the progress of the mentee. Further, there is really no way of evaluating what the mentee has learnt in the sessions until he is put on production code.
Gopal Srinivasa and Prasanth Ganesan
Pair programming can solve these problems to a substantial extent. The senior programmer does not have to take time off to train the mentee. Training sessions are arranged on the fly and are conducted as the mentor works, satisfying the second goal of the KPA without sacrificing the productivity of the people involved. Research done to verify the effect of pair programming on training time showed that with pair programming, the assimilation time came down from 28 days to 13 days, the mentoring time was reduced from 32% to 25%, and the training effort was cut in half [9].
4
Adoption Model for Pair Programming
Pair programming enhances the capabilities of an organization in managing certain KPAs. However, effective implementation of the practice requires that a few policies be implemented in some of the other KPAs. This section discusses the impacted KPAs and the policies that would need to be incorporated to achieve the professed benefits. 4.1
Performance Management and Workforce Planning
“The purpose of Performance Management is to establish objectives related to committed work against which unit and individual performance can be measured, to discuss performance against those objectives, and to continuously enhance performance.” [1] “The purpose of Workforce Planning is to coordinate workforce activities with current and future business needs at the organizational and unit levels.” [1] Selection of pairs and pair rotation would require careful consideration by managers. For example, the group could start with a mentor-mentee (expert-novice) pair working on modules. As certain components grow more complex, experts could be paired together. When the team needs to grow, then depending on the experience level of new team members, the pairs could vary from a mentor-mentee pair to an average-average pair. Assessment and rating of individuals would also have to be carried out in a pair-programming environment. “Peer evaluation,” whereby team members provide feedback to their manager on all their peers, would then be a major factor in the performance evaluation of an individual. Thus the rating of individuals would also reflect their effectiveness in working in pairs. With pair rotation in place, the module leaders could be part of the team of pair programmers and could assess individuals as they move around the modules. Also, in an expert-novice pairing, the expert could rate the novice, and the manager could rate the expert based on the component delivery. This also enables a quick feedback mechanism, and assessment would be part of an ongoing process rather than a one-time activity that happens once every few months. Pair programming also influences effort estimates for any enhancements of the project. The manager would be able to use the assessment of the work by the team rather than by an individual. This helps estimate future needs for manpower.
Since every team member is aware of the linkage between the various components of the
product (by virtue of pair rotation with “owners” of the components), better effort estimates can be evolved. Thus management can make a better-informed decision. Further mechanisms would have to be adopted by managers to handle more dynamic pairings; this discussion is outside the scope of this paper. In short, if the company wants to achieve this KPA, it must have processes in place, maturing over time, for handling pair rotation, evaluating individuals, and utilizing workgroup synchronization and maturity. 4.2
Competency-Based Assets and Continuous Capability Improvement
The purpose of Competency-Based Assets is to capture the knowledge, experience, and artifacts developed in performing competency-based processes for use in enhancing capability and performance. The purpose of Continuous Capability Improvement is to provide a foundation for individuals and workgroups to continuously improve their capability for performing competency-based processes. [1] In non-pair-programmed software development, as programmers work through lines of code and debug issues, the knowledge level of the individual increases, but the knowledge is restricted to that individual. With pairing and pair rotation, the know-how traverses the knowledge “pairvine” and becomes part of the shared repository of knowledge in the group. But for the knowledge to grow for the organization, these knowledge tidbits have to be documented. Here the navigator could play a pivotal role by acting as a scribe and documenting the discoveries that the pair makes. This knowledge could be used for the long-term benefit of the team and the organization. As experienced pair programmers acknowledge, even experts tend to learn a lot through the process of pair programming. Pair programming thus can be viewed as a tool in itself that supports the growth of individuals and workgroups. With mechanisms to increase the effectiveness of pair programming, the company would be on the path to achieving the Continuous Capability Improvement KPA.
5
Conclusion
In conclusion, we wish to state that pair programming is a convenient, enjoyable, and low-cost technique for managing some of the KPAs in the People CMM. Pair programming leads to an improved work environment, offers alternative learning opportunities, and makes mentoring a part of everyday activities. Research has shown that these benefits are gained with no more than 15% extra development time while achieving a 15% reduction in defects [6]. Minor adjustments do have to be made to manage other KPAs when pair programming is adopted. However, pair programming alone is not sufficient, and parallel mechanisms will have to be introduced to achieve the desired KPAs. During the course of our investigation, we also observed that pair programming addresses many of the KPAs of the Software CMM. The KPAs that caught our attention were Risk Management, Defect Prevention, Quality Management, and Code and Design Reviews [10].
Acknowledgements The authors are grateful to Dr. Laurie Williams of North Carolina State University and Dr. Aldo Dagnino of ABB for their guidance and support.
About the Authors Gopal and Prasanth are graduate students in the Computer Science and Electrical and Computer Engineering departments of North Carolina State University.
References
1. Curtis, B., Hefley, W.E., Miller, S.A.: People Capability Maturity Model® (P-CMM®), Version 2.0. CMU/SEI-2001-MM-01
2. Humphrey, W.S.: Managing Technical People: Innovation, Teamwork, and the Software Process
3. Cockburn, A., Williams, L.: The Costs and Benefits of Pair Programming
4. Williams, L., Kessler, R.R.: The Effects of “Pair-Pressure” and “Pair-Learning” on Software Engineering Education. Conference on Software Engineering Education and Training (2000)
5. Nosek, J.T.: The Case for Collaborative Programming. Communications of the ACM 41(3) (1998) 105–108
6. Williams, L.: The Collaborative Software Process. PhD Dissertation
7. Williams, L., Kessler, R., Cunningham, W., Jeffries, R.: Strengthening the Case for Pair Programming. IEEE Software 17(4) (July/August 2000)
8. Williams, L., Kessler, R.R.: Experimenting with Industry’s “Pair Programming” Model in the Computer Science Classroom. Journal on Software Engineering Education (December 2000)
9. Williams, L., Kessler, R.: Pair Programming Illuminated. Addison-Wesley, Boston, MA (2002)
10. Paulk, M.C.: Extreme Programming from a CMM Perspective. IEEE Software 18(6) (November/December 2001) 19–26
When Pairs Disagree, 1-2-3 Roy W. Miller Software Developer RoleModel Software, Inc. 342 Raleigh Street Holly Springs, NC 27540 USA +1 919 557 6352 rmiller@rolemodelsoftware.com
Abstract. Pair programming requires two programmers to work together to solve the same problem. These programmers probably have opinions about most design decisions (method names, and anything else that comes up during a pairing session). Some of those opinions are stronger than others. It is quite likely that those opinions will differ, sometimes often. When they do, there needs to be a simple way to resolve the conflict. Arguing doesn’t work, and helps no one. When a disagreement about how to proceed comes up, each member of the pair should score his opinion on a scale of 1-3. The highest score dictates what the pair does next. In the event of a tie, the pair should discuss the various options, agree to disagree, pick one, and move on. This simple approach works.
1
The Problem
Suppose you are using pair programming on your development project. You and your pair are coding along, writing tests first, etc. At some point, you reach a design decision, perhaps a major one. Before barreling ahead, you discuss the decision you have to make, the options each person can think of off the top of his head, and some pros and cons of each. After a few minutes of discussion, you reach the point where you need to decide how to proceed. Each person gives his opinion. Suppose those opinions are different, even diametrically opposed. What do you do? You have three options:
1. Give up, dissolve the pair, and refuse to move forward on the project.
2. Argue until one half of the pair gives up.
3. Discuss the options, agree to disagree for the time being, pick one of the options for how to proceed, and move on.
The first option kills a project, one pair at a time. The second option wastes valuable time that could be spent making progress and learning. The third option is best, but how do you pick an option for how to proceed if you and your pair disagree about what to do next? This situation occurs every day on my projects. I doubt yours are different. Fortunately for my own pairing sanity, I stumbled upon an answer to the question that works in every case I’ve come across.
D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 231–236, 2002. © Springer-Verlag Berlin Heidelberg 2002
2
Scoring Opinions
One Saturday afternoon, a colleague and I were pair programming at his house. We disagreed about some design point, and we reached an impasse. He told me a story about how he and another fellow on a previous project had ranked their opinions on a scale of one to 10. A rank of one meant, “If treed by a bear and forced to give an opinion, I’d say thus and so, but I really don’t care one way or the other.” A rank of 10 meant, “I believe this strongly, and I’ll quit if I don’t get my way.” I liked the sound of this, but thought the scale was unnecessarily complex (how is a rank of seven different from a rank of eight, for example?). So I proposed a simpler approach I thought would work. It had five steps:
1. Make sure you understand both opinions first.
2. Rank your opinion on a scale of one to three (1 = “I don’t really care”; 2 = “I care enough to spend some time discussing it to hear your point of view and figure out the best option; I need more convincing before I’ll change my mind”; 3 = “I’ll quit if I don’t get my way”).
3. In order to learn, proceed with the option represented by the highest-ranking opinion.
4. In the event of a tie, handle it in the way that makes the most sense (see the next section, Handling Ties).
5. If there still isn’t a clear path, either flip a coin to pick one of the two original options, or try both and see which works better.
When ranking opinions, threes should be rare. In fact, I can remember only one, and it had nothing to do with code. Typically, the highest anyone goes is a two, and most often the other half of the pair is a one on that particular point.
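The five steps above amount to a small decision procedure. As a sketch of our own (not code from the paper; the function name and the returned phrases are invented for illustration), it can be written down in a few lines of Python:

```python
def resolve(score_a: int, score_b: int) -> str:
    """Apply the 1-2-3 rule: each member of the pair scores his
    opinion from 1 to 3, and the higher score wins outright."""
    for s in (score_a, score_b):
        if s not in (1, 2, 3):
            raise ValueError("scores must be 1, 2, or 3")
    if score_a > score_b:
        return "go with A's opinion"
    if score_b > score_a:
        return "go with B's opinion"
    # Ties fall through to the tactics described in Handling Ties.
    return {
        1: "pick either option and move on",
        2: "discuss: try a third party, flip-flop, or explore in code",
        3: "serious: check for a false tie, then involve the team",
    }[score_a]

print(resolve(2, 1))  # → go with A's opinion
```

In practice the 3-3 branch is almost never reached, which matches the observation above that threes should be rare.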
3
Handling Ties
Take a closer look at step four in the previous section. You can have a tie at any number (1-1, 2-2, or 3-3). In a recent impromptu retrospective with a colleague, we realized that most of our ties have been at one. We have had a few at two. We have never had a tie at three. How should you handle ties at each level? 3.1
Ties at One
If neither member of a pair really cares that much about his opinion, simply pick one and move on. Flip a coin, or alternate whose opinion you pick. Discuss what you need to discuss to see if you can reach consensus, but don’t waste time arguing over trifles. Generally, pick the simplest approach that could possibly work and go with it. 3.2
Ties at Two
Ties at two are worth a little more attention. If each member of a pair cares about his opinion and wants to discuss it, do that. I have spent a half-hour to an hour discussing a 2-2 tie. It is worth the time to avoid
running roughshod over your pair’s opinion, or yours. Design shouldn’t take too much time, but it does take some. I have used a couple of approaches to try to reach consensus on a 2-2 tie. Sometimes a pair can agree after simply discussing the pros and cons of each opinion. Sometimes each member holds his opinion just as strongly after talking about the alternatives. In that case, the pair has to make a choice. Typically, my pairs have:
1. Asked a third party to render an opinion. If it agrees with one of the opinions expressed by the pair, we have proceeded with that option. If it’s different, and better, we have proceeded with the new option.
2. Engaged in an interesting activity we call “flip-flop.” Each member of the pair pretends he holds the other member’s opinion, and argues as vigorously as he can for it.
3. Explored each opinion by coding a little, letting the code tell you which option is “best” for the current situation [1].
The second option takes more energy, usually requires more time, and assumes a healthy dose of humility. But it is amazing how arguing for another’s opinion can help you see flaws in your own, or understand the other person better. The third option isn’t an official XP practice, but it happens often. Note, however, that it is a skill to determine the smallest amount of exploration necessary to demonstrate which approach is better. Don’t assume you have to write code to explore all of the options all of the time, but don’t be afraid to try it either. You’ll get better with practice. I have never had a tie at two go unresolved after using one or more of these tools. 3.3
Ties at Three
Ties at three are serious. If each member of a pair has an extremely strong opinion, and those opinions differ, that pair may have a problem. In this case, maybe one member has a pride issue that he has to resolve. Or maybe this is a “false tie” (more on this below). If the disagreement is that “simple,” there may be hope. If not (e.g., both opinions are very strong, both are very good, and nobody is screaming yet), to be brutally frank I’m not sure what to do. I would recommend using the tools for a 2-2 tie to see if they can help. If they can’t, the team may need to have a group conversation about whether it makes sense to proceed with both members of the pair on board. 3.4
False Ties
I alluded to “false ties” above. This is the case where each member of the pair scores his opinion the same, but there really is no conflict. There are two cases of “false ties” that I have seen:
1. Both members of the pair are saying the same thing, but aren’t understanding each other because they’re not really communicating.
2. Each member of the pair is overlooking a third, better option on which they both can agree easily.
When two people are expressing opinions and arguing for or against, they need to be particularly careful not to stop communicating. Remember, the goal is to solve the same problem, not to “win” the argument. I have been in several situations where my pair and I were so caught up in arguing for our own opinions that we missed a third way. Seeing that third way isn’t always easy. Take a step back to evaluate both opinions rather than arguing about them. Ask yourself, “Are we missing something?” The answer may surprise you.
4
The Natural Point for Numbers
Recall the scenario I laid out at the beginning of the paper. You and your pair reach a design decision, talk about how to proceed, and reach a point where you need to choose a path. Assume you like the 1-2-3 approach and have decided to use it. How do you know when to rank your opinions? Sometimes, it’s natural for a member of a pair to score his opinion immediately and tell his pair. This is especially true when your opinion is a one for you. Rather than wasting time discussing options for something that you don’t care about, you can let your pair know you don’t care by saying, “We could do this, but it’s a one for me.” Most of the time, it will be natural for each member of a pair to tell the other what his opinion score is after some discussion. In my pairing sessions, we tend to talk for a while first, then we get tired of talking and somebody blurts out a score for his opinion. The other person does the same, and we see if we have a tie or a clear path to proceed. When to score shouldn’t be too formal. There shouldn’t be “rules” about it. Listen first, try to understand other viewpoints, then score your opinions when it feels right. If there needs to be more discussion, take the time.
5
A Counting Example
Suppose two people are pairing, and they reach a point where they disagree about a particular design decision. They are building a class that seems like it should implement a given behavior, but it just doesn’t feel right. One member of the pair (call him Bob) suggests simply adding a method to the given class for the time being. The other half of the pair (call him Jack) thinks creating a brand new helper class is the wiser move. Time to discuss. Bob and Jack discuss the alternatives. There is no attacking or evaluation going on here – this is brainstorming. They try to understand the other person’s opinion, and to develop each of the alternatives to its strongest point. Time to score opinions. Bob thinks creating the new class is premature – he doesn’t think the pair has the insight to know they need that class yet. He says, “Simply adding another method to the existing class is a two for me.” Jack sees his point. He hadn’t really thought the new class would be a premature abstraction. He gives his opinion a one. Bob’s opinion wins. The pair proceeds with adding the method.
Suppose, though, that things hadn’t gone this way. Suppose Jack comes up with a pretty good argument for the new class, and he says, “Creating a new helper class is a 2 for me.” We have a tie. They ask Frank if he has a minute to render an objective, third-party opinion. They describe the situation to him and give him a chance to suggest an option they haven’t thought of. He doesn’t think of something new, but agrees with Bob that adding another method makes sense. Bob’s opinion wins. The pair proceeds with adding the method. Perhaps “wins” is the wrong word to use here. Bob and Jack aren’t competing in some sort of computer science beauty contest. They are learning together and have reached a fork in the road. They need to pick a path as efficiently as possible. Sometimes Bob’s recommendation will be the one they choose. Sometimes they’ll choose Jack’s. Sometimes they’ll agree. In all cases, making progress and learning is the most important point.
6
The Benefit of Counting
A key goal of XP is allowing the team to move at maximum speed all the time. This doesn’t mean the team should ignore important decisions, or implement designs without thinking. It does mean that pairs should not waste time on squabbles that interfere with making progress. Disagreement on a project is inevitable. Project stagnation and death don’t have to be. The team is learning constantly. The best way to learn is to do, and to pay attention while you do. The best way to do, when there is more than one option for how to proceed, is to discuss the options just enough to make the smartest choice you know how to make, then get going. Adjust as you learn. Resolving disagreement efficiently is the best way to keep moving. The 1-2-3 approach is the most efficient one I’ve ever seen:
1. It lets a pair surmount an impasse. If there is disagreement on how to proceed, scoring opinions helps you keep moving after thoughtfully considering what other people think (“It was a two for me, but I see your point – your approach seems better. Let’s go with it.”).
2. It gives a pair (and a team) a shorthand for expressing opinions. This is a common language and a shared understanding (“That’s a two for me. What do you think?”).
3. It defuses conflict by allowing each member of a pair to throw out ideas without seeming to contradict his pair all the time (“It’s a one – I’m just brainstorming here.”).
I have seen this approach work for everything from picking a method name to choosing which path to take for implementing an entire user story. In one trivial case, it even worked in helping the team decide where to go to lunch. You don’t need to carry it that far.
7
Conclusion
Pair programming is an extremely difficult thing for human beings [2]. It can be even more difficult when both developers are good at what they do. Strong opinions are unavoidable. Fortunately, if two people are interested at all in pairing, they probably
also share the goal of solving a problem together. If that is true, there is hope they can rise above their natural tendencies to argue about their opinions, and try to communicate. The question is how to do that simply and effectively. If the famed “pair programming mind meld” happens, two programmers could share a brain. In all other cases, having a simple approach to resolve conflict is essential. Scoring opinions is the simplest approach I’ve seen that could possibly work.
Acknowledgements Thanks to Duff O’Melia for being a good pair, and for describing his original “counting experience” with Andy Fekete at Organon Teknika Corporation. Duff helped me refine the idea and put it into practice. Additional thanks to Adam Williams, Michael Hale, Nathaniel Talbott, and Ken Auer for reviewing this paper and offering helpful suggestions.
References 1. Jeffries, R. et al. Extreme Programming Installed, Addison-Wesley (2000). Ron Jeffries, Ann Anderson, and Chet Hendrickson discuss this concept on page 70, as well as the idea that you should explore alternatives with code for “a few minutes.” 2. Auer, K. and R. Miller. Extreme Programming Applied: Playing to Win, Addison-Wesley (2001). We talked about the challenges of pairing at length in chapter 14.
Triggers and Practice: How Extremes in Writing Relate to Creativity and Learning Richard P. Gabriel Sun Microsystems rpg@dreamsongs.com
Abstract. Several XP principles are used by creative writers all the time. Creative writing is about exploration and learning. In this talk we examine the relationships between learning and making things. We will see that everything worth anything is extreme.
D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, p. 237, 2002. © Springer-Verlag Berlin Heidelberg 2002
Extreme Teaching – An Agile Approach to Education Daniel Steinberg Dim Sum Thinking, Inc. dsteinberg@core.com
Abstract. Extreme Programming is built around core values, principles, and practices designed to align the experience of developing software with reality. You can set out to design your application with the assumption that you can anticipate all issues that will arise in a world of static requirements. What do you do when you come across an unforeseen problem or when the requirements change over time? What happens when new and better solutions become apparent in the course of development? Similarly, as instructors, we begin each new term armed with a syllabus and an idea of how the course will run. Then reality rears its ugly head. The software that was supposed to be installed in the lab for the first day of classes isn’t installed. The textbook looked better when you evaluated it last spring than it does now that you are actually trying to teach from it. The students are either much quicker, much slower, or much more diverse than you anticipated. In this workshop we’ll begin the process of selecting core values, principles, and practices that we can use to guide us in the classroom the same way that XP can help guide developers. We do not expect the practices of XP to map over. On the other hand, as an example, much of XP benefits from quick, accurate feedback. Several of the practices are based around this idea. Quick, accurate feedback obviously applies in the classroom as well and may suggest other values, principles, or practices.
D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, p. 238, 2002. © Springer-Verlag Berlin Heidelberg 2002
Extreme Programming as a Teaching Process Moderator J. Fernando Naveda jfnics@rit.edu
Panelists Kent Beck1, Richard P. Gabriel2, Jorge Diaz Herrera3, Watts Humphrey4, Michael McCracken5, and Dave West6
1 Three Rivers Institute kent@threeriversinstitute.org
2 Sun Microsystems rpg@dreamsongs.com
3 Southern Polytechnic State University jdiaz@spsu.edu
4 Software Engineering Institute watts@sei.cmu.edu
5 Georgia Institute of Technology mike@cc.gatech.edu
6 New Mexico Highlands University dwest@cs.nmhu.edu
Abstract. Programming languages are often chosen as "teaching languages" for beginning computing courses in a variety of fields such as computer science, computer engineering, and software engineering because they convey fundamental principles without being overly complex. Pascal and Java are examples of popular programming languages that have been used as teaching tools during the last decade. If a student can master a teaching language, the reasoning goes, he/she will be able to advance easily to more complex, domain-specific languages. Likewise, Extreme Programming (XP) might be considered an appropriate teaching software development process because it teaches the fundamentals of software process without being overly complex and time consuming. One might contend that if a student masters the twelve practices of XP, it is likely he or she will be able to adapt these practices to others that might be more appropriate in a given context. Our panelists will comment on their agreement (or disagreement) with this panel’s premise and will debate the virtues of XP as a valid vehicle for training software professionals in academic settings.
D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, p. 239, 2002. © Springer-Verlag Berlin Heidelberg 2002
From the Student’s Perspective Moderator James Caristi
James.Caristi@valpo.edu
Panelists Frank Maurer1 and Michael Rettig2
1 University of Calgary maurer@cpsc.ucalgary.ca
2 Thoughtworks Michael.Rettig@thoughtworks.com
Abstract. This panel will focus on problems associated with having students learn agile methodologies, whether those problems involve lack of motivation, perceived irrelevance, or difficulty. The panelists are people who have learned agile methodologies on the job, in an academic classroom, and from a commercial trainer. They comment on their experiences: what worked and why, what failed and why, and what they would have preferred to receive in their formal education as students.
D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, p. 240, 2002. © Springer-Verlag Berlin Heidelberg 2002
Perceptions of Agile Practices: A Student Survey Grigori Melnik1 and Frank Maurer2 1 Department
of Information and Communications Technologies, Southern Alberta Institute of Technology (SAIT), Calgary, Canada grigori.melnik@sait.ab.ca
2 Department of Computer Science, University of Calgary, Calgary, Canada maurer@cpsc.ucalgary.ca
Abstract. The paper reports the results of a recent study of student perceptions of agile practices. The study involved forty-five students enrolled in three different academic programs (Diploma, Bachelor’s, and Master’s) at two institutions to determine their perceptions of the use of extreme programming practices in doing their design and coding assignments. Overwhelmingly, students’ experiences were positive, and their opinions indicate a preference to continue using the practices if allowed.
1
Introduction
Emerging agile or lightweight software development methodologies have great potential. According to Giga Information Group Inc., more than two-thirds of all corporate IT organizations will use some form of agile software development process within the next 18 months [1]. However, so far only a small percentage of development teams have adopted an agile approach. Very often, high-profile consultants are hired to introduce agile methods into a company. These consultants usually are very talented developers and/or mentors. Hence, one issue that is often discussed is whether agile methods work because of their engineering and management practices or because the people who introduce them are simply very good developers. A related issue is whether they work simply because they focus on things that software developers like to do (e.g. writing code, producing quality work) while de-emphasizing aspects that developers often hate (e.g. producing paper documents). The argument there is basically: if you make your development team “happy”, you will get a very productive team. Our study looks into the latter issue. Our goal was to determine the perceptions of a broad student body on agile practices: Do various kinds of students like or dislike agile practices? Are there differences based on education and experience? We believe that agile methods will only be successful in the long run if the majority of developers support them – in particular, they need to work with average developers, as this is what (more or less by definition) the average project employs. D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 241–250, 2002. © Springer-Verlag Berlin Heidelberg 2002
242
Grigori Melnik and Frank Maurer
Presently, there is a lot of anecdotal evidence and very little empirical, validated data on perceptions of agile methods. Studies by Williams and Kessler [2, 3] evaluate pair programming, which seems to allow students to complete their assignments faster and better than with solitary learning. Gittins and Hope [6] identify a number of human issues in communications, technology, teamwork, and political factors that significantly influence the implementation and evolution of XP in a small software development team. In the next four sections, we present an overview of the study, its subjects, empirical results, and qualitative outcomes. We conclude with a summary of our findings.
2
Study Overview
The intent of our descriptive study was to see what the student perceptions of agile practices are and how they vary (if at all) depending on the program. The study focused on agile engineering practices that come from eXtreme Programming (XP)1. The subjects of the study were students at various levels of experience (from students in the second year of higher education up to graduate students, who often had several years of experience in software development). We developed a Web survey of 20 questions inviting students’ general comments on XP and on three specific aspects: pair programming, project planning using the planning game, and test-first design. Both qualitative open-ended questions and quantitative questions (on a 5-point Likert scale) were included. Questions complemented each other and provided both depth and breadth of coverage on the topic. When looking at the students’ experiences, we asked a number of questions: Did the students enjoy XP practices? What worked for them? What problems did they encounter? Would they use XP practices in the future (if allowed)? What were their impressions of test-first design? How did XP improve their learning? Informal discussions were also conducted during the semester to get informal feedback on other aspects of XP that were used in the courses (continuous integration, collective code ownership, refactoring, coding standards). The use of a mix of qualitative and quantitative research methods provided an opportunity to gain a better understanding of the factors that impact students’ and developers’ experiences with XP. It should be mentioned that the study performed is not intended to be a complete formal qualitative investigation. Validation of the results with much larger studies is required (and is planned for the future).
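As an aside on method (this sketch and its data are ours, not the study's), responses on a 5-point Likert scale like the one described here are typically summarized with simple frequency counts and a mean:

```python
from collections import Counter
from statistics import mean

# Invented responses to one survey question
# (1 = strongly disagree ... 5 = strongly agree).
responses = [5, 4, 4, 5, 3, 4, 5, 2, 4, 5]

counts = Counter(responses)
print("mean score:", round(mean(responses), 2))  # → mean score: 4.1
print("agree or strongly agree:",
      sum(counts[s] for s in (4, 5)), "of", len(responses))  # → 8 of 10
```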
3
Student Populations and Courses
Students of three different levels of computer science programs from the Southern Alberta Institute of Technology (SAIT) and the University of Calgary were the subjects for this study. All individuals were knowledgeable about programming. Data was collected partially during the semester and partially at the end of the academic semester in which XP practices were introduced.

1 We are using “agile practices” to make clear that we did not use the full set of XP practices in our study.

Perceptions of Agile Practices: A Student Survey

3.1
College-Level Diploma Program
The Computer Technology Diploma program at SAIT is designed to make graduates self-reliant by concentrating on their investigative and problem-solving skills. We studied 2nd-year students majoring in Information Systems. During the first year, they had studied the fundamentals of object-oriented programming with Java using traditional software development methodologies. 22 respondents were enrolled in “Data Abstraction and Algorithms”, a course with an emphasis on designing and building complex programs that demonstrate in-depth understanding of abstract data types. The following XP practices2 were selectively adopted: test-first design, pair programming, unit testing of all code, frequent integration, and collective code ownership. Consistent coding styles were encouraged. Code exchanges were sometimes initiated, and teams used the components designed by their peers. Emphasis was on the importance of human collaboration and shortened life cycles. Documentation was embedded in the code. Students were required to avoid redundancy and confusing remarks and to assign meaningful names to properties and methods. All students were equipped with their own laptops. Whereas in usual pair programming two students work simultaneously at one computer on a program, pairs in this course would utilize two computers: one for coding and one for verifying the API, writing and running short scriptlets, etc. Both driver and observer had access to one of the laptops; therefore, competition for the mouse and keyboard was eliminated. 3.2
College-Level Post-Diploma Baccalaureate Program
The Bachelor of Applied Information Systems Technology (BAI) is a two-year program. Twelve students majoring in Information Systems Development and Software Engineering took part in the study; six had prior work experience. Students were enrolled in the “Internet Software Techniques” course, which introduces the concepts and techniques of Web development. They built real-life applications based on different tiers of distributed Web systems. Students were required to form pairs and to work on all programming assignments using the following principles of XP: test-first design, continuous integration, pair programming, and collective code ownership. 3.3
University M.Sc. Graduate Program
The subjects participating in the survey were enrolled in a graduate course “Agile Software Processes”3 as part of their M.Sc. program. Nine of twelve students had several years of software development experience as developers or as team leaders.

2 In fact, this set of practices is what these students now see as XP.
3 http://sern.cpsc.ucalgary.ca/courses/SENG/609.24/W2002/seng60924.htm
Grigori Melnik and Frank Maurer
A four-year degree in computer science or software engineering was required for program entry. The course-based program requires an additional three years of experience in software development. The course is not required for completion of the M.Sc. degree.4 At least two of the students had prior industrial experience with XP. Agile methods such as eXtreme Programming, Scrum, Agile Modeling, and Feature-Driven Development were discussed and applied in the course. For the assignment, students were split up into two groups of six. Each group developed a small Web-based system. The teams were strongly encouraged to use all XP practices. Each team delivered three releases of this system over 12 weeks, one every 4 weeks. The survey was conducted prior to the second release. Generally, students do not work full time on a course. On average a student could spend about 5-7 hours/week on the course assignment5. Hence, the effort going into a release is approximately 20 hours per student (which is much lower than in XP or any other agile method). Informal student feedback indicated that the first release was strongly impacted by the ramp-up time for learning the tools (IBM WebSphere Studio, DB2, CVS, Ant) and by environment instabilities (which were resolved for the second release). In addition, the feedback also pointed to problems in scheduling pair programming sessions, as most of the students were part-time and only rarely available at the UofC.
4
Empirical Data
Considering the relative simplicity of the analyses undertaken, the conclusions we report are descriptive statistics only. Mean scores (M) were used to determine the strength of various statements, where a higher mean score means that respondents agree more strongly with a particular statement. The standard deviation (SD) shows the consensus among respondents. A Likert scale from 1=“strongly disagree” to 5=“strongly agree” was used. Forty-five students voluntarily and anonymously completed the questionnaire. Table 1. Summary of Respondents by Academic Programs
                                                      Invitations  Respondents
College-level Diploma Program (2 years)                        35           22
College-level Post-Diploma Baccalaureate Program
  (2+2 years)                                                  27           12
University Graduate Program (4+2 years)                        12           11
Total, All Programs                                            74           45
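The mean scores (M), standard deviations (SD) and agreement percentages reported in this section can be reproduced from raw Likert responses with a few lines of Python. The response list below is invented for illustration; the actual per-question survey data are not published here.

```python
import statistics

# Hypothetical Likert responses for one question
# (1 = "strongly disagree" ... 5 = "strongly agree").
responses = [5, 4, 4, 5, 3, 4, 4, 5, 4, 3]

m = statistics.mean(responses)        # mean score (M)
sd = statistics.stdev(responses)      # sample standard deviation (SD)
# share of respondents who somewhat or strongly agree (rating of 4 or 5)
agree = sum(r >= 4 for r in responses) / len(responses)

print(f"M={m:.2f}; SD={sd:.2f}; agree={agree:.0%}")
```

Note that the paper does not state whether the sample or the population standard deviation was used; the sketch assumes the sample form.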
Fig.1(a) illustrates that the overwhelming majority of respondents (91%) believe that using XP improves the productivity of small teams (M=4.11; SD=0.82). 87% of students (M=4.18; SD=0.71) suggested that XP improves the quality of code, and 82% of all respondents (M=3.89; SD=0.82) would recommend that their company use XP.

4 Hence, students taking this course are interested in agile methods. Most of them had a positive bias while only one student expressed some reservation at the beginning of the course.
5 This estimate is based on time sheets over 10 weeks from one of the UofC groups.
[Figure 1: four bar charts, (a)–(d), showing the distribution of responses on the five-point scale from “strongly disagree” to “strongly agree” for the questions listed below.]

Q1. I would recommend to my company to use XP.
Q2. I believe that using XP improves the quality of code.
Q3. I believe that using XP improves the productivity of small teams.
Q5. I personally like pair programming.
Q6. I believe that pair programming speeds up the development process.
Q7. I believe that pair programming improves software quality (better design, code easier to understand).
Q8. If allowed by my company, I will use pair programming in the future.
Q12. My team used test-first design for the assignments.
Q13. Test-first design helped to improve software design.
Q14. Test-first design speeds up the testing process.
Q15. Test-first design improves software quality.
Q16. Project planning using the planning game generates more accurate estimates than the methods that I/my team used so far.
Q17. I’m confident with my estimate on user stories.
Q18. Using the planning game makes the team more adaptive to changing requirements.
Q19. Progress tracking works well following XP practices.

Fig. 1. Extreme Programming Perceptions Distribution
Fig.1(b) shows the results on practicing pair programming. 84% of all respondents in both institutions (M=4.07, SD=1.02) enjoy pair programming. This is reasonably high. 80% of students (M=3.80; SD=1.11) realized that pair programming speeds up the development process. 84% of respondents (M=4.04; SD=0.92) perceived pair programming as a technique that improves software quality and results in better
designs/programs that are easier to understand. 89% of students (M=4.07; SD=0.80) expressed their intention to continue using pair programming if allowed. When asked about test-first design (see Fig.1(c)), only 58% of the respondents (M=3.51, SD=1.11) acknowledged using it while working on their assignments. There are a number of reasons why this technique may not have been as popular as the others. One aspect mentioned informally by the UofC students was that for Release 1 they were struggling too much with getting the tools going. Another is the well-known temptation to execute the code as quickly as possible. A third is the students’ past experience with practices in which testing was done last. Only 69% (M=3.67, SD=0.97) believed that test-first design speeds up the testing process, although 80% of all students (M=3.84, SD=0.84) stated that test-first design improves software quality. As only the UofC grad students were using the planning game in their assignment, the number of responses is very small (see Fig.1(d)). Overall, the perceptions of the planning game are ambiguous except on progress tracking (10 out of 11 responses agree that “Progress tracking works well following XP practices”). Our (subjective) interpretation of these results is that it takes more than two 20-hour iterations to implement the planning game consistently and effectively. 4.1
A Brief Summary Interpretation of the Quantitative Data
The perceptions of XP practices are overwhelmingly positive. This holds for XP in general as well as for pair programming and test-first design. And it holds across all levels of students (with the M.Sc. students slightly less positive).
5
Reflections: Qualitative Results
5.1
XP in General
The feedback on XP we got from students was positive. Concerns mostly stemmed from problems related to the non-co-location of the team in the course setting: “XP-programming is very effective as different people have the opportunity to apply different skills. I've noticed how the quality improves and the speed at which the project is completed also improves greatly.” “I think it is a great tool to use, because different ideas are brought to the table.” “XP practices worked once we were able to get together. One problem we incurred was that we were not co-located.” “What worked - Commitment and Responsibility from the developers - Undoubtedly pair programming helped us a lot - Same level of playing field helped us to stay more focused - Increased level of confidence and enthusiasm for the team members. - Everybody is aware of other user stories and technical details. - Testing and on going build processes are in place and iterative integration over the whole life cycle of project. What didn't work - Estimation were very poor for the first round.”
5.2
Pair Programming
The first practice considered in more detail was pair programming. Most of the students found the interaction between partners helpful. “I liked working together because it helped me to keep up my energy.” There were a few requests to work in trios, and those requests were not satisfied. Our observations show that pairs had some difficulties adjusting when there was a big difference in skill level. A student stated: “There was a huge difference in skill level in my pair, so we weren't very productive when I wasn't driving. But that was only because they hadn't memorized some of the API, as far as identifying semantic errors, they were top-notch.” A number of students suggested that partners in a team should be matched according to their qualifications and experience. Here we detected a split of opinions. In their understanding of the objectives of pair programming, some students focused only on getting the code written in a more efficient manner, and not on mentoring. “Pairing a strong person up with a weak person also doesn't really help the weak person… Often the strong person just does the work and the weak person points out the syntactic mistakes.” They found mentoring to be a drawback of pair programming. If the partner didn’t understand something, they would have to spend extra time explaining it, which under tight deadlines was perceived to be a real problem.
Other students considered this to be a plus of the collaborative learning process: “Not only did it allow programmers to catch possible mistakes immediately that would have taken until the next debugging stage to sift through, but I noticed that it allowed weaker programmers the opportunity to learn from the stronger partner while working on actual material (as opposed to theory in a classroom).” In fact, similarly to the findings of Simon [5], our study suggests that pair learning and extreme learning have an advantage over the traditional theories of learning, which treat learning as a concealed process. A number of students reported difficulties with finding an adequate partner – demonstrating the “All good ones are taken” attitude. Although the majority of the students liked pair programming, many of them mentioned that “…sometimes it is nice to just struggle by yourself with a problem without having someone looking over your shoulders.” In terms of discipline and peer pressure as some of the factors that make pair programming work, our findings were consistent with Williams et al. [3]. At SAIT, all pairs but two handed in their assignments on time, and the UofC teams met their release dates. For some of the students, their work, demanding schedules at the institute/university and other commitments interfered with practicing pair programming. Some of them were limited to a few hours of pair programming a week and as a result were not as successful as other teams. In their responses, students mentioned that “…pair programming works when our [team members’] schedules coincide.”
Most students at all levels liked pair programming. Some students who were very skeptical at the beginning of the course turned around and changed their opinion. In fact, one of the skeptical students is now considering introducing pair programming in his team in industry. Some students resisted the idea of pair programming and demonstrated what is called the “Not Invented Here” syndrome – they always had to be drivers and they couldn’t trust other people’s code. On a few occasions they would modify the requirements specification because they didn’t feel it was the right one. They did that without asking the client (the instructor in this case). Needless to say, these students will find it hard to adapt to real work environments, where one must be able to work as a team and where improvising on a client’s requirements without consulting the client is rarely welcomed. One important aspect and a skill required for pair programming to succeed is the ability to communicate effectively. Johnson, Sutton and Harris, in their study of 32 undergraduate students at the University of Queensland [4], find that “students seem to be aware of the value of effective communication and felt that role-play activities, discussion, small-group activities and lectures contributed the most to their learning.” Similar perceptions were observed at SAIT: “I think that 'networking' is an extremely important part of learning ... finding out how other people do things shared with your ideas can make anything unique and efficient!” 5.3
Test-First Design
Initially, test-first design was not easy to implement. Students found the tests hard to write, and they were not used to thinking the test-first way: “Difficult to write test cases before writing the code for the functionality.” We believe that the underlying reason is that test-first design is not about testing but about design. And doing design is hard – independently of how you document it (as test code or in UML). Hence, test-first design simply forces design issues forward, while UML diagrams can be sloppy. This impression was supported by some of the students: “I think the test code is more a part of design then it is just testing.” and “I felt that testing first gave a better sense of "here is what must be done", and how to approach it.” Some of the students believed it was logically confusing and “almost like working backwards.” One student noted: “Sometimes we had no idea where or how to start to solve the problem so building a test first design was difficult and frustrating. Once we had an idea how the design should look (from a general point of view) then the test-first approach helped.” The students did not know how many tests would be enough to be satisfied that the desired functionality had been implemented correctly. Also, some believed testing involved too much work, and they didn’t see the short-term benefits. Nevertheless, two-thirds of respondents recognized that test-first design speeds up the testing process: “As much as I like to get right down to coding, I believe that writing tests first saves time and effort later. It is just a matter of discipline, old habits die hard.” and even more students believed that it improves the quality of code:
“The test-first design is a valuable technique because while you create possible test-cases you will undoubtedly uncover bugs, flaws or defects in the spec. By finding these defects at this early stage you are saving yourself or the development team time and money in the future since they address those problems earlier.” In some cases, we could observe that the test-first practice was abandoned when the assignment due date was close and the project came under pressure. Jones studied the issue [7] and came to a conclusion that we support: “Although it is impractical to teach every student all there is to know about software testing, it is practical to give students experiences that develop attitudes and skill sets need to address software quality.” The XP approach of test-first design is quality-driven. Our evidence shows that even though students did not absorb the concept of test-first design as enthusiastically as pair programming, they do realize the importance of testing and see the benefits of finding and fixing bugs at an early stage of application design. Additionally, an exercise of using their test cases to test another student’s implementation provided a broader view of the issues of code quality. 5.4
The Planning Game
The UofC student comments on the planning game are more positive than the quantitative evaluation would indicate: “It's effective because of the brainstorm type of team work. I find estimates will become more accurate as the team accumulates more experience” and “The short time involved in getting through planning was nice.” Nevertheless, there were some critiques and suggestions for improvement: “I felt that the planning game did not include enough communication of the overall goal. Visual communication would also have helped, such as models, to allow the group to see a common picture.”
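The estimation-and-velocity mechanics behind the planning game can be sketched in a few lines: the customer orders stories by value, developers estimate them in points, and the next iteration admits stories up to the team's measured velocity. The story names and numbers below are invented for illustration, not taken from the course.

```python
# Planning-game sketch: stories in customer priority order, with
# developer estimates in story points.
stories = [
    ("login page", 3),
    ("story board", 5),
    ("progress chart", 2),
    ("export to CSV", 8),
]

def plan_iteration(stories, velocity):
    """Admit stories in priority order while they fit the velocity budget."""
    planned, budget = [], velocity
    for name, estimate in stories:
        if estimate <= budget:
            planned.append(name)
            budget -= estimate
    return planned

print(plan_iteration(stories, velocity=10))
```

As the survey comments suggest, the mechanics are simple; the hard part is that estimates and velocity only become trustworthy after a few iterations of feedback.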
6
Summary and Future Work
Our study shows that students are very enthusiastic about extreme programming practices. They find their experiences positive, and not only would they prefer to continue using extreme programming in the future, but they would also recommend these practices to their companies. We found no significant differences in the perceptions of students across the various levels of educational programs and experience. The UofC students (most of whom have several years of experience) were – overall – a bit more cautious than the SAIT students. We believe that this may come from their experience that no method is a silver bullet for software development. Overall, our results indicate that a broad
range of students (although not everyone) accepts and likes agile practices. And this is, in our opinion, a prerequisite for their widespread adoption in industry. A limitation of the study is the use of university and college students as participants. As this is not a randomly chosen group of the overall developer population, the results of our study are not directly generalizable. Also, a larger population of voices needs to be considered for a more comprehensive study. Inferential statistical techniques may also be employed. However, given the limitations above, the present analysis does provide a snapshot of some aspects of perceptions of extreme programming practices. We hope that the observations made will provoke discussion and future studies on a wider selection of students and practitioners, and we would like to invite any interested parties (both academic and industrial) to take part in future studies.
Acknowledgements The authors would like to thank all students from the University of Calgary and SAIT who participated in the study and provided us with their thoughtful responses. This work was partially sponsored by NSERC, ASERC, UofC and SAIT.
References 1. Sliwa, C. “Agile Programming Techniques Spark Interest”. ComputerWorld. March 14, 2002. 2. Williams, L., Kessler, R. “Experimenting with Industry's "Pair-Programming" Model in the Computer Science Classroom”, Journal on Computer Science Education, March 2001. 3. Williams, L., Kessler, R., Cunningham, W., Jeffries, R. “Strengthening the Case for Pair Programming”. IEEE Software, Vol. 17, July/August 2000, pp. 19-25. 4. Johnson, D., Sutton, P., Harris, N. “Extreme Programming Requires Extremely Effective Communication: Teaching Effective Communication Skills to Students in an IT Degree.” In Proceedings of ASCILITE 2001, pp. 81-84. 5. Simon, H. “Learning to Research about Learning”. In S. Carver & D. Klahr (Eds.), Cognition and Instruction: 25 Years of Progress. Mahwah, NJ: Lawrence Erlbaum, pp. 205-226. 6. Gittins, R., Hope, S. “A Study of Human Solutions in eXtreme Programming”. In G. Kadoda (Ed.), Proceedings of the 13th Workshop of the Psychology of Programming Interest Group, 2001, pp. 41-51. 7. Jones, E. “Integrating Testing into the Curriculum — Arsenic in Small Doses”. In Proceedings of the 32nd Technical Symposium on Computer Science Education, February 21-25, 2001, Charlotte, NC, pp. 337-341.
XP in a Legacy Environment
Kuryan Thomas1 and Arlen Bankston2
1 Technical Director, XP Coach, C.C. Pace Systems, kuryanthomas@ccpace.com
2 Senior Consultant, User Experience Manager, C.C. Pace Systems, arlenbankston@ccpace.com
Abstract. Extreme Programming is not just for the lucky few projects where applications are built from scratch. Learn how XP’s practices can be brought to bear on a poorly designed and untested legacy code base in need of salvation. The audience will operate on a J2EE version of the famous "Bowling Score" application, developed under a waterfall methodology and set within a living environment populated with convincingly nervous clients.
By Completing this Hands-on Interactive Tutorial, Participants Will Learn How to:
• Structure technical infrastructure tasks within the overall release plan. • Identify which XP practices can be implemented immediately on legacy code systems, and which ones must be gradually introduced. • Use techniques for coping while XP practices are only partly implemented. • Establish when and how to proceed to deep information-flow refactoring as the XP support infrastructure comes online. • Improve the legacy code base so that new functionality can safely be added. • Enhance the usability and user interface of the application with minimal risk. Audience. Developers with basic Java and J2EE experience, user interface designers, technical managers, and anyone else wishing to gain expertise in XP implementation on a project involving legacy code. Content Outline. In this tutorial we will undertake a journey to hunt and tame a beast of legendary proportions...
• Prologue The journey begins… Participants will be introduced to a land under siege by a beastly application, and an intrepid tribe of developers fearing for their livelihoods. • Chapter I: Hunting the Beast Scouring the land in search of the monster… Using a legacy implementation of the ubiquitous "Bowling Score" application as a J2EE project on the brink of failure, participants will probe for weaknesses such as lack of testability and overly complex design. • Chapter II: Laying the Trap The beast is captured and immobilized… Through role playing in a simple release plan, participants will seek to establish control of the project. This will include stabilizing the system with an automated build and unit tests, bolstering communication between teams through the planning game, and introducing usability tests into the user acceptance process. D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 251–252, 2002. © Springer-Verlag Berlin Heidelberg 2002
252
Kuryan Thomas and Arlen Bankston
• Chapter III: Taming the Beast The beast is taught not to bite the hand that feeds it… Using the control established in the stabilization phase, participants will begin to add value to the application, making it more usable and crushing bugs mercilessly underfoot. This will be accomplished by means of acceptance tests, code refactoring, and examination of the information flow and visual presentation of the application. • Chapter IV: Domesticating the Beast The beast is compelled to return its stolen value… Participants will discuss how to implement new functionality while minimizing risk, expand XP’s influence by introducing previously subdued practices, and introduce usability and GUI enhancements into the system, while discussing how these practices can be cleanly integrated into XP and increase project velocity. • Epilogue And they lived happily ever after? Participants will engage in a process reflection to review the intricacies and implications of the lessons learned. About the Presenters Kuryan Thomas is a Technical Director at C.C. Pace Systems in Fairfax, Virginia. Currently he is a Lead Developer and XP coach, and works with developers and managers to help maintain and enhance a large J2EE application using XP. Kuryan has over 15 years of experience with Object Technology applications ranging from embedded real-time systems to enterprise-class business applications. His professional interests include XP as a development methodology, object-oriented design and implementation, web services, and mobile computing. Arlen Bankston is a Senior Consultant and User Experience Manager for C.C. Pace Systems in Fairfax, Virginia. He is currently engaged as a Usability & User Interface Design Mentor, in which capacity he works with business managers and analysts to gather requirements and translate them into usable interfaces, then with developers to implement these designs.
His professional interests include the integration of usability and UI design best practices into agile methodologies, as well as the continuing advancement of usage-centered design methods and processes in general.
XP for a Day
James Grenning
Senior Consultant, Object Mentor, Inc.
grenning@objectmentor.com
Abstract. Attendees will practice XP for a day. Students will experience the XP development cycle and many of the practices in the context of an XP iteration plan and two development iterations.
Students Will: • Form XP development teams • Participate in an iteration plan • Accept new stories • Identify development tasks • Sign up for tasks • Experience pair programming • Program stories using test-first design • Write and run automated acceptance tests Discussions occur throughout the day to reflect on experiences and lessons learned.
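The "test-first design" step in the list above can be sketched in miniature: the tests are stated before the function body exists, and the simplest implementation that satisfies them is then written. The example below is invented for illustration (a toy scoring function inspired by the "Bowling Score" exercise, deliberately limited to games with no strikes or spares), not material from the tutorial.

```python
def game_score(rolls):
    """Score a bowling game consisting only of open frames
    (no strikes or spares) -- the simplest thing that could
    possibly work for the tests below."""
    return sum(rolls)

# In test-first style, these checks were written before the
# function body above; they pin down the behaviour that the
# implementation then has to satisfy.
def test_gutter_game():
    assert game_score([0] * 20) == 0

def test_all_open_frames():
    assert game_score([3, 4] * 10) == 70

test_gutter_game()
test_all_open_frames()
print("all tests pass")
```

A real scorer would next grow strike and spare rules, one failing test at a time.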
D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, p. 253, 2002. © Springer-Verlag Berlin Heidelberg 2002
Accelerated Solution Centers – Implementing DSDM in the Real World
Alan Airth
DSDM Consortium Director for Professional Development
Alan.Airth@xansa.com
Abstract. Accelerated Solution Centres (ASC) use a collaborative and agile approach to deliver effective business solutions. The ASC approach gives a rapid focus on business needs and project feasibility. Collaboration of business and technical resource ensures that high quality decisions can be made from the very beginning of the project lifecycle, thus ensuring that projects continue only when based on a sound rationale. Throughout the project the collaborative approach ensures a strong focus on business priorities. The ASC approach is based on the DSDM framework and techniques, supporting iterative and incremental development to deliver fit for purpose solutions. This tutorial describes the approach, supported by examples from practical experience.
Tutorial Duration. Half-day
Content Outline. A Different Way of Working. Accelerated Solution Centres (ASC) are an innovative and effective way to address business problems and deliver fit-for-purpose, technically enabled solutions. They are designed to support collaborative working, ensuring all stakeholders share the same understanding and give their full commitment to the project. Without this collaborative and co-operative approach, projects rarely achieve success. The ASC uses the DSDM framework to shape the project lifecycle. This provides the control for agile and responsive development. It is designed for control of projects where change is expected: baselining at a high level and converging on solution details, ensuring commitment and collaboration of all key stakeholders. Techniques. The ASC provides a set of techniques that enable the accelerated development of fit-for-purpose solutions. The techniques are designed to support the framework – they do not prescribe modeling and programming methods. The project may select such techniques as appropriate for the technology. Principles. • Active user involvement is imperative • Teams must be empowered to make decisions • Frequent Delivery • Fitness for Business Purpose is the essential criterion for acceptance of deliverables D. Wells and L. Williams (Eds.): XP/Agile Universe 2002, LNCS 2418, pp. 254–255, 2002. © Springer-Verlag Berlin Heidelberg 2002
Accelerated Solution Centers – Implementing DSDM in the Real World
255
• Iterative and incremental development is necessary to converge on an accurate business solution • All changes are reversible • Requirements are baselined at a high level • Testing is integrated throughout the lifecycle • A collaborative and co-operative approach between all stakeholders is essential The Team. The team consists of experienced business and technical resources. Technical skills include application development as well as infrastructure. The ASC Environment and Culture. An appropriate environment and culture foster collaborative working. The team is located in the business area to emphasise the business-driven approach. The physical environment and facilities support a collaborative approach. The Practice. This section describes experiences from introducing the ASC approach into a large organisation, showing where it turned projects around and resulted in successful delivery. Presenter Resume. Alan Airth is the current DSDM Consortium Director for Professional Development and is a regular speaker for the DSDM Consortium at their annual conference and roadshows. Alan is the Chairman of Facilitation Accreditation Services, the first worldwide accreditation programme for facilitators recognised by the International Association of Facilitators. Alan has worked in the IT industry for fourteen years in a variety of roles including consultant, trainer, mentor, facilitator, project manager, team manager and programmer. His area of specialism is process improvement in the development and deployment of software. He has eight years of experience on Rapid Application Development and DSDM projects in a wide range of industry sectors and is recognised as an expert in the field.
Refactoring: Improving the Design of Existing Code
Martin Fowler
Chief Scientist, ThoughtWorks, Inc.
fowler@acm.org
Abstract. Almost every expert in Object-Oriented Development stresses the importance of iterative development. As you proceed with iterative development, you need to add function to the existing code base. If you are really lucky, that code base is structured just right to support the new function while still preserving its design integrity. Of course, most of the time we are not lucky, and the code does not quite fit what we want to do. You could just add the function on top of the code base, but soon this leads to applying patch upon patch, making your system more complex than it needs to be. This complexity leads to bugs and cripples your productivity.
Introduction. Refactoring is all about how you can avoid these problems by modifying your code in a controlled manner. Done well, you can make far-reaching changes to an existing system quickly, and without introducing new bugs. You can even take a procedural body of code and refactor it into an effective object-oriented design. With refactoring as part of your development process you can keep your design clean, make it hard for bugs to breed, and keep your productivity high. In this tutorial we’ll show you an example of how a lump of poorly designed code can be put into good shape. In the process we’ll see how refactoring works, demonstrate a handful of example refactorings, and discuss the key things you need to do to succeed.
Duration: Half Day
Level: Intermediate
Presenter’s Resume
Martin Fowler is the Chief Scientist for ThoughtWorks, Inc., a leading custom e-business application and platform development firm. For a decade he was an independent consultant pioneering the use of objects in developing business information systems. He’s worked with technologies including Smalltalk, C++, object and relational databases, and EJB, in domains including leasing, payroll, derivatives trading, and healthcare. He is particularly known for his work in patterns, the UML, lightweight methodologies, and refactoring. He has written four books: Analysis Patterns, Refactoring, the award-winning UML Distilled, and Planning Extreme Programming. Martin has been giving this tutorial in various forms since around 1997. It has been presented at a wide range of commercial and academic conferences including OOPSLA (where it was one of the top 5 tutorials in 1999), Software Development, and JavaOne. The tutorial is based on the early chapters of Martin’s book “Refactoring: Improving the Design of Existing Code”.
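The tutorial works through Martin’s own example from the book; as a stand-alone illustration (our sketch, not the tutorial’s code, with invented class and method names), here is one of the simplest refactorings, Extract Method, applied to a small Java billing routine:

```java
// A small, hypothetical billing routine before and after the
// Extract Method refactoring (all names invented for illustration).
import java.util.List;

public class Invoice {
    private final List<Double> amounts;

    public Invoice(List<Double> amounts) { this.amounts = amounts; }

    // Before: the summing loop is tangled into the formatting code.
    public String statementTangled() {
        double total = 0;
        for (double a : amounts) total += a;
        return "Amount owed: " + total;
    }

    // After: the loop is extracted into a well-named method, so the
    // calculation can be read, reused, and tested on its own.
    public String statement() {
        return "Amount owed: " + total();
    }

    double total() {
        double total = 0;
        for (double a : amounts) total += a;
        return total;
    }

    public static void main(String[] args) {
        Invoice inv = new Invoice(List.of(1.5, 2.5));
        // The refactoring preserves observable behavior:
        System.out.println(inv.statement().equals(inv.statementTangled()));
    }
}
```

The observable behavior is unchanged; only the structure improves, which is the defining property of a refactoring.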
The Agile Database
Pramod Sadalage1 and Peter Schuh2
1 Data Architect and Lead DBA, ThoughtWorks, Inc.
pramod@thoughtworks.com
2 Independent Consultant
agile@peterschuh.com
Abstract. Does your development team’s database strategy suck? Does your team even have a database strategy, or is this vital portion of your enterprise application - the repository of the data upon which your entire application rests - in the hands of some guy who sits on another floor or two buildings down the street? Is he even a member of your team? The tutorial’s presenters do not have their sights trained on your database administrator, but they do intend to lay waste to the inefficiency and backward thinking inherent in the database-related practices of most development teams. The tutorial will present a proven plan for adding agility to the database (starting with making the DBA a true member of the team). Participants will be shown how the application database can be structured to provide each team member with control of their own data space (similar to individual application instances). The presenters will explain, in detail, how processes and tools can be used to make the database more manageable and open to refactoring. Finally, the presenters will address the issue of applications that are already in production, and detail how the above topics still can apply to production environments.
Objective: Attendees will receive both (1) a thorough introduction to the concepts behind an agile database strategy, and (2) implementation-specific details that will allow them to introduce these concepts onto a development team.
Attendee Background: Developers, DBAs, and project managers are encouraged to attend. Experience on a development team of eight or more people is strongly recommended. Experience with an agile development methodology is recommended, but not required. A general understanding of relational databases and/or SQL is, similarly, recommended but not required.
Experience Level: Intermediate
Presentation Format: Lecture with slides
Duration: Half Day
Outline:
1. The Development DBA
2. Managing the Database: What Makes It So Complicated?
3. The Database Instance
4. The File System Metaphor
5. The Strategy
6. Implementation
7. Agile Database Development in a Production Environment
Presenters
Pramod Sadalage works as a Data Architect and Lead DBA at ThoughtWorks, Inc., a leading custom e-business application and platform development firm. At present, he is working on a large J2EE application that uses XP, an agile methodology that may not have been sufficiently discussed in the context of databases. While on this project, Pramod pioneered the practices and processes of agility in the database. He has worked with a variety of relational databases including Oracle, MS SQL, Informix, and Sybase. Pramod has more than eight years of experience in domains including financial, leasing, material management, and insurance.
Peter Schuh is an independent consultant, specializing in system analysis and project management. He has more than five years’ experience participating in and managing projects in the leasing, health care, and e-commerce fields. Peter has written and spoken about Extreme Programming, the adoption of agile processes, agile development’s impacts upon database administration, and the ObjectMother pattern.
Change Wizardry – Tools for Geeks
Joshua Kerievsky1 and Diana Larsen2
1 Industrial Logic
jlk@industriallogic.com
2 Senior Consultant, FutureWorks Consulting LLC
Abstract. Have you noticed that it can be difficult to get your colleagues and/or your organization to adopt agile methodologies? "What’s wrong with our current process?" "We don't have time for change." "I’ll never pair-program." "It's not in the budget." "We've never done that before." Understanding the organizational dynamics of change is critical to influencing the changeover from traditional software development to XP or other agile methodologies. This tutorial will introduce you to four change tools, which the presenters have found helped people and organizations actually effect a change.
In this tutorial you’ll apprentice as a change wizard, learning about:
Change Readiness Assessments. Conjuring up the right assessment tool will help you determine answers to critical questions like: "Is there a chance for change?" "Do pockets of readiness exist in my organization?" "What would need to be different around here for change to occur?" "Will the path to change in my organization be a multi-step process or one transformational leap?"
Project Retrospectives. Cast the spell of learning. Change cannot happen without learning, and retrospectives help organizations learn. The examination of past performance data, which happens during a retrospective, yields a climate that fosters readiness for change.
Paradoxes. Amaze and astound your colleagues with paradox voodoo. Agile methodologies are filled with paradoxes, contradictory conditions that must be resolved for effective and harmonious development. Once you tap in to the power of personal and team paradoxes you are better equipped to understand and work with resistance to change.
Chartering. Learn a little piece of magic called a Charter and you'll know how to make a commitment to change actually stick. A charter is a reference point or shared agreement between team players that provides direction during the course of a project.
Join us and learn to do your own change magic.
Outline:
I. Introductions
II. Understanding Change Dynamics – The Trances of the Status Quo
III. Change Readiness Assessments – Divining the Source
IV. Project Retrospectives – The Spell of Learning
V. Paradox Voodoo
VI. Chartering – Change Wizardry That Sticks
VII. Wrap-Up
Presenters: Joshua Kerievsky is a software development coach and programmer. After programming on Wall Street for nearly 10 years, in 1995 he founded San Francisco Bay Area-based Industrial Logic (http://industriallogic.com), a company that specializes in Extreme Programming (XP). Since 1999, Joshua has been programming and coaching on small, large, and distributed XP projects and teaching XP to people throughout the world. He regularly speaks about XP, has authored numerous XP and patterns-based articles, simulations, and games, and is hard at work on the forthcoming book, Refactoring to Patterns (http://industriallogic.com/xp/refactoring/).
Diana Larsen, a senior consultant with FutureWorks Consulting LLC, works with leaders of high-tech companies, engineering teams, and development groups to strengthen their ability to manage organizational changes (e.g., a migration to XP/Agile methods) and capture the learning from their development experiences for future projects. Based in Portland, OR, she has over twelve years’ experience with supporting effective interaction, planning, and review, as well as managing projects. Clients work with Diana because of her experience and expertise, but also because of her candor and her willingness to explore uncharted territory and innovative solutions. She is a co-sponsor of the upcoming Project Retrospectives Facilitators' Gathering and a speaker at conferences.
Beyond the Customer: Agile Business Practices for XP
Paul Hodgetts
Principal Consultant, AgileLogic, Inc.
phodgetts@agilelogic.com
Abstract. The role of the Customer/Product Owner in an Agile project is challenging. While much has been written about the Customer practices that address the interface with the Developers, a vital area of the product development process lies beyond that boundary in the business and product management domains. Customers are faced with many tough questions: Are the stories supporting the real business goals? Do the release plans address the needs of all of my users? How can I determine the value and priority of the stories? How can I turn high-level business and product strategies into stories and release plans? How can I make sure that I’m using my Developers’ time to produce the most business value? In this tutorial we will explore some specific Agile solutions that we’ve successfully applied to the business side of real-world projects in concert with XP. Through a combination of light lecture, hands-on exercises, and group discussions, tutorial participants will learn new techniques and practices to introduce and apply to their projects.
In this tutorial, we’ll cover:
• Applying Agile values and principles to create a relatively simple set of business practices that start with defining the corporate business objectives, and follow through to setting strategic product initiatives and developing specific product feature sets.
• Practices for identifying and resolving the often-competing interests of multiple stakeholders and user constituencies to create a single, cohesive product plan.
• Techniques for turning product initiatives and feature sets into appropriate stories and release plans, using lightweight essential use cases and breadth-and-depth breakdowns.
• Valuing stories based on tangible, testable business value, such as revenue or customer satisfaction, and applying that valuation to the release planning and story prioritization practices.
• Creating a seamless, traceable link from high-level strategic business objectives to Developers’ individual tasks. Imagine being able to point to a Developer and say "she is writing code right now that supports our corporate revenue growth initiative by adding $10,000 in monthly revenue."
• Providing test-like, metrics-based feedback to evaluate whether release plans are fulfilling the product strategies and, in turn, the business objectives.
Audience. This tutorial is intended for team members who are interested in expanding the adoption of Agile practices to the entire product development process.
Content Outline
Why Agile Business Practices?
The Scope of Product Development
The Core Values and Principles of Agility
Towards an Overall Agile Product Development Process
Concrete Feedback – Avoiding Guesswork and Pet Projects
Traceability in the Process
What To Do When There Isn’t One Customer? Working with Multiple Stakeholders
Generating Stories from Product Strategies and Features
Issues in Adopting Agile Business Practices
Presenter Bio and Contact Information
Paul Hodgetts is the founder and principal consultant of Agile Logic, a professional services company focusing on Agile development processes and enterprise development technologies. He has more than 19 years of experience in all aspects of software development, from in-the-trenches coding to technical project management, on a wide variety of projects from embedded real-time control to distributed Internet business applications. His recent focus has been on expanding the application of Agile values and principles to the business side of product development, and on the organizational change aspects of adopting Agile processes.
XP Release Planning and User Stories
Chet Hendrickson1, Ann Anderson2, and Ron Jeffries3
1 chet@hendricksonxp.com
2 aanderso@firstdatacorp.com
3 ronjeffries@acm.org
Abstract. The "Circle of Life" in XP is the cycle: On-Site Customer, Planning Game, Small Releases, Acceptance Tests. This cycle, repeated for every release, and for every iteration, is the heartbeat of an XP project. It provides the requirements that the programmers need in order to program, the development velocity and quality information that management needs in order to steer the project, and the connection between wish and reality that the customer needs in order to choose what to do next and when to release. We don’t call it the Circle of Life lightly.
Introduction
In this half-day tutorial, you’ll have the opportunity to experience the same project three ways: with very little information, with part of the information that XP can provide you, and with all the cost and value information that XP provides. You’ll experience the highs and lows of working on such a project, and learn hands-on why the XP planning process is so powerful within such a simple framework. The session is based on Ann, Chet, and Ron’s popular "small card" game, where we simulate an important, valuable project. This is the same experience that we have offered at XP Immersions, Smalltalk Solutions, XP200n, and various on-site courses. We use the release planning game to explore various project planning strategies. The game is played in small groups of approximately four to eight players. Both programmers and customers may play, in any combination. In each cycle of the game, the players plan and "implement" a product. Each time through, they learn some new lessons, enabling them to plan and implement better next time. Scoring is based upon each team’s total return on investment. Based on that experience, we’ll then discuss how teams like yours can get the cost and value estimates needed to run your project the XP way. The session has been rated as valuable and enjoyable by all those who have taken part in it. If you haven’t yet suffered the joys and agonies of the "Small Card" game, this is your chance!
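The game’s materials are not reproduced here, but the kind of planning it exercises can be sketched in code. One simple strategy a customer might try, once XP supplies cost and value estimates, is to fill a release budget greedily by value per unit of cost. This is our illustrative sketch, not the game’s rules; all story names and numbers are invented.

```java
import java.util.*;

public class ReleasePlan {
    // A story with a cost estimate (points) and a business value estimate.
    record Story(String name, int cost, int value) {}

    // Greedy selection by value-per-cost ratio, within a release budget.
    static List<Story> plan(List<Story> backlog, int budget) {
        List<Story> sorted = new ArrayList<>(backlog);
        sorted.sort(Comparator
                .comparingDouble((Story s) -> (double) s.value() / s.cost())
                .reversed());
        List<Story> chosen = new ArrayList<>();
        int spent = 0;
        for (Story s : sorted) {
            if (spent + s.cost() <= budget) {
                chosen.add(s);
                spent += s.cost();
            }
        }
        return chosen;
    }

    public static void main(String[] args) {
        List<Story> backlog = List.of(
                new Story("login", 3, 9),    // value/cost = 3.0
                new Story("reports", 5, 5),  // value/cost = 1.0
                new Story("search", 2, 8));  // value/cost = 4.0
        for (Story s : plan(backlog, 5)) System.out.println(s.name());
    }
}
```

With a budget of 5 points, "search" and "login" fit and "reports" does not; the game explores how strategies like this perform as estimates and priorities shift.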
Steering the Big Ship: Succeeding in Changing an Organization’s Practices
Lowell Lindstrom
Vice President, Object Mentor, Inc.
lindstrom@objectmentor.com
Abstract. “Welcome, welcome! We’re so glad you’re here. We just have so many problems here at Mock Corporation. Our software development has been a problem as long as we can remember. We really thought we had it licked with the last process change we made. It was kinda structured, but the plans looked great at the start of the project! Now, some programmers are experimenting with this XP and Agile stuff and they really like it, but lots of people think it won’t work.” Welcome to Mock Corp. In this tutorial, YOU are the coach, helping Mock Corp. overcome the challenges of changing the practices they use to create software. You’ll study Mock Corp.’s people, teams, and the organization as a whole. Common challenges in adopting new practices will be explored as Mock Corp. encounters them. Resistance behaviors will be identified, and tools to overcome them will be presented. Mock Corp. is a generalization of real people and companies that have undergone a transition to XP. Individuals from companies that have gone or are going through the transition will participate to tell their stories.
Introduction
Changing the way individuals work is a big challenge. Changing the way teams work is an even bigger challenge. Changing the way organizations work is harder still. Moving to an Extreme or Agile way of developing software tests the ability of individuals, teams, and organizations to change. The degree to which an XP or Agile transition is successful long term will primarily depend on this ability to change. This tutorial will teach the participants about change at different levels in the organization and techniques to make change successful. Special emphasis will be placed on organization-level change issues and the infrastructure required to make change stick.
Duration: half-day
Content Outline
Models of Organizational Change
Specific change issues caused by the transition to Extreme Programming or other Agile methods
Programming Team Changes
Customer Team Changes
Management Changes
Mock Corp.
Individual Change and Techniques (exercise)
Team Change and Techniques (exercise)
Organizational Change and Techniques (exercise)
Tales from the Trenches: Case Studies from Real World Transitions
Presenter Resume
Lowell Lindstrom has been involved in the teaching, coaching, and transitioning of numerous XP projects in organizations of all sizes and varying domains. He is currently responsible for business development and strategy at Object Mentor and coaches organizations on customer skills and the organizational change aspects of Extreme Programming. He has been professionally involved across the spectrum of the software business for 17 years. After many years of developing and managing software projects, he took on organizational change, marketing, and sales roles at a large technology product company. He has a BSCS from Northwestern University and a Masters in Management from Northwestern’s Kellogg GSM.
Scrum and Agile 101
Ken Schwaber
kenschwaber@verizon.net
Abstract. Agile processes are different. They not only increase productivity, they bring focus and pleasure back to systems development. This tutorial explains the underlying theory and practices of all agile processes, and then explains how they are implemented in Scrum. A case study is presented. An exercise is then conducted to give the tutorial attendees a feel for the flow, practices, and rules of Scrum. Tutorial attendees are organized into teams that collaborate with the customer to formulate an iteration, self-organize to identify the work in the iteration, report on progress and impediments during the iteration, and present the results of the iteration to the users.
Duration: 3 hr (half day)
Aims
To provide the audience with a description of Scrum, why it works, and how it works. To describe the underlying basis and theory of agile processes, including emergence, self-organization, collaboration, iterations, and incremental delivery. To provide a first-hand experience of being in a Scrum team, highlighting the different feel between agile and traditional processes.
Audience
Individuals who want to use Scrum’s high-productivity, agile methods for a project, and individuals who want to understand Scrum and agile practices.
Outline
Part 1: Scrum theory and practices discussion (75 min)
Lecture-style delivery
Speaker introduction
Defined control vs. empirical control
Incremental functionality, emergence, and self-organization
Scrum process overview
Product/system vision
What is the product backlog and who is the product owner?
What is a Scrum team?
How do you plan a Sprint (iteration)?
How does a team commit to a Sprint goal?
What is Sprint backlog?
What are daily Scrums?
How does a team balance its commitments within a fixed Sprint in light of emerging requirements, technology, and team dynamics?
What does it mean to abnormally terminate a Sprint?
What happens at the end of Sprint review?
Part 2: Case Study Overview (15 min)
Presentation of case study
Vision
Technology
Requirements
Risks
Assumptions
Part 3: Translation of Case Study to First Sprint (90 min)
Facilitated workshop
Synthesize vision into product goal
Develop product requirements list
Select first Sprint product backlog and define Sprint goal
Break into Scrum teams
Each team defines its Sprint backlog – self-organization
Each team has a daily Scrum to report progress
Presenter Resume
Ken Schwaber is one of the developers of the Scrum agile process and has extensively used agile processes over the last seven years. Ken is one of the founders of the AgileAlliance and helped set up the AgileAlliance organization. In 2001, Ken co-authored "Agile Software Development with Scrum" with Mike Beedle. With over thirty years of software development, management, and consulting experience, Ken is currently working with organizations to develop software with Scrum and a combination of Scrum and Extreme Programming, as well as helping the organizations plan and execute the required change management.
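The daily Scrum reports of progress feed a simple signal of whether the team will meet its Sprint goal. As a concrete illustration (our sketch, not part of Scrum’s definition; the numbers are invented), the remaining work on a Sprint backlog is just the day-by-day sum of the tasks’ re-estimated hours:

```java
import java.util.Arrays;

public class Burndown {
    // Remaining-work totals per day, computed from daily re-estimates of
    // each Sprint backlog task (hours). Rows: days; columns: tasks.
    static int[] remainingPerDay(int[][] estimatesByDay) {
        int[] totals = new int[estimatesByDay.length];
        for (int day = 0; day < estimatesByDay.length; day++) {
            for (int hours : estimatesByDay[day]) totals[day] += hours;
        }
        return totals;
    }

    public static void main(String[] args) {
        // Three tasks, re-estimated at each daily Scrum over four days.
        int[][] sprint = {
            {8, 6, 4},   // day 1
            {6, 6, 2},   // day 2
            {3, 5, 0},   // day 3
            {0, 2, 0},   // day 4
        };
        System.out.println(Arrays.toString(remainingPerDay(sprint)));
    }
}
```

Plotted over the Sprint, a falling sequence like this is the familiar burndown; a flat or rising one is an impediment made visible.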
How to Be a Coach
William Wake1 and Ron Jeffries2
1 WilliamWake@acm.org
2 RonJeffries@acm.org
Abstract. It’s difficult to start a new process, but a coach can make this easier. A coach is part developer and part manager. Coaches help a team stay on process and they help the team learn. A coach brings in some outside perspective to help a team see themselves more clearly. We’ll use a combination of lectures, games, and exercises to explore and practice skills that coaches (and team members!) can use.
Duration: Half-day
Aims
Give students:
• A deeper look at the coach’s role
• Tools to help communicate better
• Practice with charts, retrospectives, and other feedback tools
• Practice diagnosing team problems
Audience
Coaches, budding coaches, and developers on XP or agile teams.
Content Outline
The coach’s role
Team formation
Big Visible Charts
Retrospectives
Working yourself out of a job
Presenter Resumes
William C. Wake is a software coach, consultant, and teacher. He’s the inventor of the XP Programmer’s Cube, and the author of Extreme Programming Explored and the forthcoming Refactoring Workbook.
Ron Jeffries has been developing software since 1961, when he accidentally got a summer job at Strategic Air Command HQ, and they accidentally gave him a FORTRAN manual. He and his teams have built operating systems, language compilers, relational and set-theoretic database systems, manufacturing control, and applications software, producing about a half-billion dollars in revenue, and he wonders why he didn’t get any of it. For the past few years he has been learning, applying, and teaching the Extreme Programming discipline. Ron is the primary author of Extreme Programming Installed. Ron is a trainer and consultant at Object Mentor, Inc.
Sharpening the Axe for Test Driven Development
Michael Hill
Senior Consultant, Object Mentor, Inc.
hill@objectmentor.com
Abstract. Abraham Lincoln is famously (mis)quoted: “If you showed me a big tree and an axe and gave me eight hours, I’d spend the first six hours sharpening the axe.” Software shops all over the world are now practicing XP, and specifically Test Driven Development. The tutorial below sets out to accomplish three goals: 1) offer information on known axe-sharpening efforts in XPLand; 2) provide a demonstration of a small set of hand-rolled tools for TDD in real-world projects; 3) advocate the kind of deep laziness that gives making and refining tools its proper value in the framework of XP.
Duration and Target Audience. This is a half-day tutorial whose target is intermediate or advanced programmers who feel they are not getting maximum value from the TDD approach. Anyone who is trying TDD is welcome. Anyone who has understood the urgency of adopting TDD but who is still using crude and ineffectual tools should benefit. The presenters particularly welcome new adopters and those working with or leading them.
Outline. Note: The tutorial will be handled in true XP style. What follows is a release plan, and as the class develops, it will surely flex to meet the attendees’ needs. The real point of the tutorial is to provide a focused topic and bring TDD adopters face to face with an experienced practitioner for our mutual entertainment and edification. The presentation includes working programs, sample code, and a variety of tips, tricks, and techniques.
There are Axes and Axes: It’s Better Than Using A Plastic Knife
The xUnit Family
What a Dull xUnit Does, and How
The Three Pieces of All xUnits
A One-Day xUnit for Language X
What a Sharp xUnit Does, and How
A Very Sharp xUnit
Integrated xUnit
Deep Laziness: Preparing And Refining A Good Axe
Introducing Tool Smells
Learning Not To Type
Creating Inaccessible Engines
Making Your Tool Yours
Buy, Borrow, or Build?
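The tutorial’s own breakdown of xUnit internals is not reproduced here, but the flavor of a "one-day xUnit" can be suggested with a toy sketch (ours, not the tutorial’s): at minimum an xUnit needs a way to register tests, a way to assert, and a runner that catches failures and reports results.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A toy xUnit: tests are named Runnables, failures are signalled by
// assertion errors, and the runner counts and reports results.
public class TinyUnit {
    private final Map<String, Runnable> tests = new LinkedHashMap<>();
    private int passed, failed;

    public void addTest(String name, Runnable body) { tests.put(name, body); }

    public static void assertTrue(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }

    public String run() {
        for (Map.Entry<String, Runnable> t : tests.entrySet()) {
            try { t.getValue().run(); passed++; }
            catch (AssertionError e) { failed++; }
        }
        return passed + " passed, " + failed + " failed";
    }

    public static void main(String[] args) {
        TinyUnit suite = new TinyUnit();
        suite.addTest("addition", () -> assertTrue(2 + 2 == 4, "arithmetic"));
        suite.addTest("broken", () -> assertTrue(1 > 2, "deliberately failing"));
        System.out.println(suite.run());   // prints "1 passed, 1 failed"
    }
}
```

A real xUnit adds setUp/tearDown, failure messages, and reporting, but the three pieces above are the core that "sharpening" builds on.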
xAT, xVersion, xGUI, xProcess: Attacking The Rest Of The Forest
What xUnit Doesn’t (and Shouldn’t?) Do
Completing the Integration
Acceptance Testing Around the Clock
Tools for Writing Tests
Hard Questions
Open season on TDD and tools: the last hour of the tutorial will be focused on the questions and answers that the attendees offer. Real challenges from real applications are the order of the day.
Presenter’s Resume
Michael Hill has been testing first for three years, against a background of more than twenty years as an independent contractor. He is a senior mentor at Object Mentor, Inc., a job that has brought him into contact with dozens of different projects and platforms. He is presently at work on a book on Test Driven Development.
Pair Programming: Experience the Difference
Laurie Williams1 and Robert Kessler2
1 Assistant Professor, North Carolina State University
williams@csc.ncsu.edu
2 Professor, University of Utah
kessler@cs.utah.edu
Abstract. Pair programming is emerging as an important technique for developing higher quality code, faster. With pair programming, two software developers work on one computer, collaborating on the same design, algorithm, code, or test. This tutorial examines pair programming research results and anecdotal experiences of programmers who have transitioned to pair programming. It will discuss what works and what doesn’t and will also explain techniques for fostering support in making a transition to pair programming – support from management and support from peers. Hands-on activities will be used to demonstrate pair programming benefits.
Objective: Participants will experience the difference between working alone and working in pairs. They will understand the research results that show pair programming works, learn how to pair program, what not to do when pairing, and how to transition to pair programming.
Level: Beginner
Attendee Background: This tutorial is targeted toward software developers and technical software development managers who are interested in transitioning to pair programming.
Presentation Format: About half of the tutorial time will be presentation-based. The remaining time will be spent on activities and discussion.
Content Outline for Half-Day Tutorial (3 Contact Hours):
I. Welcome and Tutorial Objectives (5 min.)
II. Activity I: Individual design (25 min.)
III. Research Results in Pair Programming Presentation (30 min.)
IV. Activity II: Individuals working on a team (25 min.)
V. BREAK – 20 minutes
VI. Adoption of Pair Programming Presentation (20 min.)
VII. Activity III: Pairs rotating around a team (25 min.)
VIII. Pair Programming Implementation Items Presentation (20 min.)
IX. Summary and Conclusion (10 min.)
Presenter Resumes: Dr. Laurie Williams is an assistant professor at North Carolina State University. In 2000, she completed her dissertation, which demonstrated statistically that pair programmers were able to produce higher-quality products in essentially half the time when compared to individual programmers. Prior to her recent academic career, Laurie worked at IBM for nine years.
Dr. Robert Kessler is a full professor at the University of Utah. He has a BS, MS, and PhD in Computer Science from the University of Utah. He has founded several companies and is on the board of several others.
Dr. Williams and Dr. Kessler coauthored a book entitled Pair Programming Illuminated, to be published by Addison-Wesley in July 2002.
How to Start an XP Project: The Initial Phase
Holger Breitling and Martin Lippert
Research Assistants, University of Hamburg
{breitling,lippert}@jwam.org
Abstract. Within Extreme Programming, the initial phase of a project is the exploration phase. This phase should establish a common view of the system to be built and provide the base for what XP calls “productionizing” – the full-fledged XP development iterations. Therefore this phase is of crucial importance to the whole project. While there is a lot of documented experience with the XP process itself, little has been said about the initial phase. This tutorial presents a number of best practices and experiences to structure and master the exploration phase. They will guide the team from the kick-off meeting to the first implementation of the system skeleton. The techniques presented have been adapted to XP from experience gained in twelve years of consulting in professional object-oriented development projects. They have been successfully applied in numerous XP projects, ranging from high-pressure short-term to large-scale long-term, from finance to the health domain, and from greenfield development to legacy system replacement.
Tutorial Objective. After the tutorial the participants will have a clearer idea of which concepts are helpful and which pitfalls should be avoided in starting an XP project. They will be equipped with a number of practice-proven techniques for gaining first insights into the problem domain and finding a system metaphor. They will have learned to use spike solutions and prototypes effectively. Finally, they will have a better idea of when to end the exploration phase and move on to the well-described XP development iterations.
Level. Intermediate
Attendee Background. The tutorial is targeted at (potential) XP project managers and experienced developers interested in the Extreme Programming process. It is assumed that the audience has some familiarity with the basic concepts of Extreme Programming or similar agile processes. The typical attendee is thinking about starting an XP project or has already faced the challenges of the exploration phase.
Duration. Half Day
Presenters
Holger Breitling is a research assistant at the University of Hamburg and a professional software architect and consultant at APCON Workplace Solutions. He is a senior architect of the JWAM framework, which is developed applying most of the XP techniques. He has several years’ experience with Extreme Programming techniques, working on several projects, and is currently acting as a project coach and trainer in this domain.
Martin Lippert is a research assistant at the University of Hamburg and a professional software architect and consultant at APCON Workplace Solutions. He has several years’ experience with XP techniques and XP project coaching in various domains and has given a number of talks, tutorials and demonstrations (e.g. at ICSE, XP, OOPSLA, HICSS and ICSTest), including XP tutorials at ECOOP 2001 and 2002, OOP 2002 and ICSTest 2002. He is a member of the XP 2002 program committee. Among his publications are articles for “Extreme Programming Examined” and “Extreme Programming Perspectives”, and he authored the book “Extreme Programming in Action”, due to be published by Wiley in July 2002.
Effective Java Testing Strategies
John Goodsen
President, Principal Consultant, RADSoft
jgoodsen@radsoft.com
Abstract. This tutorial will address common problems and introduce several mechanisms that will increase the effectiveness and speed of writing Java unit tests. In this tutorial, attendees will learn (a) how to effectively use the JUnit testing framework, (b) several patterns and practices of implementing mock behavior in Java and (c) techniques for identifying and structuring unit tests that are effective, communicate clearly what is being tested and are easy to understand and change by other members of a software team.
Tutorial Duration, Aims and Audience. This is a half-day tutorial that aims to introduce attendees to writing effective Java unit tests and thereby to increase the speed at which they develop software in a test-first environment like XP.
Tutorial Content Outline
1. Introduction to JUnit
2. What do we test?
   a. Unit Tests vs. Functional Tests
   b. Test-first vs. Code-first Development
   c. Design by Contract and Unit Tests
   d. Using Pre/Post Conditions to Guide Test Case Creation
3. Staying focused and fast with Mock Objects
   a. Mock Objects
   b. Stub Implementations
   c. Class Isolation with Mocks
   d. Mocking to Reduce Pain
   e. Effective Mock Object Implementation Patterns
      i. Simple Mocks
      ii. Data Mocks
      iii. Mocks with Memory
   f. Mock Factory pattern
   g. Overriding Mock Method pattern
   h. Proper use of setUp() and tearDown() for Mock Object initialization
   i. Mocking Java APIs
      i. Mocking a Socket
      ii. Impossible Mock Situations
      iii. Abstracting Around Impossible Mock Situations
      iv. Using Semaphores to coordinate threads and their tests
4. Fine Tuning Your Testing
   a. Common Test Case Refactorings
   b. Substitutability Abstract Test Cases
   c. Achieving High Code Coverage
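To make the “Mocks with Memory” item in the outline concrete, here is a minimal sketch of a hand-rolled recording mock in plain Java. The `MailService`, `MockMailService` and `AlertPublisher` names are invented for this illustration; the tutorial itself builds on JUnit, but the pattern needs nothing beyond the JDK.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical collaborator we want to isolate the class under test from.
interface MailService {
    void send(String recipient, String body);
}

// A "mock with memory": it records every call so a test can inspect what
// the class under test actually did, without touching a real mail server.
class MockMailService implements MailService {
    private final List<String> sent = new ArrayList<>();

    public void send(String recipient, String body) {
        sent.add(recipient + ": " + body);
    }

    public int sentCount() { return sent.size(); }
    public String lastMessage() { return sent.get(sent.size() - 1); }
}

// Class under test; it depends only on the interface, so a mock can be injected.
class AlertPublisher {
    private final MailService mail;

    AlertPublisher(MailService mail) { this.mail = mail; }

    void publish(String alert) {
        mail.send("oncall", "ALERT: " + alert);
    }
}

public class MockDemo {
    public static void main(String[] args) {
        MockMailService mock = new MockMailService();
        new AlertPublisher(mock).publish("disk full");
        // The test asserts against the mock's memory instead of a live system.
        if (mock.sentCount() != 1)
            throw new AssertionError("expected one message");
        if (!mock.lastMessage().equals("oncall: ALERT: disk full"))
            throw new AssertionError("unexpected message: " + mock.lastMessage());
        System.out.println("mock test passed");
    }
}
```

In JUnit the same checks would live in a test method, with the mock created in `setUp()`; the structure of the mock itself is unchanged.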
Instructor Short Bio/Resume Mr. Goodsen is the President and founder of RADSoft, a software engineering consulting firm specializing in rapid software development techniques since 1996. He has worked on a variety of large-scale software projects over the last fifteen years.
Test Drive for Testers: What, When, and How Testers Do for XP Teams
Lisa Crispin
lisacrispin@att.net
Abstract. This tutorial shows testers, or anyone wearing the tester hat on an XP team, how testers contribute to the project, including what testers should do, when they should do it, and how they should do it. You’ll do exercises that show you how to either work on an XP team as a tester yourself, or work productively with a tester on your team.
Audience. XP newbies and veterans, testers, programmers, coaches, customers, analysts and managers – anyone who might be expected to help with some aspect of acceptance testing on an XP team, and anyone who wants to help their XP team maintain a focus on quality.
Duration. Half-day
Description. We’ll take a trip through an iteration of an XP project step by step and show what goals to reach for, which activities to engage in, and some helpful techniques for testers to use. The exercises are built around an XP project to develop a simple web-based tracking application.
Tutorial Outline
1. What testers do during release planning and story writing:
   a. How to identify hidden, questionable and incorrect assumptions
   b. How to define acceptance tests to make assumptions explicit
   c. How to accurately estimate time for acceptance test tasks
   d. How to enable accurate story estimates
   e. How to ask questions to identify potential problems
2. What testers do during iteration planning:
   a. How to help the team think of all tasks needed to complete a story, including those relating to infrastructure, packaging, environment, and functional and acceptance testing
   b. How to promote understanding between the customers and the development team
   c. How to break out and accurately estimate tasks related to acceptance testing
3. What testers do during the actual iteration:
   a. How your team can define effective acceptance tests that are detailed, but not too detailed, accounting for unusual and external events as appropriate
   b. How to document tests so they are easily understood and changed by the customer
   c. How to design and code effective and maintainable automated tests
Presenter’s Resume
Lisa Crispin has worked as a tester on Extreme Programming teams for one and a half years. Her article “Extreme Rules of the Road: How an XP Tester Can Steer a Project Toward Success” appeared in the July 2000 issue of STQE Magazine. Her presentation “The Need for Speed: Automating Acceptance Tests in an Extreme Programming Environment” won Best Presentation at Quality Week Europe in 2000. Her papers “Testing in the Fast Lane: Acceptance Test Automation in an Extreme Programming Environment” and “Is Quality Negotiable?” will be published in the collection Extreme Programming Perspectives from Addison-Wesley. She is co-writing the book Testing for Extreme Programming, to be published by Addison-Wesley in October 2002. Her presentations and seminars on testing for Extreme Programming in 2001 included “XP Days” in Zurich, Switzerland, XP Universe in Raleigh, and STAR West; in 2002, the Software Test Automation Conference in San Jose and the Rocky Mountain Software Symposium in Denver.
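Point (b) of the iteration outline above – tests the customer can read and change – is often approached with table-driven tests. The sketch below is a hypothetical illustration, not part of the tutorial materials: each row of a plain-text table drives one check against a trivial in-memory tracking model (the `Tracker` class and the table format are both invented here, standing in for the web-based tracking application).

```java
import java.util.HashMap;
import java.util.Map;

// Trivial in-memory stand-in for the web-based tracking application.
class Tracker {
    private final Map<String, String> status = new HashMap<>();

    void setStatus(String item, String s) { status.put(item, s); }
    String getStatus(String item) { return status.getOrDefault(item, "unknown"); }
}

public class AcceptanceTableDemo {
    public static void main(String[] args) {
        // Customer-editable table: action | item | value | expected status.
        String[] rows = {
            "set   | bug-17 | open   | open",
            "set   | bug-17 | closed | closed",
            "check | bug-99 |        | unknown",
        };
        Tracker tracker = new Tracker();
        for (String row : rows) {
            String[] f = row.split("\\|");
            String action = f[0].trim(), item = f[1].trim(),
                   value = f[2].trim(), expected = f[3].trim();
            if (action.equals("set")) tracker.setStatus(item, value);
            String actual = tracker.getStatus(item);
            if (!actual.equals(expected))
                throw new AssertionError(item + ": expected " + expected
                        + ", got " + actual);
        }
        System.out.println("all table rows passed");
    }
}
```

The point of the design is that the table, not the Java code, is what the customer edits: adding a scenario means adding a row, not writing a test method.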
Scaling Agile Processes: Agile Software Development in Large Projects
Jutta Eckstein
Independent Consultant
jeckstein@acm.org
Abstract. One saying goes: if you have a hundred people on a development team, get rid of at least eighty of them and keep the top twenty (or fewer) – the chances of project success will rise significantly. But maybe you don’t even have twenty top ones, and/or the company just has these hundred people sitting around. It seems as if the only places where large projects with huge teams really work are projects where everything is formalized, the requirements are fixed and, most importantly, they don’t change over time. A detailed plan can be set up, and every successive action blindly follows the plan. Examples are defense projects or projects with a similar structure, such as in airlines or nuclear power plants. We as software engineers tend to question software engineering in the large, not only because most of the agile processes claim to work only for small teams, but also because most of the failed projects are really large ones. (Well, maybe nobody talks about failed small projects.) However, there are more than enough projects that are large in some sense. So the question arises of how to use aspects of agile software development in large projects. We do not want to focus on every aspect of agile processes, but on those that we have found to be most different in large projects. Some things have to be implemented differently, because at a certain project size things no longer work out the normal way. Other differences are based on problems that pop up especially in large teams – or rather, that would not be problems at all in small teams. This tutorial is based on our experience coordinating – so far successfully – a large (currently 160-person) project. Although we are still in the process of learning, we would like to share our experiences of how agile processes can serve large projects.
And besides – chances are high that by the time XP/Agile Universe 2002 is held, we will have gained even more valuable experience.
Duration: Half-day tutorial
Aims:
– Ideas on how to overcome the obstacles of agile software development in the large
– Know what the biggest problems of agile software development in the large are
– Know how agile software development in the large could possibly work
Audience
– Those who tried to use agile methodologies in large projects and failed.
– Those who tried to use agile methodologies in large projects and succeeded.
– Those who didn’t try agile methodologies in large projects but would like to do so.
– Proponents of the linear (waterfall) model who think agile processes suck anyway.
– Proponents of agile processes who think an agile process would never work in the large.
Process. We want to share our experiences as well as hear the audience’s experiences and questions. We will probably use interactive elements to work out patterns of agile processes in the large.
Content Outline
– Large Projects and their environment
– Getting Started
– Communication
– Customer involvement
– Difficulties in Planning
– Integration
About the Author
Jutta Eckstein is an independent consultant and trainer from Munich, Germany. Her expertise in agile processes is based on over ten years’ experience in developing object-oriented applications. Besides engineering software she has been designing and teaching OT courses in industry. Having completed a course of teacher training and led many ‘train the trainer’ programs in industry, she also focuses on techniques that help teach OT, and she is a main lead in the pedagogical patterns project. She has presented work in her main areas at OOPSLA, OT and EuroPLoP. She is a member of the board of Hillside Europe e.V., the association for advancing expert knowledge (in the shape of patterns) about practice-proven techniques for the analysis, architecture and programming of software systems, as well as for the formation of organizational and team structures for software development. She is furthermore a member of the program committees of XP 2002, XP/Agile Universe 2002, EuroPLoP 2002, OT 2002 and OOPSLA 2002.
Applying XP Tools to J2EE for the Extreme Programming Universe
Richard Hightower
CTO, Triveratech
rhightower@triveratech.com
Abstract. Applying XP Tools to J2EE for Extreme Programming is a half-day technical tutorial that focuses on using open source software tools to apply XP to Enterprise Java development, specifically for testing web components. The session will mainly cover automating the build and integration process and automating the testing process. Essentially this tutorial is a cookbook on how to implement the XP methodology in an Enterprise Java shop. XP is the methodology – the philosophy; tools such as Ant and JUnit help developers realize that philosophy in a J2EE environment. XP comprises much more than testing, integration and build tools, but such tools are one way that XP materializes in the development process. This tutorial covers the day-to-day application of XP tools – not the full XP process and methodology. Specifically, the session will cover testing web components with JUnit, Cactus, JUnitPerf, HttpUnit, Ant, and JMeter.
Aims and Intended Audience. Many books and papers on XP focus on theory and methodology. Fewer publications, however, focus on practical application in a Java enterprise environment. XP is extremely popular, yet many development shops fail to instantiate its abstract concepts in their Java development process. This tutorial addresses the gap between XP theory and real-world application of automated testing and continuous integration in a J2EE development shop. The session is geared toward developers using J2EE and XP. This includes developers who want ways to perform some level of testing and automated builds, as well as developers for whom, because of XP, testing plays a central role in their development. The tutorial has been designed for an audience experienced in Java, as well as for XP programmers who want to ease the development of Java web applications. These developers would be enterprise web application specialists with a grasp of Enterprise Java (J2EE) technologies – EJB, JDBC, JSP and Servlets – but they need not be J2EE experts.
Outline. Each major concept will have supporting sample code. Code samples provided upon request.
1. Introduction
2. Complexities of J2EE development
3. Continuous integration with Ant
4. Unit Testing with JUnit
5. Load Testing with JUnitPerf and JMeter
About the Presenter
Rick Hightower, CTO of Triveratech, has over a decade of experience as a software developer. He has helped development teams adopt new processes such as Extreme Programming and new technologies such as J2EE. Rick’s publications include Java Tools for eXtreme Programming, which covers deploying and testing J2EE projects (published by John Wiley, 2002); contributions to Java Distributed Objects (published by Sams); and several articles in Java Developer’s Journal. Rick also wrote a three-part tutorial series on EJB CMP/CMR for IBM developerWorks (published in March 2002). Currently, Rick is involved in course development, mentoring, consulting and training on Java, J2EE, Web Services and XP projects. In his spare time, he does J2EE development.
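The load-testing item above builds on JUnitPerf, which expresses performance requirements by decorating ordinary JUnit tests with timed and load decorators. The following self-contained sketch shows the underlying idea without the library: run an operation from several concurrent simulated users and fail if the whole run exceeds a time budget. The operation and the numbers are invented for illustration; a real JUnitPerf test would wrap an existing JUnit test instead.

```java
public class LoadTestSketch {

    // Stand-in for the operation under test (e.g. one servlet request).
    static void operation() {
        long sum = 0;
        for (int i = 0; i < 100_000; i++) sum += i;
        if (sum < 0) throw new IllegalStateException("unreachable");
    }

    // Run `users` concurrent threads, each performing the operation once,
    // and return the total elapsed wall-clock time in milliseconds.
    static long timedRun(int users) throws InterruptedException {
        Thread[] threads = new Thread[users];
        long start = System.nanoTime();
        for (int i = 0; i < users; i++) {
            threads[i] = new Thread(LoadTestSketch::operation);
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        long elapsed = timedRun(10);
        // The performance assertion: 10 simulated users within 2 seconds.
        if (elapsed > 2000)
            throw new AssertionError("load test exceeded budget: " + elapsed + " ms");
        System.out.println("10 users finished in " + elapsed + " ms");
    }
}
```

Making the budget an assertion is what turns a timing measurement into a repeatable test that a continuous-integration build (e.g. under Ant) can fail on.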
Distributed Pair Programming
David Stotts (University of North Carolina at Chapel Hill, stotts@cs.unc.edu)
and Laurie Williams (North Carolina State University, williams@csc.ncsu.edu)
Agile methodologies stress the need for close physical proximity of team members. However, circumstances may prevent a team from working in close physical proximity: for example, a company or a project may have development teams physically distributed over multiple locations. As a result, a growing number of companies are looking at adapting agile methodologies for use in a distributed environment. The aim of this workshop is to bring together practitioners who have experience with distributed pair programming. In addition, the workshop will be especially valuable to participants who are involved in geographically distributed development activity and are interested in applying distributed pair programming. We further want to incorporate experiences and research from Computer Supported Collaborative Work (CSCW) environments. Overall, the workshop will discuss how to make this type of distributed work as effective as possible and will help guide future research.
Agile Acceptance Testing
Organizers
Bret Pettichord (Pettichord Consulting LLC, bret@pettichord.com)
and Brian Marick (Testing Foundations, marick@testing.com)
Unit tests are tests of code modules by the programmers who created them. Acceptance tests are tests of software functionality from a customer, or user, perspective. Both types of tests are important. Tools and practices to support and encourage unit testing on Extreme Programming and other agile projects are well-developed and well-documented. This workshop discusses and develops practices for supporting acceptance testing on agile projects. (Some refer to acceptance testing as system or functional testing. GUI testing is a common approach.) How can acceptance testing be planned and executed on agile projects? What skills are required? Who should do this testing? And how should they interact with other members of the team? How can agile projects be managed to support the definition and automation of these tests? The purpose of this workshop is to understand the needs for system testing on agile projects and to survey the various approaches that participants have used or observed on their projects. Several themes provide the basis for the workshop discussion:
– Testing benefits from multiple perspectives. There is no best role or skill-set for testing. Rather, testing improves when multiple roles are involved in testing.
– There is often a need for dedicated testers on agile projects, especially on larger ones.
– Some testing must be done with the goal of finding problems, rather than verifying that the system meets requirements.
– Traditional testing methodologies do not suit agile projects well. They require too much planning, have trouble adapting to change, and are confounded by light-weight specifications.
These themes guide the discussion of the different experiences workshop participants have had with agile acceptance testing. The organizers will publish a summary of the workshop discussions at http://www.pettichord.com/agile_workshop.html.
XP Fest
Nathaniel Talbott and Duff O’Melia
RoleModel Software
{ntalbott,domelia}@rolemodelsoftware.com
Abstract. One of the core values of XP is feedback, which involves going beyond just talking about something to actually doing it as quickly as possible, because only then do you know if it makes any sense. So, instead of just coming to XP/Agile Universe and talking about XP, why not do some XP, too? It seems fitting to go beyond just talking about abstract concepts to actually implementing them in a simple format. In the style of OOPSLA’s AnalysisFest, DesignFest and CodeFest, with their waterfall view of software development, you can participate in XPFest at XP/Agile Universe.
Goals
• Learn XP by doing.
• Be exposed to different views of XP.
• Try out XP if you haven’t tried it yet.
• Compare how different teams adapt XP.
• Have fun doing something you enjoy: programming using XP!
Empirical Evaluation of Agile Processes
Organizers
Grigori Melnik (Southern Alberta Institute of Technology, [email protected])
Laurie Williams (North Carolina State University, [email protected])
Adam Geras (University of Calgary, [email protected])
Presently, there is a lot of anecdotal evidence and very little empirical, validated data on the effectiveness of agile methods. Much of the evidence is based on the stories and preferences of those who practice them. Imagine the benefit of knowing that an XP project expends more effort understanding software requirements than does a team using a typical traditional, or waterfall, approach. Imagine the benefit of being able to predict that, for a particular combination of customer, product, and project team, a small bit of modeling will benefit the team more than a strict XP implementation. The world we seek is one that demystifies the success of agile methods and supports everyday practitioners in applying the right method at the right time. We believe measurements are key to making these decisions and to making agile methods more accessible. The goal of this workshop is to work towards establishing a framework for 1) controlling an agile software process; 2) determining the situations in which applying agile methods would be beneficial; and 3) planning and budgeting for a software development effort that involves agile methods. The intent is to discuss the current state of ongoing empirical research efforts in agile methods and to work towards establishing an agile measurement framework. The workshop participants are expected to share a keen interest in finding practical approaches to measuring the effectiveness of agile software techniques in various project situations (both industrial and academic). Examples of the thematic questions the workshop participants will discuss:
– If agile processes favor “individuals and interactions” over “processes and tools”, then why do we need “control”?
– What key areas/aspects of agile methods should we be studying empirically?
– What types of measures are useful? How do we validate a given measure?
– Is measurement really key to anything in this context?
– What existing software measures satisfy our data requirements?
– How do we design experiments? What is the process of collecting empirical data?
– How does a project remain agile while collecting data?
Are Testers eXtinct? How Can Testers Contribute to XP Teams?
Moderator
Ken Auer, RoleModel Software
kauer@rolemodelsoft.com
Panelists
Ron Jeffries1, Jeff Canna2, Glen B. Alleman3, Lisa Crispin4, and Janet Gregory5
1 Object Mentor, ronjeffries@acm.org, www.objectmentor.com
Abstract. There are lots of success stories for XP teams where testers made a significant contribution. There are lots of success stories for XP teams that didn’t have an official tester. And there are some horror stories of so-called XP teams who didn’t have testers and didn’t do a scrap of acceptance testing. As more organizations try XP, there are a lot of teams out there wondering if they need testers, and a lot of testers who are suddenly members of XP teams wondering what the heck they are supposed to do. This panel is relevant to anyone on an XP team, to QA/test professionals who are interested in doing XP, and to anyone in a software development role who is considering using XP. The current Extreme Programming publications don’t really define how an XP tester can contribute to the project. What should testers do? When should they do it? How should they do it? Is there even a place for testing and quality assurance professionals on XP teams? Can any tester be an XP tester? See http://www.testers.org for more information.
Format. Debate / discussion. There are a lot of different viewpoints on this subject, as we’ve seen in discussions on the agile-testing mailing list. Each panelist will give a two minute position statement. Then we’ll address questions taken ahead of time, and questions from the floor.
XP – Beyond Limitations?
Moderators
Steven Fraser, Independent Consultant, sdfraser@acm.org
Rachel Reinitz, Reinitz Consulting, rachel@reinitzconsulting.com
Panelists
Ken Auer (RoleModel Software, kauer@rolemodelsoft.com), Barry Boehm (University of Southern California, boehm@sunset.usc.edu), Ward Cunningham (Cunningham & Cunningham, ward@c2.com), and Rob Mee (robmee@hotmail.com)
Abstract. As eXtreme Programming grows in popularity and acceptance within the software development community, interesting questions arise as to whether XP practices can be applied effectively beyond their assumed limitations. For example, as team size and system complexity increase, is XP automatically out of the picture, or are there mechanisms now available to scale XP? Panelists will discuss and debate the threshold of XP suitability based on organization/infrastructure characteristics, project complexity, team size, legacy issues and other relevant topics.
Session Format. The session will be pair-moderated by Fraser and Reinitz in a “talk-show” style. Panelists will have a brief opportunity to state their positions (the panel will NOT be a collection of mini-presentations), after which questions will be taken from the floor using roving mikes, “instant polls” and other interactive techniques. This format is designed to ensure maximum participation by both the panelists and the audience – and to ensure the relevancy of the discussion. The session will close with a brief summary.
Extreme Fishbowl
Directed by: Zhon Johansen, Symantec, zhon@axent.com
with celebrity guest programmers and commentators including: Ken Auer1, Brian Button2, Alistair Cockburn3, James Grenning2, Kay Johansen4, and Duff O’Melia5
1 RoleModel Software, kauer@rolemodel.com
2 ObjectMentor, {bbutton,grenning}@objectmentor.com
3 Humans & Technology, ACockburn@aol.com
4 kjohansen@verio.net
5 RoleModel Software, domelia@rolemodelsoftware.com
Abstract. Attendees at Extreme Fishbowl will see an XP team in action. As the XP fishbowl team is busy creating software, a small group of "experts" will be commentating on the activities of the programming team. Members of the audience will be able to participate by joining the "fishbowl" team. Participants will see/participate in spontaneous XP programming including refactoring, pair programming, test driven design, and team communication. See http://www.xputah.org/cgi-bin/wiki?ExtremeFishbowlAtXpUniverse for more information.
Agile Experiences
Moderator: Dave Thomas
Bedarra Corporation, Carleton University and University of Queensland
dave@bedarra.com
Panelists: John Manzo1, Narti Kitiyakara2, Russell Stay3, and Aldo Dagnino4
1 Geneer, http://www.geneer.com
2 NOLA Computer Services, nkitiyakara@nolacom.com
3 Symantec, rstay@symantec.com
4 ABB, aldo.dagnino@us.abb.com
Abstract. The proponents of various Agile Methodologies often share anecdotal evidence about how their favorite methodologies work. Come to this panel to hear first-hand experiences of those who have applied these methodologies in their organizations.
Author Index
Airth, Alan 254
Alleman, Glen B. 70, 287
Anderson, Ann 263
Antley, Angus 131
Auer, Ken 287, 288, 289
Ayoob, Azeem 100
Baarman, Matthew 100
Baheti, Prashant 208
Bankston, Arlen 251
Basili, Vic 197
Beck, Kent 239
Boehm, Barry 197, 288
Bowers, Jason 100
Breitling, Holger 273
Button, Brian 289
Canna, Jeff 287
Caristi, James 240
Cockburn, Alistair 289
Costa, Patricia 197
Crispin, Lisa 277, 287
Cunningham, Ward 288
Dagnino, Aldo 290
Dangle, Kathleen 197
Diaz Herrera, Jorge 239
Eckstein, Jutta 279
Fowler, Martin 256
Fraser, Steven 288
Gabriel, Richard P. 237, 239
Ganesan, Prasanth 221
Gehringer, Edward 208
Geras, Adam 286
Goodsen, John 275
Gregory, Janet 287
Grenning, James 253, 289
Hendrickson, Chet 263
Henninger, Scott 33
Hericko, Marjan 174
Hightower, Richard 281
Hill, Michael 269
Hodgetts, Paul 261
Humphrey, Watts 239
Ivaturi, Aditya 33
Jain, Apurva 153
Jalis, Asim 122
Jeffries, Ron 263, 268, 287
Johansen, Kay 52, 289
Johansen, Zhon 289
Kerievsky, Joshua 259
Kessler, Robert 271
Kitiyakara, Narti 112, 290
Kleb, William L. 89
Krebs, William 60
Larsen, Diana 259
Lindsey, Mark 131
Lindstrom, Lowell 264
Lindvall, Mikael 197
Lippert, Martin 273
Manzo, John 290
Marick, Brian 284
Maurer, Frank 13, 240, 241
May, John 100
McCracken, Michael 239
Mee, Rob 288
Melander, Erik 100
Melnik, Grigori 241, 286
Miller, Roy W. 231
Naveda, J. Fernando 239
Newkirk, James 144
Nuli, Krishna 33
O’Melia, Duff 285, 289
Patton, Jeff 1
Perkins, Anthony 52
Pettichord, Bret 284
Ramachandran, Vinay 166
Rasmusson, Jonathan 45
Reifer, Donald J. 185
Reinitz, Rachel 288
Rettig, Michael 240
Rostaher, Matevz 174
Sadalage, Pramod 257
Schuh, Peter 257
Schwaber, Ken 266
Shaw, Steve 23
Shukla, Anuja 166
Srinivasa, Gopal 221
Stay, Russell 290
Steinberg, Daniel 238
Stotts, David 131, 208, 283
Talbott, Nathaniel 285
Tesoriero, Roseanne 197
Thirunavukkaras, Ashok 33
Thomas, Dave 290
Thomas, Kuryan 251
Turner, Richard 153
Wake, William 268
West, Dave 239
Williams, Laurie 197, 271, 283, 286
Wood, William A. 89
Zelkowitz, Marvin 197