
In an IT world in which there are differently sized projects, with different
applications, differently skilled practitioners, and onsite, offsite, and offshore
development teams, it is impossible for there to be a one-size-fits-all agile
development and testing approach. This book provides practical guidance for
professionals, practitioners, and researchers faced with creating and rolling out
their own agile testing processes. In addition to descriptions of the prominent
agile methods, the book provides twenty real-world case studies of practitioners
using agile methods and draws upon their experiences to help you populate your own
agile method; whether yours is a small, medium, large, offsite, or even offshore
project, this book provides personalized guidance on the agile best practices
from which to choose to create your own effective and efficient agile method.

John Watkins has more than thirty years of experience in the field of software
development, with some twenty-five years in the field of software testing. During
his career, John has been involved at all levels and phases of testing and
has provided high-level test process consultancy, training, and mentoring to
numerous blue chip companies.
He is both a Chartered IT Professional and a Fellow of the British Computer
Society, where he is an active member of the Specialist Group in Software
Testing (SIGiST), previously serving on committees of the Intellect Testing
Group (representing the U.K. technology industry) and the Smalltalk User Group.
He is author of Testing IT: An Off-the-Shelf Software Testing Process (Cambridge
University Press, 2001) and currently works for IBM's software group.
Agile Testing:
How to Succeed in an Extreme Testing Environment

Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore,
São Paulo, Delhi, Dubai, Tokyo

Cambridge University Press
The Edinburgh Building, Cambridge CB2 8RU, UK

Published in the United States of America by Cambridge University Press, New York

Information on this title: www.cambridge.org/9780521191814
© John Watkins 2009

This publication is in copyright. Subject to statutory exception and to the
provisions of relevant collective licensing agreements, no reproduction of any part
may take place without the written permission of Cambridge University Press.
First published in print format 2009

ISBN-13 978-0-511-59546-2 eBook (EBL)

ISBN-13 978-0-521-19181-4 Hardback

ISBN-13 978-0-521-72687-0 Paperback

Cambridge University Press has no responsibility for the persistence or accuracy
of URLs for external or third-party internet websites referred to in this publication,
and does not guarantee that any content on such websites is, or will remain,
accurate or appropriate.
"To my Father, My Methodical Role-Model"

Foreword by Bob Bartlett page xi
Acknowledgments xiii

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Why Agile? 1
1.2 Suggestions on How to Read This Book 3


2 Old-School Development and Testing . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.1 Introduction 7
2.2 So, What Is Process? 7
2.3 Waterfall 8
2.4 Spiral 9
2.5 Iterative 10
2.6 Traditional Elements of Test Process 13
2.7 Summary 16

3 Agile Development and Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.1 Introduction 18
3.2 Rapid Application Development 19
3.3 Extreme Programming 20
3.4 The Dynamic Systems Development Method 21
3.5 Scrum 23
3.6 Other Agile Methods 24
3.7 Summary 27



4 From Waterfall to Evolutionary Development and Test . . . . . . . . . . . . . . . 31
Tom Gilb and Trond Johansen

5 How to Test a System That Is Never Finished . . . . . . . . . . . . . . . . . . . . . 37
Nick Sewell

6 Implementing an Agile Testing Approach . . . . . . . . . . . . . . . . . . . . . . . 44
Graham Thomas

7 Agile Testing in a Remote or Virtual Desktop Environment . . . . . . . . . . . . . 49
Michael G. Norman

8 Testing a Derivatives Trading System in an Uncooperative Environment . . . . . 53
Nick Denning

9 A Mixed Approach to System Development and Testing: Parallel Agile and
Waterfall Approach Streams within a Single Project . . . . . . . . . . . . . . . . . 62
Geoff Thompson

10 Agile Migration and Testing of a Large-Scale Financial System . . . . . . . . . . 66
Howard Knowles

11 Agile Testing with Mock Objects: A CAST-Based Approach . . . . . . . . . . . . . 72
Colin Cassidy

12 Agile Testing – Learning from Your Own Mistakes . . . . . . . . . . . . . . . . . . 81
Martin Phillips

13 Agile: The Emperor's New Test Plan? . . . . . . . . . . . . . . . . . . . . . . . . . 86
Stephen K. Allott

14 The Power of Continuous Integration Builds and Agile Development . . . . . . . 93
James Wilson

15 The Payoffs and Perils of Offshored Agile Projects . . . . . . . . . . . . . . . . . 103
Peter Kingston

16 The Basic Rules of Quality and Management Still Apply to Agile . . . . . . . . . 115
Richard Warden

17 Test-Infecting a Development Team . . . . . . . . . . . . . . . . . . . . . . . . . . 122
David Evans

18 Agile Success Through Test Automation: An eXtreme Approach . . . . . . . . . 132
Jon Tilt

19 Talking, Saying, and Listening: Communication in Agile Teams . . . . . . . . . 139
Isabel Evans

20 Very-Small-Scale Agile Development and Testing of a Wiki . . . . . . . . . . . . 151
Dass Chana

21 Agile Special Tactics: SOA Projects . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Greg Hodgkinson

22 The Agile Test-Driven Methodology Experiment . . . . . . . . . . . . . . . . . . 180
Lucjan Stapp and Joanna Nowakowska

23 When Is a Scrum Not a Scrum? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
Dr Peter May


24 Analysis of the Case Studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
24.1 Introduction 193
24.2 Agile Development and Testing 194
24.3 Agile Process and Project Management 200
24.4 Agile Requirements Management 207
24.5 Agile Communication 210
24.6 Agile Meetings 212
24.7 Agile Automation 216
24.8 Summary 222

25 My Agile Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
25.1 Introduction 224
25.2 Foundation Agile Best Practices 225
25.3 Agile Best Practices for Small-Sized Projects 230
25.4 Agile Best Practices for Medium-Sized Projects 232
25.5 Agile Best Practices for Large-Sized Projects 238
25.6 Agile Best Practices for Offsite and Offshore Projects 248
25.7 Summary 250

26 The Roll-out and Adoption of My Agile Process . . . . . . . . . . . . . . . . . . . 251
26.1 Introduction 251
26.2 Roll-out and Adoption 252
26.3 Maintenance of Your Agile Process 255
26.4 Summary 256

Appendix A. The Principles of Rapid Application Development . . . . . . . . . . . 259
Appendix B. The Rules and Practices of Extreme Programming . . . . . . . . . . . 263
Appendix C. The Principles of the Dynamic Systems Development Method . . . . 270
Appendix D. The Practices of Scrum . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
Appendix E. Agile Test Script Template . . . . . . . . . . . . . . . . . . . . . . . . . 284
Appendix F. Agile Test Result Record Form Template . . . . . . . . . . . . . . . . . 292
Appendix G. Agile Test Summary Report Template . . . . . . . . . . . . . . . . . . 300
Appendix H. My Agile Process Checklist . . . . . . . . . . . . . . . . . . . . . . . . . 305

References 309
Index 313
Bob Bartlett, CIO, SQS

It is fascinating to see that so many members of our global software development and
implementation community are at last agreeing violently on the principles behind
agile. It is gratifying to see developers and testers working side-by-side aiming for
the same goals and supporting each other. If you didn't know what agile was but
could see productive, disciplined, and self-organizing teams engaged in developing
working software prioritized by business value, you would know that whatever they
are doing must be right. Testers in particular have benefited from greater pride in
their work as they are accepted as equal partners in the software development team.
Agile development and testing is a thirty-year-old overnight success; in fact, most
of the younger developers and testers who so enthusiastically promote this new way
of developing software had almost certainly not been born when the likes of Barry
Boehm and James Martin first began to develop the Rapid Application Development
(RAD) method!
Agile certainly seems to be being promoted as the latest silver bullet for all
software development problems; the topic is included at pretty much any software
event, user group, or standards group you attend, and much of the IT literature is
full of agile references. For example, I recently hosted a conference on Software and
Systems Quality and invited as a keynote speaker a senior development manager from
IBM, who spoke of the corporate-wide initiative – spearheaded by the Chairman –
to implement agile methods and processes groupwide. He reported (with good
evidence) that product teams have adopted an agile development method in order to
be responsive to customer needs and deliver software on time and to budget, with
much higher quality – measured by incredibly low rates of postrelease faults.
This is not to say that agile doesn't have its detractors; many traditional IT
development practitioners will tell you agile only works for small, simple, and
well-bounded software development projects, where the customer and development team
are co-located and staffed by experienced and capable practitioners. Why wouldn't
a project succeed under such circumstances? So what happens if the customer or
some of the team are offsite (or even offshore)? What happens if the application is
large, complex, and carries significant technological risk? What if you can't afford


to staff your development project with highly experienced, capable, motivated, and
very expensive agile experts?
These are certainly some of the major challenges that agile must meet if it is
going to be widely accepted, adopted, and used successfully. But in an IT world
where there is no universal one-size-fits-all software development process, how can
agile be successfully applied to small, medium, large, and offsite/offshore projects?
How can agile be used to address complex, difficult, or special-needs IT projects?
How can IT practitioners of varying experience and ability adopt and use agile best
practices effectively?
There is certainly a risk that such a megatrend as agile becomes overhyped and
overexploited commercially. However, the enthusiasm to share experiences and the
availability of free and open tools and information is building the community belief
and commitment. I have observed many times how intuitively people adopt the agile
principles and practices – particularly testers, who embrace their new-found ability
to contribute from the outset and "design in" quality on projects. The enthusiasm is
contagious and the results of agile teams as seen in systems and software products
show what can be produced when there is a dynamic and flexible approach to
achieving well-understood goals relentlessly prioritized by business value.
This book is the product of a number of people who felt confident and committed
enough to document their experiences in the hopes that others would share their
positive results and success. Each case study tells a different success story and at
first you may feel overwhelmed with good ideas and ways to develop software. Just
bringing these stories together makes for a worthy and valuable book. I am pleased
to see that the whole story is told from many perspectives, not just the testing side.
I have known John Watkins for about ten years now. During that time, I have
seen and listened to him evangelize about testing and have supported events he has
organized, such as the Rational Industry Testing Forum and the Rational Testing
User Group – both of which I have spoken at. His passion for effective and professional
testing has been constant, as has his commitment to the industry.
John has spoken several times at Software Quality Systems (SQS) events that I
have organized, as well as at numerous industry-wide events. John was an invited
keynote and session speaker at the Scandinavian Ohjelmistotestaus testing
conferences, and spoke at the Software Quality Assurance Management (SQAM) testing
conference in Prague. He has also been active in the British Computer Society (BCS)
(having made Fellow in 1997) and the Object Oriented (OO) and Specialist Group in
Software Testing (SIGiST) special interest groups (where he has spoken many times,
sat on testing discussion panels, chaired "birds of a feather" sessions, and so forth).
He helped me tremendously in setting up and running the Intellect Testing Group,
where I was grateful to have him on the management committee and to have his
participation by writing and presenting on test process.
John has done a tremendous job to elicit the contributions in this book, but he
provides an even greater service by finding the common threads and practices and
explaining why twenty-three different people shared in success. I am sure John feels
proud that so many people can share in the creation of this book and the contribution
to the "My Agile" process he describes.

I would very much like to thank the following people for their advice, assistance, and
encouragement in the writing of this book:

Scott Ambler, Christine Mitchell-Brown, David Burgin, Dawn Davidsen, Abby
Davies, Dorothy Graham, Andrew Griffiths, Dr Jon Hall, Karen Harrison,
Elisabeth Hendrickson, Ivar Jacobson, Anthony J Kesterton, Nick Luft, Frank
Malone, Simon Mills, Simon Norrington, Jean-Paul Quenet, Andrew Roach, Manish
Sharma, Jamie Smith, Ian Spence, Julie Watkins, and Nigel Williams.

I would also like to thank the following people for their invaluable contribution in
providing the agile case studies:

Stephen K. Allott, Managing Director of ElectroMind
Colin Cassidy, Software Architect for Prolifics Ltd
Dass Chana, Computer Science Student
Nick Denning, Chief Executive Officer of Diegesis
David Evans, Director of Methodology at SQS Ltd
Isabel Evans, Principal Consultant at the Testing Solutions Group Ltd
Tom Gilb, independent testing practitioner and author
Greg Hodgkinson, Process Practice Manager, Prolifics Ltd
Trond Johansen, Head of R & D Norway, Confirmit AS
Peter Kingston, Consulting Test Manager
Howard Knowles, Managing Director of Improvix Ltd
Dr Peter May, Technology Consultant, Deloitte
Michael G. Norman, Chief Executive Officer of Scapa Technologies Ltd
Joanna Nowakowska, Rodan Systems S.A.
Martin Phillips, Test Lead, IBM Software Group
Nick Sewell, European Managing Director of Ivar Jacobson Consulting Ltd
Professor Lucjan Stapp, Warsaw University of Technology
Graham Thomas, independent testing consultant
Geoff Thompson, Services Director for Experimentus
Jon Tilt, Chief Test Architect ESB Products, IBM Software Group
Richard Warden, Director of Software Futures Ltd
James Wilson, Chief Executive Officer of Trinem

I would also like to give particular thanks to Bob Bartlett for his excellent foreword;
he is a well-recognized testing industry figure, someone I have great personal respect
for, and someone I have had the pleasure of working with on numerous occasions,
and I am very grateful for his assistance in providing the foreword to this book.
And last, but certainly not least, I would like to express my appreciation for the
insight and experience of my technical reviewer Duncan Brigginshaw, and for the
constant "encouragement" and guidance from my editor Heather Bergman and her
assistant David Jou of Cambridge University Press.
1 Introduction
If you try to make the software foolproof,
they will just invent a better fool!
Dorothy Graham

1.1 Why Agile?
In today's highly competitive IT business, companies experience massive pressures
to be as effective and efficient as possible in developing and delivering successful
software solutions. If you don't find strategies to reduce the cost of software
development, your competitors will, allowing them to undercut your prices, to offer to
develop and deliver products faster, and ultimately to steal business from you.
Often in the past, testing was an afterthought; now it is increasingly seen as the
essential activity in software development and delivery. However, poor or ineffective
testing can be just as bad as no testing and may cost significant time, effort, and
money, but ultimately fail to improve software quality, with the result that your
customers are the ones who find and report the defects in your software!
If testing is the right thing to do, how can you ensure that you are doing testing the right way?
If you ask managers involved in producing software whether they follow industry
best practices in their development and testing activities, almost all of them will
confidently assure you that they do. The reality is often far less clear; even where a
large formal process documenting best development and testing practice has been
introduced into an organization, it is very likely that different members of the team
will apply their own testing techniques, employ a variety of different documentation
(such as their own copies of test plans and test scripts), and use different approaches
for assessing and reporting testing progress on different projects. Even the language
is likely to be different, with staff using a variety of terms for the same thing, as well
as using the same terms for different things!
Just how much time, effort, and money does this testing chaos cost your
organization? Can you estimate just how much risk a project carries in terms of late
delivery, with poor testing resulting in the release of poor-quality software? To put
this in perspective, the U.S. National Institute of Standards and Technology recently
reported that, for every $1 million spent on software implementations, businesses
typically incur more than $210,000 (or between a fifth and a quarter of the overall
budget) of additional costs caused by problems associated with the impact of
postimplementation faults [1].
The most common reason that companies put up with this situation is that they
take a short-term view of the projects they run; it is much better to just get on with
it and "make progress" than to take a more enlightened, but longer-term, view to
actually address and ¬x the problems.
Many organizations are now adopting some form of formal test process as the
solution to these problems. In this context, a process provides a means of
documenting and delivering industry best practice in software development and
testing to all of the staff in the organization. The process defines who should do
what and when,
with standard roles and responsibilities for project staff, and guidance on the correct
way of completing their tasks. The process also provides standard reusable templates
for things like test plans, test scripts, and testing summary reports and may even
address issues of process improvement [2].
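Appendices E through G contain the book's full templates; as a flavour of what such a reusable artifact standardizes, here is a minimal, hypothetical sketch of a test script record. The field names are invented for illustration and are not taken from the book's templates.

```python
from dataclasses import dataclass, field

@dataclass
class TestScript:
    """Minimal sketch of the kind of fields a reusable test script
    template standardizes (hypothetical field names)."""
    script_id: str
    title: str
    preconditions: list = field(default_factory=list)
    steps: list = field(default_factory=list)  # (action, expected result) pairs
    priority: str = "medium"

# A filled-in instance: everyone on the project records the same fields,
# so scripts are comparable across testers and projects.
script = TestScript(
    script_id="TS-001",
    title="Login with valid credentials",
    preconditions=["test user account exists"],
    steps=[("enter valid user name and password", "home page is displayed")],
)
print(script.script_id, len(script.steps))
```

The point is not the data structure itself but the standardization: with a shared template, progress reporting and reuse across projects become possible because every script carries the same information.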
Although there have been numerous attempts to produce an "industry standard"
software testing process (e.g., the Software Process Engineering Metamodel [3]),
many practitioners and organizations express concerns about the complexity of such
processes. Typical objections include:

"The process is too big" – there is just too much information involved and it takes
too long to roll out, adopt, and maintain.
"That's not the way we do things here" – every organization is different and there
is no one-size-fits-all process.
"The process is too prescriptive" – a formal process stifles the creativity and
intuition of bright and imaginative developers and testers.
"The process is too expensive" – if we are trying to reduce the cost of software
development, why would we spend lots of money on somebody else's best practice?

Interestingly, even where individuals and organizations say they have no process,
this is unlikely to be true – testers may invent it on the fly each morning when they
start work, but each tester will follow some consistent approach to how he or she
performs their testing. It is possible for this "approach" to be successful if you are
one of those talented supertesters or you work in an organization that only hires
"miracle QA" staff. For the rest of us, we need to rely on documented best practices
to provide guidance on the who, the what, and the when of testing, and to provide
reusable templates for the things we create, use, or deliver as part of our testing
activities.
So, here is the challenge: how is it possible to produce good-quality software, on
time and to budget, without forcing a large, unwieldy, and complex process on the
developers and testers, but still providing them with sufficient guidance and best
practices to enable them to be effective and efficient at their jobs? To restate this
question, what is the minimum subset of industry best practice that can be used
while still delivering quality software?

This book provides practical guidance to answer this question by means of real-
world case studies, and will help you to select, adopt, and use a personally customized
set of agile best practices that will enable you and your colleagues to deliver quality
testing in as effective and efficient a manner as possible.

1.2 Suggestions on How to Read This Book
This book is divided into three main sections (plus the appendices), each of which
is closely linked, but each of which can be read and applied separately.

Part 1 of the book provides a review of both the traditional or “classic” view of
software testing process and examples of agile approaches:

If you are interested in reviewing the early history of software development
and testing process, Chapter 2 (Old-School Development and Testing) begins by
reviewing the traditional or “classic” view of process. This chapter explores the
good and the bad aspects of classic test process, and provides a useful baseline
for the rest of the book to build on.
If you are interested in understanding the development of agile approaches to
software development and testing, Chapter 3 (Agile Development and Testing)
provides an overview of the principal agile approaches that have been used to
develop software, with particular emphasis on the testing aspects of the method
described.
Although Chapter 3 provides a high-level overview of the principal agile
approaches, if you require a deeper understanding of these methods then refer
to Appendices A through D. You may find this to be of particular benefit in
preparation for reading the agile case studies in Part 2 of the book.

Part 2 of the book contains twenty case studies, which provide real-world examples
of how different organizations and individual practitioners have worked in an agile
development and testing framework or have implemented their own agile testing
approaches. Each chapter reviews the specific testing requirements faced by the
testers, provides a summary of the agile solution they adopted, describes the overall
success of the approach, and provides a discussion of which specific aspects of the
approach worked well, and which aspects might be improved or omitted in future
testing projects.

Part 3 of this book provides an analysis of the agile case studies presented in
Part 2 and draws upon the material from Part 1 to make a series of proposals
about what components might be used to generate your own successful agile testing
process.
If you would like some guidance on agile best practices from a practitioner
perspective, Chapter 24 (Analysis of the Case Studies) examines in detail the
agile case studies presented in Part 2, identifying particularly successful agile
techniques, common themes (such as successful reusable templates), as well as
those testing approaches that were not successful and which may need to be
treated with caution.
If you are interested in guidance on how to set up your own agile development
and testing process, Chapter 25 (My Agile Process) draws on the information
provided in the case studies and their analysis to make a series of proposals for
how you might set up and run a practical, effective, and efficient agile testing
process.
If you would like some guidance on how to introduce your agile testing method
into your own organization, Chapter 26 (The Roll-out and Adoption of My Agile
Process) provides a series of tried and tested best practices describing how you
can roll out the process and drive its successful use and adoption.

The Appendices
If you would like to find more detail on the agile methods described briefly in
Chapter 3, Appendices A through D provide further description of each of the key
agile approaches covered in Chapter 3, with particular emphasis on the software
quality aspects of each approach. You may find value in reading these appendices in
preparation for reading the case studies presented in Part 2 of this book.
Appendices E through G provide a set of reusable testing templates that can
be used as a resource to be reused in your own agile process (these templates are
also available in electronic format from the Cambridge University Press Web site at
http://www.cup.agiletemplates.com), including

an agile test script template,
an agile test result record form template, and
an agile test summary report template.

Appendix H contains a checklist of agile best practices that shows which practices
are particularly appropriate for the different styles and sizes of agile project described
in Chapter 25. This checklist can be used as a summary of the practices and as an
aide memoire to assist you in populating your own agile process.
References cited in the text are fully expanded in the References section at the
back of the book.

Fact of the matter is, there is no hip world, there is no straight world. There's a
world, you see, which has people in it who believe in a variety of different
things. Everybody believes in something and everybody, by virtue of the fact
that they believe in something, uses that something to support their own
existence.
Frank Zappa

This section of the book provides a review of both the traditional or “classic” view of
software testing process and agile approaches.
The chapters in this section are:
Chapter 2 – Old-School Development and Testing, which begins by reviewing the
traditional or "classic" view of software testing process. This chapter explores
the good and bad aspects of classic test process, and provides a useful baseline
for the rest of the book to build on.
Chapter 3 – Agile Development and Testing, which provides a review of the
most prominent agile approaches that have been used to develop software, with
particular emphasis on the testing aspects of the method described. If additional
information on a particular approach is needed, more complete details of each
method are provided in Appendices A to D.

2 Old-School Development and Testing
Testing is never completed, it's simply abandoned!
Simon Mills

2.1 Introduction
This chapter discusses what software development and testing process is, reviews
the historical development of process, and concludes by providing a review of the
elements of a traditional or "classic" software testing process, providing a useful
baseline for the rest of the book to build on.

2.2 So, What Is Process?
A process seeks to identify and reuse common elements of some particular approach
to achieving a task, and to apply those common elements to other, related tasks.
Without these common reusable elements, a process will struggle to provide an
effective and efficient means of achieving those tasks, and find it difficult to achieve
acceptance and use by other practitioners working in that field.
Test process is no different; we have many different tasks that need to be achieved
to deliver effective and efficient testing, and at a variety of different levels of testing
from component/unit/developer testing, through integration/module testing, on into
systems testing, and through to acceptance testing [4].
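By way of illustration, at the lowest of these levels a component/unit test is often just a handful of checks against a single function. The following sketch is purely hypothetical – the function under test and the checks are invented, not drawn from the book:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical component under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    """Component/unit-level checks: the lowest testing level named above."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)

if __name__ == "__main__":
    unittest.main(argv=["apply-discount-tests"], exit=False)
```

The same function would later be exercised again, indirectly, at the integration, system, and acceptance levels – each level asking a progressively broader question about the assembled software.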
Even before testing process was "invented", good testers have done things in a
particular way to achieve good results – such as the best way to find the most defects,
to complete testing more quickly or more cheaply, to save time by reusing things
they had produced in earlier testing projects (such as a template for a test plan or a
test script), or to ensure consistent nomenclature (such as common terms for testing
phases).
Such enlightened practitioners were even known to share such best practices
with their colleagues, passing on or swapping reusable templates, publishing papers
on testing techniques, or mentoring other staff on test management approaches, for
example.
As the IT industry matured, with customers demanding increasingly complex
systems, of ever higher quality, in shorter timescales and with lower cost, the
resulting commercial pressures forced those organizations developing software to
seek methods to ensure their software development was as effective and efficient as
possible. If they did not find the means to deliver software faster, cheaper, and with
better quality, their competitors would.

Figure 2.1 The Waterfall Phases and Risk Profile (dotted line).
Successive waves of new technologies, such as procedural programming, fourth-
generation languages, and object orientation, all promised to ensure reductions in
the occurrence of defects, to accelerate development times, and to reduce the cost
of development. Interestingly, it was observed that it was still possible to write poor-
quality software that failed to achieve its purpose and performed poorly or included
defects, no matter what technologies were used!
As with so many instances of a new technology failing to solve a particular
problem, the issue actually turns out to be a people problem. Human beings need
guidance, they need to build upon the knowledge and experiences of others, they need
to understand what works and what doesn't work, and they need to avoid wasting time
reinventing things that other practitioners have already successfully produced and
used. Project chaos, where each project and practitioner uses different techniques,
employs different terminology, or uses (or worse, reinvents from scratch) different
documentation, was increasingly considered to be unacceptable.
The following sections review a number of the early approaches to software
development and testing that sought to avoid such project chaos.

2.3 Waterfall
One of the earliest approaches to software development is the waterfall approach.
A paper published by Winston W. Royce in 1970 [5] described a sequential
software development model containing a number of phases, each of which must be
completed before the next begins. Figure 2.1 shows the classic interpretation of the
phases in a waterfall project.

From a quality perspective, the waterfall approach has been often criticized
because testing begins late in the project; as a consequence, a high degree of project
risk (that is, failure of the software to meet customer expectations, to be delivered
with acceptable levels of defects, or to perform adequately) is retained until late into
the project. With the resultant reworking and retesting caused by the late detection
of defects, waterfall projects were also likely to incur additional effort, miss their
delivery dates, and exceed their budgets.
The waterfall approach has also been criticized for its lack of responsiveness to
customer requests for changes to the system being developed. Historically, it was
typical for all of the requirements to be captured at the start of the project and to
be set in stone throughout the rest of the development. A frequent result of this
approach was that by the time the software had been delivered (sometimes months
or even years later), it no longer matched the needs of the customer, which had
almost certainly changed by then.
Because of increasing dissatisfaction with the rigid structure of waterfall projects,
other solutions were investigated that would be more flexible in terms of addressing
changing requirements.

2.4 Spiral
Many attempts were made to address the shortcomings of the waterfall approach,
such as the spiral model of software development defined by Barry Boehm in 1988 [6].
Intended for use in large, complex, and costly projects, and intended to address the
issues of meeting customer requirements, this incremental development process
relied heavily on the development and testing of a series of software prototypes of the
final system. The typical steps involved in a spiral model–driven project are as follows:

1. In discussion with the customer, the requirements for the system are defined and
documented in as much detail as possible.
2. An initial design is created based on the requirements.
3. A sequence of increasingly complete prototypes is constructed from the design
in order to
test the strengths and weaknesses of the prototypes, and to highlight any risks;
assist in refining the requirements by obtaining customer feedback; and
assist in refining the planning and design.
4. The risks identified by testing the prototypes are reviewed with the customer,
who can make a decision whether to halt or continue the project.
5. Steps 2 through 4 are repeated until the customer is satisfied that the refined
prototype reflects the functionality of the desired system, and the final system is
then developed on this basis.
6. The completed system is thoroughly tested (including formal acceptance testing)
and delivered to the customer.
Agile Testing: How to Succeed in an Extreme Testing Environment
2.2 Graphical Overview of the Spiral Model.

7. Where appropriate, ongoing maintenance and test are performed to prevent
potential failures and to maximize system availability.

Figure 2.2 provides a graphical overview of a typical interpretation of the spiral model.
Although considered to be an improvement over the waterfall approach in terms of
delivering systems that more closely match the customer's requirements, and for
delivering higher-quality software (achieved in large part by the spiral model, which
encourages early and continued testing of the prototypes), issues existed regarding
the difficulty of estimating effort, timescales, and cost of delivery; the
nondeterministic nature of the cycle of prototype development and testing meant that
it was difficult to bound the duration and effort involved in delivering the final
system.

2.5 Iterative
Iterative models of software development evolved to address issues raised by both
waterfall and spiral approaches, with the goal of breaking large monolithic develop-
ment projects into smaller, more easily managed iterations. Each iteration would
produce a tangible deliverable (typically some executable element of the system
under development).
The Objectory method [7] provides a good example of such an approach. In
1987, while assisting telecommunications company Ericsson AB with its software
development efforts, and concerned with the shortcomings of earlier methods, Ivar
Jacobson brought together a number of the development concepts he had been think-
ing about, such as use cases [8], object-oriented design [9], and iterative development,
to create a new approach to developing successful object-oriented applications.
The Objectory method supported innovative techniques for requirements anal-
ysis, visual modeling of the domain, and an iterative approach to managing the
execution of the project. In essence, Objectory would break down a project that
might have been run in a large and inflexible waterfall manner into smaller, more
easily understood, implemented, and tested iterations.
Such an approach brought a number of important benefits for software quality:

Testing could begin much earlier in the project (from the first iteration), enabling
defects to be identified and fixed in a timely manner, with the result that
timescales were not impacted, and that the effort and cost of fixing and retesting
defects were kept low.1
Testing would continue throughout the project (within each iteration), ensuring
that new defects were found and fixed in a timely manner, that newly added
system functionality did not adversely affect the existing software quality, and
verifying that defects found in earlier iterations did not reappear in the most
recent iteration.
The valuable visual metaphor provided by use cases and visual modeling enabled
the developers and the customer to more easily understand the intention of the
system functionality – in effect the customer, analyst, designer, and tester share
a common language and understanding of the system.
Testers discovered that the scenarios described by the use cases could very easily
be used to design and implement effective test cases2 – the actors (people or other
systems) identified in the use cases and their interactions with the system under
development mapped easily onto the steps and verifications needed to develop the
test scripts.
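As an illustration of the last point, a use-case scenario can be mapped almost mechanically onto a test script: the actor's actions become test steps, and the expected system responses become verifications. The scenario, field names, and helper function below are hypothetical, intended only to show the shape of that mapping.

```python
# Hypothetical illustration of deriving a test script from a use-case
# scenario: each actor interaction becomes a step/verification pair.

use_case = {
    "name": "Withdraw Cash",
    "actor": "Account Holder",
    "interactions": [
        ("insert card", "card is accepted"),
        ("enter PIN", "PIN is validated"),
        ("request amount", "cash is dispensed"),
    ],
}

def to_test_script(use_case):
    """Derive a test script: one step and one verification per interaction."""
    return [
        {"step": f"{use_case['actor']}: {action}", "verify": expected}
        for action, expected in use_case["interactions"]
    ]

for line in to_test_script(use_case):
    print(line["step"], "->", line["verify"])
```

However informal the notation, the point stands: the analyst's scenario and the tester's script share the same structure, which is why the two roles could share a common language.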

The Objectory process was organized around three phases:

1. The requirements phase – which involves the construction of three models that
describe in a simple, natural-language manner what the system should do:
The use case model – which documents the interactions between the actors
and the system, which are typically captured using use case diagrams and
natural-language descriptions.
The domain model – which documents the entities within the system, their
properties, and their relationships.

1 It is generally considered that for each phase of the project during which you fail to find a bug, the cost of
fixing it increases by a factor of 10 [4]. In practice, my experience is that this is a conservative estimate.
2 A test case represents the design for a test. A test case is implemented by means of a test script.

The user interface descriptions – which document the various interfaces
between the actors and the system.
2. The analysis phase – which involves the construction of two models that are a
refinement of the information captured during the previous phase:
The analysis model – which is a refinement of the domain model produced
in the previous phase and which documents behavioral information, control
objects (linked to use cases), and entity and interface objects.
The subsystem descriptions – which partition the system around closely
coupled and similarly behaving objects.
3. The construction phase – which refines the models produced in the analysis
phase. During this phase the following models are generated:
Block models – which represent the functional modules of the system.
Block interfaces – which specify the public operations performed by the blocks.
Block specifications – which are optional descriptions of block behavior using
finite state machines [10].

In its time, Objectory was considered to be a very successful software development
method, and many of its key principles, such as use case analysis and design, continue
to be widely used today.
In 1991, after having worked closely with the Objectory process for several years,
Ericsson AB purchased a major stake in Ivar Jacobson's company (Objectory Systems),
changing its name to Objectory AB.
In 1995, Objectory AB merged with the Rational Software Corporation, and shortly
thereafter, the Rational Objectory Process version 4.0 was published, which
incorporated elements of Grady Booch's object-oriented analysis and design method
[11] and Jim Rumbaugh's object modeling technique (OMT [12]). Much of the
procedural side of the Objectory method (such as use case modeling) was incorporated
into the Rational Objectory Process, with the addition of many notational and
diagramming elements from the OMT and Booch methods.
Ultimately, through an incremental process of extension and enhancement, the
Rational Objectory Process evolved into the Rational Unified Process (RUP) version
5.0, which incorporated modeling and design extensions addressing business
engineering, plus best practice guidance on configuration and change management,
data engineering, and user interface design.
The release of version 7 of RUP provides extensive best-practice guidance on
traditional and agile software development and testing methods, such as the ability
to optionally select an "extreme programming" style of development (see Appendix B),
as well as the means of customizing the RUP framework to support the user's own
specific agile approaches.
The RUP is covered in further detail in Chapter 3, along with a number of other
agile approaches.
2.3 The V-Model; Waterfall Phases and Associated Testing Phases.

2.6 Traditional Elements of Test Process
The final section of this chapter reviews the typical elements of a traditional testing
process, focusing in detail on the relationship between the classic model of testing
and the development approaches described in the earlier sections.
Many traditional views of software testing are based on the V-model [4], which
itself is largely based on the waterfall view of software development. The V-model
approach to organizing testing has also been applied to spiral and other models of
software development, producing a number of variants, such as the W-model [13].
Figure 2.3 shows a typical interpretation of the V-model.
The V-model provides a powerful means of assisting testing practitioners to
move testing earlier into the software development life cycle, and to encourage more
frequent testing.
A major tenet of the V-model is that testing can begin as early as the requirements
acquisition phase – with the test manager reviewing the requirements to determine
the resources needed for testing, and with the test analyst or designer reviewing the
requirements to determine testability and identify errors in the requirements (such
as omissions, contradictions, duplications, and items that need further clarification).
As a generalization, we can identify four basic test levels or phases associated
with the V-model approach to testing:3

1. Unit test (also known as developer or component testing) is typically conducted
by the developer to ensure that their code meets its requirements, and is the
3 However, in practice we might also include other variations of the testing phases, such as systems
integration testing (often employed where the application under test has a requirement to interact with
a number of other independent systems), user acceptance testing, and operations acceptance testing.

lowest level of testing normally identified. Historically, unit testing was a manual
process (often involving a test harness or some other means of simulating other
planned software components that would communicate with the component
under test, but which had not been coded at that point). Today, unit testing is
usually conducted using one of the many developer testing tools that are available
(see, e.g., [14]).
2. Integration test (also known as module testing) is used to demonstrate that the
modules that make up the application under test interface and interact together
correctly. As a general rule (but clearly dependent on the size and importance
of the project), integration testing is conducted by the developers under the
supervision of the project leader. Where a more formal approach to development
and testing is being followed, possibly on a large or commercially important
project, an independent tester or test team could be involved in conducting
integration testing.
3. System test is employed to establish confidence that the (now completed) application
under test will pass its acceptance test. During system testing, the functional
and structural stability of the system is examined, as well as the nonfunctional
aspects of the application under test, such as performance and reliability. Typi-
cally, although again dependent on the size and importance of the project, system
testing is conducted by an independent testing team. Often, a customer or user
representative will be invited to witness the system test.
4. Acceptance test (also known as user acceptance test or UAT) is used to ensure
that the application under test meets its business requirements, and to provide
confidence that the system works correctly and is usable before being formally
handed over to the end users. Typically, UAT will be conducted by nominated
user representatives (sometimes with the assistance of an independent testing
representative or team).
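To make the lowest of these four levels concrete, the sketch below shows the kind of unit test a developer might write, using Python's standard unittest module as a stand-in for the developer testing tools mentioned above; the function under test and its discount rule are invented for illustration.

```python
# A minimal unit test at the lowest test level. The function under test
# (apply_discount) and its rules are hypothetical examples.
import unittest

def apply_discount(price, percent):
    """Return the price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percentage_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main(argv=["unit"], exit=False)
```

Tests such as these run in seconds and can be executed on every build, which is what allows defects to be caught at the cheapest point in the life cycle.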

The role of regression testing is also worth noting; its purpose is to provide
confidence that the application under test still functions correctly following a new
build or new release of the software (perhaps caused by requests from the customer
for modifications or enhancements to the software, or a change to the environment
in which the software runs, such as a new release of the underlying operating
system).
Within each of the test levels or phases, we can identify a number of common
elements that they all share. In each testing phase the following must be addressed:

Overview of test phase – providing information on the purpose of the testing
phase, its goals, and its characteristics. Information should also be provided
relating this testing phase to the appropriate development phase within the
V-model and to the adjacent testing phases.
Test phase approach and test data requirements – describing the overall approach
to the testing used in this phase (such as white box or black box testing; see [4]),
and addressing the format, content, and origin of the data required to successfully
test the software.
Test planning and resources – providing the plan used to drive the testing conducted
during this phase, its timescales, milestones, dependencies, risks, and deliverables,
as well as information on the resources needed to complete the testing successfully
(including both the staff and the physical resources, such as computer equipment).
Roles and responsibilities – specifying the staff required to fulfill the various
roles within this phase (such as test team leader, test analyst, or tester) and their
specific responsibilities (e.g., "the test analyst shall be responsible for designing
and implementing the test scripts required to test the application under test . . .").
Issues of reporting, management, and liaison should also be specified.
Inputs and outputs – specifying those artifacts (the items created and/or used
within phases) required as inputs to this phase to ensure successful testing of
the application under test (such as the test plan, the software itself, and the
requirements documentation), artifacts created during this phase (such as test
designs, test scripts, and test result record forms), and artifacts output from this
phase (such as the tested software, completed test result record forms, and the
test summary report).
Specific test techniques for this test phase – describing any specific testing
techniques (such as boundary analysis, equivalence partitioning, or state transition
analysis; see [4]) or tools to be used within this testing phase (such as test
execution tools; see [15]).
Reusable assets – providing test practitioners with access to standard reusable
assets (such as test plan, test script, and test summary report templates) that
are shared across the project (or even across an organization). These assets can
ensure that significant project time, effort, and cost are saved by preventing
different practitioners on different projects from reinventing different versions
of the same artifacts again and again. Standardization also makes it easier for
staff to move between projects without having to relearn a new and different set
of project documentation.

The preceding information is traditionally recorded in one or more testing documents
(usually within the test plan document and the test specification document).
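Two of the test design techniques named above, equivalence partitioning and boundary value analysis, can be illustrated with a small worked example; the validation rule, the chosen partitions, and the boundary values below are all hypothetical.

```python
# Sketch of equivalence partitioning and boundary value analysis applied
# to a hypothetical rule: an exam score of 0-100 is valid, and a score of
# 40 or above is a pass. The validator and boundaries are illustrative only.

def grade(score):
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 40 else "fail"

# Equivalence partitions: one representative value per partition.
partitions = {-10: "invalid", 20: "fail", 70: "pass", 150: "invalid"}

# Boundary values: probe both sides of every partition edge.
boundaries = {-1: "invalid", 0: "fail", 39: "fail", 40: "pass",
              100: "pass", 101: "invalid"}

def run_case(score, expected):
    try:
        actual = grade(score)
    except ValueError:
        actual = "invalid"
    return actual == expected

assert all(run_case(s, e) for s, e in partitions.items())
assert all(run_case(s, e) for s, e in boundaries.items())
print("all partition and boundary cases passed")
```

Each partition contributes a single representative test, while the boundary cases concentrate effort where defects most often hide: at the edges of the valid ranges.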
Although this approach to software testing has been widely adopted and used, as
with the traditional development processes, this classic view of testing process has
its critics:

A key criticism leveled against the approach is that it is underpinned by the
V-model, which is itself based on the frequently criticized waterfall model of
software development. The relevance of the classic view of test process is often
questioned when applied to more contemporary, agile views of software development.

Where the approach is used on small, simple projects with low complexity,
practitioners often complain that there is "too much process," and that the reality is
that "experienced testers don't need so much detailed guidance." In effect, the
process itself is criticized for taking too much time and effort to follow.
Paradoxically, the use of a formal and prescriptive testing process may also be
criticized for preventing testers from finding defects; many a well-hidden defect
has been unearthed by the intuition of a skilled and experienced tester. Testing
places high reliance on practitioners who are able to track down defects using
their creative and lateral thinking skills, but a highly prescriptive testing solution
is often thought to constrain and inhibit such skills.

2.7 Summary
Since the earliest days of programming, when engineers manually set noughts and
ones on huge electromechanical devices, pioneering practitioners have looked for
ways of making the process of developing programs easier, quicker, and more reliable.
Once, software was used in just one or two places around the world by a handful
of people to implement the complex algorithms needed to decode secret military
messages. Now, the world runs on software; it is everywhere – in your home, in your
car, in your mobile phone, sometimes even inside you (think about the complex
calculations made by heart pacemaker devices or the new generation of intelligent
digital hearing aids). Today, a world without software is absolutely unthinkable.
As the technologies for developing software have evolved and increasing num-
bers of software practitioners have found employment4 in more and more companies,
producing ever larger and more complex software systems, the need to find easier,
quicker, and more reliable practices for delivering that software has become abso-
lutely critical.
In the early days, practitioners began to share the successful practices that they
had found, often through trial and error, with their colleagues. As the IT industry
matured, such best practices began to be documented and distributed, resulting in
the appearance of papers and books on waterfall and spiral software development
methods, for example.
While of initial value in managing relatively simple software projects where cus-
tomer requirements were unlikely to change across the duration of the project,
many workers became increasingly dissatisfied with these early methods as continuous
improvements in the capability of programming technologies encouraged
developers to produce systems of ever-increasing size and complexity, and as the
number of software disaster stories began to accumulate.

4 It is an incredibly sobering thought that in the 1940s there were just a few people in the entire world who
could implement a computer program, but that today each year some 85,000 software engineers leave
U.S. universities, with an additional 400,000 Indian graduates joining them, while China's education
system produces some 600,000 developers annually! Inevitably, these numbers can only increase year
after year.

Increasingly, the need to be responsive to changing customer needs, to be able to
quickly develop, test, and deliver useful functionality to the customer in a planned and
managed incremental manner, the need to reduce the cost and effort of development,
and the need to deliver high-quality software led practitioners to challenge the role
of traditional approaches to software development and testing.
In the twenty-first century, software development needs to be agile; to deliver
quality software that meets the customer requirements and that is delivered on time
and within budget.
Because, bottom line, if you can't – your competitors will.
3 Agile Development and Testing
Nanos gigantum humeris insidentes –
We are but dwarfs standing upon the shoulders of giants.
Bernard of Chartres

3.1 Introduction
The long history of software development is too frequently characterized by failure
rather than by success. When you consider that the practice of software development
and testing has spanned two centuries (and arguably three or more centuries1), it
seems incredible that such a high proportion of projects are unsuccessful. One highly
respected industry report, for example, suggested that as many as 76% of all software
projects fail to come in on time, to budget, or to customer satisfaction [2].
Growing dissatisfaction with the failure of traditional heavyweight approaches
caused a number of workers in the field of software development to begin to question
the role of ponderous, inflexible, and frequently ineffective development processes.
From the 1980s onward, new lightweight or agile approaches to developing and testing
software in an effective and efficient manner began to appear, which challenged the
need for cumbersome and ineffectual process. Such approaches frequently focused on
the need for good communication in projects, the need to adopt smaller, more easily
managed iterations, and the need to be responsive to changing customer needs.
This chapter provides a review of the most prominent agile methods that have
been used to develop and test software. Specifically, the agile approaches covered in
this chapter include the following:

Rapid Application Development (RAD),
Extreme Programming (XP),
the Dynamic Systems Development Method (DSDM), and
Scrum.
Each of these agile approaches is discussed at a relatively high level in this
chapter; greater detail is presented in Appendices A through D, respectively.

1 If we include the patterns developed for guiding the actions of weaving machines, for example, or even
the astronomical software druids ran on their early silicon and granite hardware systems.

The chapter concludes with a brief overview of the key features of a number of
other agile methods and approaches, including

the Enterprise Agile Process (previously XBreed),
Ruby on Rails (RoR),
Evolutionary Project Management (Evo),
the Rational Unified Process (RUP),
the Essential Unified Process (EssUP), and
the Software Process Engineering Metamodel (SPEM).

Inevitably, this is a snapshot of the agile methods because it is by necessity a very
fast-moving subject.

3.2 Rapid Application Development
RAD first appeared in the 1980s, when James Martin developed the approach in
response to increasing dissatisfaction with the failure of earlier methods such as the
waterfall model of software development [5].
These earlier approaches were characterized by being highly prescriptive, with
developers following an inflexible series of development phases in which
requirements were gathered early in the process and then set in stone throughout the
rest of the project. Typically, customer involvement was limited to the initial
requirements capture and final acceptance testing, resulting in an unacceptably high
number of delivered systems that did not match the actual customer needs (which had
almost certainly changed since they had been first documented anyway). Cost and time
overruns were the norm for such projects and, since much of the testing was left to
the later phases, the systems were frequently delivered with major quality issues.
Initially inspired by the work of other prominent workers in the field of
development and testing, such as Barry Boehm, Brian Gallagher, and Scott Schults, and
following a gestation period of several years, Martin's thoughts on rapid software
development were finally formalized as RAD in his 1990 book, Rapid Application
Development [16].
The key goals of RAD are

high-quality systems,
fast development and delivery, and
low costs.

RAD seeks to break down the approach taken in monolithic waterfall projects into
smaller iterative steps, increasing the opportunities for the customer to be involved
in the development and to be exposed to earlier prototypes and working increments
of the developing system. In fact, the development of software prototypes is a key
aspect of RAD, providing a means of exploring the customer needs for the system
through informal testing and managing customer expectations of how the final
delivered system should look and perform.

RAD proved to be of benefit over traditional development approaches in a number
of areas:

RAD's iterative approach to developing software meant that testing could begin
earlier in the development process and could continue more frequently throughout
the project. This approach significantly reduced the risk of late identification of
serious defects and the associated cost overruns and late delivery frequently seen
in waterfall projects.
Closer customer involvement in the development process, with early opportunities
to test how well the software met customer needs using the prototypes, meant there
were fewer show-stopping surprises when the system was finally delivered.
Accepting that requirements would no longer be set in stone at the very begin-
ning of the project and that changes could be managed effectively through-
out the development using requirements planning workshops and joint application
design workshops helped ensure the delivered software would more closely
match customer expectations.
Tighter project management of development priorities, costs, and timescales
through the use of techniques such as time boxing [17] kept project progress on
track and controlled the extent to which the development could get out of control.

On the down side, for many practitioners who followed more traditional software
development models, RAD gained the reputation of being an excuse for "hacking"
or "opportunistic coding," as it has also been called [18]. It is possible that, as one
of the earliest agile methods, RAD may have been perceived as having less rigor
than the more established approaches. Also, RAD's focus on developing a series of
rapid prototypes, many of which inevitably would not be carried forward into the
final deliverable, was often blamed for wasting time and effort and jeopardizing
progress. Finally, the process of prototyping was often poorly implemented due to
a weak understanding of what prototyping actually involved [19], leading to a poor
perception of the technique.
Despite some suspicion from the defenders of the more traditional development
methods, RAD arguably set the scene for the development of a number of the later
agile methods.
Appendix A contains further details of the rules and practices of RAD.

3.3 Extreme Programming
In the early 1990s, Kent Beck, a practitioner in the field of software development, had
begun to consider how the process of developing software could be made simpler
and more efficient. In March 1996 Kent embarked upon a project with a major
automotive customer that would employ a number of the software development and
testing concepts that he had been considering; the result was the genesis of Extreme
Programming (XP [20]).

XP emphasizes customer satisfaction as one of its key drivers; the methodology
aims to deliver the software the customer needs when they need it.
XP seeks to improve project success in four important areas:

1. Communication – XP encourages the team developing and testing the software
to maintain effective and frequent communication both with the other members
of the team and with the customer.
2. Simplicity – XP promotes the ideal of simple, clear, and understandable design,
translated into similarly clear and understandable code. Even though the customer
may not get to see the source code, XP encourages programmers to develop elegant
and effective software solutions of which they can be proud.
3. Feedback – throughout the development project, the programmers obtain feedback
on their work from other programmers and, crucially, the customer, who is exposed
to early deliverables as soon as possible in the project.
4. Courage – programmers are empowered to respond positively and proactively to
changing customer requirements and changes in the development environment
(such as availability of new technologies).

By embracing these four principles, development projects are more agile and
responsive; better communication across the team means that the developing system
is integrated more easily and with fewer errors, plus the improved communication
with the customer combined with good feedback ensures that the delivered software
more closely matches the users' needs.
From a software quality perspective, XP emphasizes the importance of effective
and efficient testing. An important goal of testing in XP is that test development
should begin even before any code has been written, continuing throughout the
coding process, and following code completion. As defects are found, new tests are
developed and are added to the growing test suite to prevent the same bug from
reappearing later and getting through to the delivered software.
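This test-first cycle can be shown in miniature. In the sketch below (again using Python's unittest module; the function and its defect history are invented for illustration), the first test is notionally written before the code exists, and a further test is added to the growing suite once a defect has been found and fixed, so that the same bug cannot silently reappear.

```python
# Test-first in miniature: the first test was (notionally) written before
# basket_total existed; the second was added after an empty-basket defect
# was found and fixed. The example function is hypothetical.
import unittest

def basket_total(prices):
    """Sum a list of item prices; an empty basket costs nothing."""
    return round(sum(prices), 2)

class BasketTotalTest(unittest.TestCase):
    # Written first, before the implementation.
    def test_totals_item_prices(self):
        self.assertEqual(basket_total([1.50, 2.25]), 3.75)

    # Added to the suite after an empty-basket defect was fixed,
    # preventing the same bug from getting through to delivery again.
    def test_empty_basket_regression(self):
        self.assertEqual(basket_total([]), 0)

if __name__ == "__main__":
    unittest.main(argv=["xp"], exit=False)
```

The suite grows with the code, so every later change is automatically checked against every defect the team has ever found.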
Appendix B contains further details of the rules and practices of XP.

3.4 The Dynamic Systems Development Method
Developed in the United Kingdom in the 1990s by a consortium of organizations
and agile practitioners, the Dynamic Systems Development Method (DSDM) is an
agile framework that builds upon the principles of RAD. DSDM is based on an
iterative development model, which seeks to be responsive to changing customer
requirements, and which aims to implement the business requirements on time, to
budget, and with acceptable levels of quality.
The DSDM is typically applied to information systems projects that are charac-
terized by challenging timescales and budgets, and seeks to address many of the
common reasons for information systems project failure, including exceeding bud-
gets, missed delivery deadlines, poor quality, lack of user involvement, and lack of
senior management commitment.
Agile Testing: How to Succeed in an Extreme Testing Environment

The DSDM is founded upon nine key principles:

1. Active user involvement is imperative.
2. DSDM teams must be empowered to make decisions.
3. The focus is on frequent delivery of products.
4. Fitness for business purpose is the essential criterion for acceptance of deliverables.
5. Iterative and incremental development is necessary to converge on an accurate
business solution.
6. All changes during development are reversible.
7. Requirements are baselined at a high level.
8. Testing is integrated throughout the life cycle.
9. A collaborative and cooperative approach among all stakeholders is essential.

These principles were framed by combining the agile best practices and experiences
of the DSDM consortium members. The first draft of the DSDM framework
was delivered in January 1995, followed in February that year by formal publication
of the framework [21].
Within its overall iterative approach, a DSDM project is structured into three
distinct phases:

1. Preproject phase – this phase sets the context of the project and ensures it is set up correctly from the outset to give the project the best likelihood of success. Key products delivered from this phase include the initial definition of the business problem to be addressed, budget and resourcing, outline scope and plans for the feasibility study, and a decision whether or not to proceed with the project.
2. Project life cycle phase – this phase combines both sequential and iterative stages,
which drive the incremental development of the system. Following the initial
feasibility and business study stages, the project iterates through the functional
model iteration, design and build iteration, and implementation stages.
3. Postproject phase – this phase covers postdelivery activities such as maintenance, enhancements, and software fixes, and is used to ensure the ongoing effective and efficient operation of the delivered system.

In terms of its continuing adoption and use, DSDM seems to have been more
successful at being accepted by senior management than earlier agile approaches
such as RAD due to the perception that it represents a more mature and formal agile
method. This perception is further reinforced as a result of the frequent integration
of DSDM and the PRINCE2 (PRojects IN Controlled Environments) project manage-
ment method [22]. As a result, more conservative senior managers have been more
receptive to the use of DSDM on larger/higher-profile projects, with the result that a
substantial and vigorous user population has been established around the globe.
The DSDM has also proven to be popular with development and testing practitioners, who have continued to refine and enhance the method; since its initial
publication in 1995, DSDM has continued to evolve, incorporating new best practices and technologies wherever appropriate.
Appendix C contains further details of the principles and practices of DSDM.

3.5 Scrum
Scrum is a project management method for agile software development and testing that enables the creation of self-organizing teams by encouraging co-location of all team members (including customer representatives) and effective verbal communication across all of the disciplines involved in the project.
A key principle of Scrum is the recognition that during a project, the customers
are likely to change their minds frequently about what they want and need (often
called requirements churn), and that such customer needs cannot be addressed
successfully in a traditional predictive or planned manner. As such, Scrum adopts an empirical approach – accepting that the problem cannot be fully understood or defined, focusing instead on maximizing the team's ability to deliver quickly and respond to emerging requirements.
In terms of the genesis of Scrum, as early as 1986 Takeuchi and Nonaka [23]
had observed that projects employing small, cross-functional teams were typically
the most successful, and they coined the phrase “rugby approach” to describe the
phenomenon. The first explicit reference to Scrum2 in the context of software development came in 1990 in work by DeGrace and Stahl [24].
In the early 1990s, work on agile methods by Ken Schwaber and Jeff Sutherland, at
their respective companies Advanced Development Methods and Easel Corporation,
led Sutherland and Schwaber to jointly present a paper at the 1996 International
Conference on Object-Oriented Programming, Systems, Languages, and Applications
describing Scrum [25]. Schwaber and Sutherland collaborated during the following
years to further develop and document their experiences and industry best practices
into what is now known as Scrum.
In 2001, Ken Schwaber teamed up with Mike Beedle to write up the method in
Agile Software Development with SCRUM [26].
A major factor in the success of Scrum projects is the drive for efficient communications, with techniques such as daily short focused stand-up meetings combined
with explicit team roles (e.g., Pig and Chicken; see Appendix D) to manage who may
contribute to the meetings and in what manner.
Although Scrum was originally intended to be used for the management of
software development projects, it can be employed in running software maintenance
teams or as a program management approach: Scrum of Scrums.
Appendix D contains further details of the rules and practices of Scrum.
2 Scrum: a play in the ball game rugby in which two groups of players mass together around the ball and, with their heads down, struggle to gain possession of the ball. Typically held following an illegal forward pass.

3.6 Other Agile Methods
This section provides a high-level overview of the key features of a number of other
agile methods and approaches to software development and testing. By necessity,
this is only a snapshot of the current popular methods and does not seek to be
a comprehensive catalog of all agile approaches. Because this subject evolves so
quickly, it is recommended that the reader also refer to [26] as a useful source of
updates on agile methods.

3.6.1 The Enterprise Agile Process (formerly XBreed)
Developed by Mike Beedle with the goal of developing "reusable software in record time," this twenty-first-century agile approach combines a number of best practices from other agile methods, such as XP and Scrum. Specifically, EAP [28] employs best
practices from Scrum as the basis of its management framework and incorporates a
subset of XP techniques for its development process.
A key feature of the method is the use of design patterns to create reusable objects;
EAP creates a library of components that can be easily reused in new software projects
to save time, effort, and cost of development and testing, while improving quality by
the reuse of “tried and tested” components.
The EAP development process promotes the use of well-trained, skilled, and
experienced practitioners to populate its teams, and encourages the use of effective
knowledge transfer between team members, which, combined with a low communi-
cation overhead, helps keep the project running smoothly and ef¬ciently.

3.6.2 Ruby on Rails
Created by David Heinemeier Hansson from his earlier work on the Basecamp web-
based project management tool [29], Ruby on Rails (RoR [30]) is an open-source web
framework that is optimized for "programmer happiness and sustainable productivity." Typically used by developers for relatively short, client-driven web development projects, and often referred to as "Rails," RoR use is guided by two key principles:

Convention over configuration (CoC) – in effect, development by exception; the benefit of CoC is that only the unconventional aspects of the application need to be specified by the developer, resulting in reductions in coding effort. For example, if the developer creates a class "Part" in the model, the underlying database table is called "Parts" by default. It is only if one deviates from this convention, such as calling the table "parts_on_hand," that specific code needs to be written that utilizes these names.
Don't repeat yourself – RoR promotes an approach where information is located
in a single, unambiguous place. For example, using the ActiveRecord module of
RoR, the developer does not need to explicitly specify database column names in
class de¬nitions. Instead, RoR can retrieve this information from the database.
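The convention-over-configuration idea described above can be sketched in a few lines of plain Ruby. This is an illustrative sketch only, not actual Rails or ActiveRecord code, and the pluralization rule is deliberately simplified: a model class derives its table name from its class name by default, and code is written only when deviating from that convention.

```ruby
# Sketch of convention over configuration (NOT real Rails code):
# the table name is derived from the class name unless an explicit
# override (the "exception") is supplied by the developer.
class Model
  # Convention: class "Part" maps to table "parts".
  # (Real Rails uses a far richer pluralization/inflection engine.)
  def self.table_name
    @table_name ||= "#{name.downcase}s"
  end

  # Written only when deviating from the convention.
  def self.table_name=(explicit_name)
    @table_name = explicit_name
  end
end

class Part < Model; end
class Inventory < Model; end

Part.table_name                          # follows the convention
Inventory.table_name = "parts_on_hand"   # explicit, unconventional override
```

The developer writes configuration code only for the `Inventory` class, which breaks the naming convention; `Part` needs nothing beyond its class definition, which is the effort saving the principle aims at.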