to provide leadership in software development process. It has to be said that I started
with a love-hate relationship with agile methods, born out of many years' experience
of seeing them both used successfully and (sometimes) abused; let me
explain:

I love some of the techniques that agile methods champion as I truly believe that
they have, in their own way, had a revolutionary impact on the way the industry
approaches software engineering.
At the same time though, I hate seeing them being abused to excuse bad practices,
or, even worse, used as a political tool to bash other valuable practices.

So, taking a step back, it's not the methods themselves I have a problem with -
far from it - but rather the blind faith of those who have jumped on the agile
bandwagon without proper consideration for what they are trying to achieve.
And who hasn't jumped on a bandwagon at some stage or other? In fact, I'd suggest
that one of the best things about the emergence of agile methods is that it has stirred
up passions for the use of software development processes as a way of improving the
lot of the IT professional - fantastic!

But blindly applying agile methods without really trying to understand what
problems you're trying to solve with them, and which problems you're not
relying on them to solve, can cause unnecessary problems. I'd like to start by
observing three reasons why adopting methodologies sometimes goes wrong:

Good method, wrong type of project - Some methods just do not suit certain
styles of projects - period. Good luck trying to use XP (Extreme Programming
[20]) "out of the box" to deliver a "system of systems" [90] style of project covering
hardware and software, involving multiple large teams who are distributed around
the world. No use blaming Kent Beck if it goes wrong - XP itself describes the
class of projects for which it is intended.
Good method, overrelied on - Make sure you understand the limits of the method
that you are using. Scrum is not going to tell you the form that your architecture
needs to take. But then, it doesn't profess to provide method support for architecture.
If you need architectural guidance then you shouldn't be relying on Scrum
to provide it. Some people fall into the trap of overrelying on a method to cover
all their method needs. You need to have a clear idea as to what you want out of a
method, and then choose the best bits from those methods that meet your needs.
Good method, misunderstood - To illustrate, a fictitious yet not unrealistic
quote from a project manager: "I bought a really good book on Rational Unified
Process [7] and I've made sure that we are applying all of its roles, work products,
and tasks, and yet the team hate it and my projects are not delivering. I thought
the three amigos were supposed to be process gurus!" Erm, do you know that
RUP is a process framework and not a process? I guess maybe you aren't applying
all of the roles, especially not the process engineer, whose role responsibilities
include "tailoring the process to match the specific needs of the project" and
"assisting the project manager in planning the project" [91].

So building on this, for anyone who has spent any amount of time thinking long
and hard about software development methodologies, the inevitable and oft-written
conclusion is that the best thing to do is to customize your own - consider the types
of projects you need to run in your organization and then take the best bits of the
various methods out there (as applicable) and combine them to "grow your own!"
Apply care and attention, prune where required, and over time you will end up
with a mature method that perfectly fits the needs of your organization. It also helps
to start with the right seeds (base methods) and cuttings (customizations). Don't
get me wrong - there is no reason for the average organization to have to build a
custom method from scratch. The exercise should be one of assembly rather than
build. If you're building your own from the ground up then you're doing it the hard
(and expensive!) way.
This case study focuses on methods to support a specific class of project - those
that produce SOA solutions. Supporting these projects requires "special tactics" over
and above what is covered by "out-of-the-box" agile methods. This is the raison d'être
for me growing my own method - AgileSOA, which, as the name suggests, combines
the best of two classes of methods: agile methods and methods used to produce SOA
solutions.
This case study is based on my experiences in applying this method to a number of
significant customer projects over the years at Prolifics and at my previous company,
7irene, where this method was first formed. Any method should be judged by its
performance, and we have yet to have a project use this method that has not been
a success (although we have learned along the way!). I hope you find some parts of
this case study useful to your own efforts to "grow your own" method!


21.2 Overview of the AgileSOA Testing Challenge
I'll start with my three favorite things about agile methods:

They have improved many organizations' ability to deliver with short-term focus,
as they use iterations to plan the project team's work and deliver the solution
incrementally.
They encourage customer involvement in the process.
They are relatively simple to understand and succinct to describe.

Expanding on this, one of the most tangible ways I've seen agile methods impress is
in focusing activity on getting the first release of a solution into
production. This solves the problem that many organizations have, which is that
their projects seem to drag on forever without actually delivering anything.
Agile methods normally achieve this level of focus by planning short-term, using
iterations, and by measuring progress in terms of functionality produced by each
increment. This works very well as teams are motivated by short-term success. After
all, success breeds success! Also, if there are any issues with the project's delivery
capability, they will be encountered early in the project life cycle rather than
being hidden until late. The sooner the issues are identified, the sooner they can be
resolved.
Turning to SOA, it's important to note that many of the important benefits of
SOA are realized when building "long-term IT solutions." It's fundamentally about
building your software in such a way that it improves the long-term cost and
flexibility of your IT solutions.1 The following define the usage of "long term" and
"short term":

Long-term IT solutions (strategic)
support an ongoing business function,
have a "long" life span (5 years plus),
are core IT systems that support the business and need to be extensible to
support new business requirements,
have high quality requirements,
often have a high degree of interconnection with other IT systems, and
have considerable consequences of failure.
Short-term IT solutions (tactical)
solve a niche problem,
have a "short" life span,
are often required to provide an edge over competitors,
have lower-quality requirements,
do not have considerable consequences of failure, and
can sometimes be R&D-type efforts.

1 It is worth pointing out that this is not to say that there are only long-term benefits. We've achieved many
short-term benefits in building SOA solutions - less risk, more organized work effort, and improved
direction, to name a few.

Agile methods have achieved huge success when approaching short-term IT solutions.
However, I have seen issues arise when these same methods have been applied
to SOA "long-term solutions."
This doesn't need to be the case. The challenge for our agile SOA projects is to stay
simple to understand and to keep a short-term focus while adding the necessary support
to plan and design SOA solutions and addressing the long-term considerations
of the solutions you deliver.
Before we move on to look at my development process, I'd like to outline this
challenge by mentioning some of the issues we want to avoid when building SOA
solutions:
"Loose" requirements factoring results in duplication of effort and/or increased
refactoring. A requirement is "a statement that identifies a necessary attribute,
capability, characteristic, or quality of a system in order for it to have value and
utility to a user" [92].
I very much like the "factory/production-line" analogy when thinking about
our agile software development projects. A master list of requirements is created
using one of the popular mechanisms for expressing requirements - features, use
cases, or story cards, to name a few. Iteration planning will scope out a subset of
this master list to be delivered by each iteration.
The "factory" (combination of team and tools) will then continue to produce
increments "production-line style" by running these scoped iterations, with each
iteration producing an increment.
As it is the requirements that define the contents of each iteration, it is
important that there is no functional overlap between any of the requirements;
otherwise, the result will be duplication of effort and, inevitably, some refactoring
of previous increments.
"Fat requirements" don't fit well into iterations. Your requirements specifications
need to be structured in such a way that they fit well into iterations. If the smallest
unit of requirement that you have results in two months of work to deliver, then
you've got a problem as this doesn't fit well with an iterative mode of delivery.
Lack of agility in specifications. If we take agility to mean the ability to change
direction quickly, then we know that our specifications need to be able to handle
two kinds of changes: (1) changes to modify functionality (i.e., functionality that
has already been delivered) and (2) changes to include new functionality.
By "lack of agility in specifications," I mean specifications that are not able
to quickly incorporate these kinds of changes. Otherwise we risk ending up in
a situation where the specifications get sidelined and become out of touch with
the solution code.
So we need to focus on producing agile specifications, as we know that we're
at a minimum going to encounter a lot of the second type of change when doing
iterative development, as each iteration adds new functionality that builds on
that delivered by previous iterations. Hopefully we'll also try and minimize the
first type of change, but more on that later.
Extensibility and flexibility qualities suffer without design specifications. Two
very important qualities in a good SOA solution are its degree of extensibility and
flexibility:
A good SOA solution is able to incorporate functional extensions without
undue effort (extensibility). Also, a good SOA solution should allow for its
constituent parts to change without compromising its functionality and reliability
(flexibility). This is required when the same parts are shared between multiple
SOA solutions and one of the solutions requires changes that shouldn't
compromise the other solutions.
Extensibility and flexibility don't happen by accident. They need to be built in,
either as a forethought (design specifications) or an afterthought (refactoring).
There are limits to the extensibility and flexibility that you will achieve
in a solution through refactoring - so design specifications are critical to
achieve these qualities in meaningfully sized solutions.
Mixing requirements and design makes it harder to understand what the system
will do. Requirements specifications provide a tangible understanding of what the
solution will do. As such they need to be understandable by the customer, who
needs to be satisfied that what the solution is planned to do (requirements) is what
they want it to do. Mixing solution design into your requirements specifications
makes this difficult to achieve.
Lack of specifications means code can become impenetrable (or costly to understand).
Not only can a lack of specifications cause problems with the initial
production release of the solution, but there will definitely be problems later.
As we're focusing on our previously defined "long-term IT solutions" with SOA,
it is inevitable that during a sizeable part of the solution's life span there will
be a number of pairs of eyes peering at the code that will need to
understand how the solution works. Now, even with the best of intentions, there
is a limit to the degree that the code for a solution can be "self-documenting."
Specifications will play an important role in helping to understand the solution's
requirements and design in a cost-efficient manner.
Solution costs don't end with delivery of the first production release. It is a well-
known fact that when it comes to long-term IT solutions as defined previously,
the cost of producing the first release is relatively small compared to the costs
over the total life span of the solution. Examples of later costs include
costs associated with delivering functional extensions,
costs associated with providing support to the users,
costs associated with fixing defects found, and
costs associated with technology upgrades.

A number of practices already mentioned contribute to minimizing these costs:
up-to-date agile specifications, and flexibility and extensibility built in through
design specifications.
An overriding focus on functional delivery at the expense of these practices
will mean that any "savings" made by taking shortcuts in the delivery of the first
production release will be heavily outweighed by later resulting costs.

Lack of design guidance often results in technologies being inappropriately
used. An end-to-end SOA solution might end up including a number of different
implementation technologies - Portlets2 running in a Portlet engine for the user
interface, an enterprise service bus to plug together the various service-oriented
parts, executable business processes running in a Business Process Execution
Language (BPEL) engine, messaging middleware to connect to external systems,
adapters for connecting to specific external solutions. Many of these technologies
are fairly specific in terms of the type of solution part they should be used
to implement, which means that it takes a bit of foresight to plan to use the
technologies appropriately. As an example, although you could write most if not
all of your solution's logic in BPEL, is that a good idea? It is probably better to only
use it to write your business process logic, which, strangely enough, is what BPEL
was designed for.

Without a design for the SOA solution being built, it is that much harder for
developers to pick and choose where to use implementation technologies
appropriately.

Overreliance on refactoring to fix design problems results in projects stalling.
Agility is about being able to quickly change direction when you need to.3 However,
this doesn't mean that it is a good idea to rely on changing direction a lot
to get you on the right track. This is especially true when it comes to your SOA
design. Sure, you could try to "evolve" this by building the code to meet the functional
requirements and then refactoring later to achieve a good design for this code -
but this isn't a way of working that scales well or suits SOA-style solutions.

2 Portlets are pluggable user interface software components that are managed and displayed in a web
portal.
3 "When you need to" is an important caveat. Too much change in direction over the life span of a project
can result in a large amount of avoidable cost.


I have seen projects where the delivery of new functionality has stalled for months while
the team refactored the current solution before adding new functionality. This
just isn't acceptable to the customer. Although refactoring might have its place,
overreliance on it as a way of evolving the solution's design does not work when
it comes to SOA solutions.
Planning and measuring just developer activities is not enough. Our software
projects consist of more than just developers. Excluding project managers and
other supporting roles, they include systems analysts, architects/designers, and
testers. Each of these subteams needs to be taken into account in both planning
of project activities and measuring of project progress. If you're not measuring
the progress of your systems analysts in creating requirement specifications,
then you might miss the fact that you are running out of completed requirement
specifications to hand to your architects/designers, which will have a knock-on
effect on your developers and, later, your testers.
Relying on having the entire project team co-located is often not practical.
Having your entire project team co-located will undoubtedly have benefits in
terms of improving communications. However, this is often not practical. Most
of the SOA projects I've worked on have included multiple suppliers, and more
often than not each supplier's resources are geographically distributed.
Your development method cannot rely on co-location of resources to achieve
good communication between team members.


21.3 Definition of an Agile Testing SOA Process
Now that we've spent some time dwelling on the challenges faced by teams producing
SOA solutions, I'd like to tell you a little bit about my AgileSOA process, which we've
been using for many years to deliver SOA solutions for our customers. I'll do so by
describing each of the key aspects that I think have made it work for us.

Specialized Subteams in the Iteration Workflow
Using the RUP as a framework, AgileSOA has a specialization of roles in the project
team. These are organized into subteams, each having well-defined work products
that they are responsible for, and each having their place in the overall iteration
workflow. In this way, the "workers" in the team are divided up quite sensibly (if
somewhat predictably) as follows:

The requirements team is responsible for creating specifications that describe
the requirements for the software solution.
The design team is responsible for interpreting the requirements specifications
and in turn creating corresponding design specifications. This includes describing
the overall architecture of the software.
The implementation team, as any developer will tell you, is where the "real
work" is done. Not only is the team responsible for creating the solution code,
but also for producing developer tests that will live with the code to be used for
regression testing.
The test team is responsible for rooting out any defects that exist in the code
produced by the developers. These defects are passed back to the implementation
team for fixing.

21.1 AgileSOA iterations across time boxes. (Each iteration runs Requirements, Design,
Implementation, and Test in four consecutive time boxes to produce a solution increment;
successive iterations are offset by one time box, so Iteration 1 spans time boxes 1-4,
Iteration 2 spans 2-5, and Iteration 3 spans 3-6.)

The SOA solution is delivered incrementally using iterations, with each iteration
planned across four end-to-end time boxes, each time box being used to focus the
efforts of the four subteams just described. In practice this looks like what is shown
in Figure 21.1.
The net result is that, in any given time box, each of your subteams will be
working on a different iteration - each being one ahead of the other in the sequence
of requirements, design, implementation, and test.
This organizes the work done as part of each of the project iterations. Note
that not all the work is organized this way. As an example, you may have business
modellers producing models that provide a useful business context for the solution
in terms of process models and information models. This would be done in advance
of your iterations. Also, you will have people responsible for setting up and executing
the performance and acceptance tests of the solution. Most of this happens after your
iterations have completed, because these tests need to cover the entire solution (all
increments) that will be released to the end users.

Structuring Requirements for Iterative Delivery
Requirements are crucial to our development process as they are used not only to scope
out the work that the overall project life cycle needs to deliver but also to divide this
work into iterations.
We use two mechanisms for scoping requirements for our SOA projects:

First, lightweight lists of needs/features are used to quickly produce a view of what
is in scope for the project life cycle and, just as importantly, what is out of scope.
Second, these features are traced to use cases, which become the primary means
for structuring the project requirement specifications and scoping the contents
of each iteration.


21.2 Use cases contain flows. (A use case comprises a basic flow and a set of alternative
flows - Alternative 1, 2, 3, and so on - that deviate from it.)



So why do I think use cases make such ideal units for our requirements specifications?
Let's look at two useful properties:

They are carefully factored, nonoverlapping partitions of system functionality.
Each scenario that the system should support will end up in a single use
case. This means that by choosing the use case as the primary unit for grouping
our system behavioral specifications, we avoid overlap in coverage between
specifications.4
Use cases can be further divided into use case flows. Each use case has a basic flow,
which should describe the simple scenario. Scenarios that deviate from this basic
flow are covered by describing their deviations as alternative flows (alternatives
to the basic flow) - see Figure 21.2.

These two properties make use cases the ideal format for our requirements specifications.
First, the fact that use cases don't overlap in their coverage of the system's
scenarios means that they make good units for planning solution increments. You
know that by allocating the full set of in-scope use cases to iterations you will have
covered all the required functionality without any duplicated effort across iterations.
Second, the fact that we can break up use cases into smaller parts means that we
reduce the problem of having "fat" specifications that don't fit into a single iteration.5
It is possible to assign a use case's basic flow to an iteration, and then assign groups
of its alternative flows to later iterations.

4 This is not to say that the same step might not occur in more than one system use case. However, you
wouldn't have entire scenarios duplicated between system use cases.
5 It is still sometimes necessary to take a single flow (normally the basic flow) and break it up over
more than one iteration.


UC#      Use Case                             Use Case Flow                     Iteration

UC17.1   Capture client campaign order        Basic flow                        1
UC17.2   Capture client campaign order        Order information is incomplete   1
UC24.1   Request carrier certification test   Basic flow                        1
UC24.2   Request carrier certification test   Issues with certification         2
UC33.1   Make payment short code/s ordered    Basic flow                        2
UC33.3   Make payment short code/s ordered    Credit card payments rejected     2
UC34.1   Note short code details              Basic flow                        2

21.3 Use case plays planning example.


I define a use case play as follows. Each iteration that a use case is assigned to
contains a play of that use case. That is, a use case play is one or more flows of a
use case assigned to an iteration for delivery. For planning purposes, it is normally
a good idea to split a use case up into multiple use case plays to manage the risk
associated with delivering the use case.
For an idea as to what this looks like in practice, Figure 21.3 shows a set of use
case plays from an example iteration planning work product. This example shows
seven use case plays across two iterations. To understand how this works, we note
the following:

Iteration 1 contains two use case plays - the first contains both the basic flow and
an alternative flow (order information is incomplete) of the Capture client campaign
order use case, while the second contains just the basic flow of the Request
carrier certification test.
Request carrier certification test has two use case plays - the first is assigned to
iteration 1 and contains just the basic flow; the second is assigned to iteration 2
and contains an alternative flow (issues with certification).
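
To make the planning structure above a little more concrete, here is a minimal sketch in
Java of how use case plays and their iteration assignments could be represented and
sanity-checked; the class and field names are hypothetical illustrations, not part of the
AgileSOA work products.

import java.util.*;

// Hypothetical sketch of use case plays for iteration planning; names are illustrative only.
class UseCasePlay {
    final String useCase;       // e.g., "Capture client campaign order"
    final List<String> flows;   // one or more flows of that use case
    final int iteration;        // the iteration this play is assigned to

    UseCasePlay(String useCase, List<String> flows, int iteration) {
        this.useCase = useCase;
        this.flows = flows;
        this.iteration = iteration;
    }
}

public class IterationPlanSketch {
    public static void main(String[] args) {
        List<UseCasePlay> plan = List.of(
            new UseCasePlay("Capture client campaign order",
                List.of("Basic flow", "Order information is incomplete"), 1),
            new UseCasePlay("Request carrier certification test", List.of("Basic flow"), 1),
            new UseCasePlay("Request carrier certification test",
                List.of("Issues with certification"), 2));

        // Because use cases are non-overlapping, each flow should appear in exactly one play.
        Set<String> seen = new HashSet<>();
        for (UseCasePlay play : plan) {
            for (String flow : play.flows) {
                if (!seen.add(play.useCase + " / " + flow)) {
                    throw new IllegalStateException("Flow planned twice: " + flow);
                }
            }
        }

        // Group plays by iteration to show each iteration's scope.
        Map<Integer, List<String>> byIteration = new TreeMap<>();
        for (UseCasePlay play : plan) {
            byIteration.computeIfAbsent(play.iteration, i -> new ArrayList<>())
                       .add(play.useCase + " " + play.flows);
        }
        byIteration.forEach((it, plays) -> System.out.println("Iteration " + it + ": " + plays));
    }
}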



Specifications That Are Rich but Get to the Point
One of the more popular aspects of agile methods is the disdain with which they treat
lengthy, verbose specification documents. And quite rightly so! Vague, ambiguous
specifications defeat the purpose of putting together specifications. Worse still is
when the specifications attempt to make up for quality with quantity!
Specifications need to be concise, clear, and unambiguous for them to have value.
But they must also be easy to change - and this would suggest that they are no more
"wordy" than they need to be.
In the AgileSOA process, the focus is on a key set of work products that attempt to
achieve clarity and unambiguity while being as concise as possible. This is achieved
by making the semantics of the specifications as rich as possible so that much can
be said with little effort.


For SOA solutions, the key thing that we need to specify is the individual service
contracts that are consumed by the various service consumers, and provided by the
service providers.
Figure 21.4 shows a set of work products that contribute to our goal of creating
quality service contracts:

The domain model is probably the simplest and yet most widely used of these
work products. It provides a structured view of the business information in the
business domain. This is important as our service contracts will need to pass
information around, and it is crucial to understand how this information is
structured for you to have a good-quality set of service contracts.
The domain model is coupled with the business process model, which describes
the flow of the business processes. This is important as the services that you
produce are internal aspects of a solution that needs to support a business. Any
flaws in the understanding of the business and its processes can have a knock-on
impact on the solution requirements, which in turn will have a knock-on impact
on the service contracts.
Having taken a look at the in-scope features for the SOA solution, a set of use
cases is created in the use case model (i.e., System Use Cases), which provides
a factored view of the requirements expressed by these features. The use cases
should be cross-referenced against both (1) the business domain types in the
domain model, so it is clear what business information they will act upon, and
(2) the tasks in the business process model, so it is clear what business tasks
they will automate. Use case specifications provide us with descriptions of what
the solution needs to do, and the service contracts will be the focus points of the
collaborations that provide this behavior.
The external systems model contains specifications of any systems external to
your SOA solution that it needs to interface with. The use case model
should identify which use cases these systems are involved in;
that is, the system actors (as opposed to human actors) in your use case model
should have corresponding external system specifications in the external systems
model. Certain service contracts in our solution will have their behavior provided
by service providers that integrate with these external systems. Therefore,
an understanding of these external systems is crucial to ensure that the service
contracts for these service providers are suitable for their integration.
The service model is where the design for your service-oriented solution is
captured [93] and is ultimately where the service contracts live. The service
contracts are specified using service specifications, which define the interface
points between service consumers and service providers. These are organized
into service-oriented systems. The behavior of these systems is described using
service interaction specifications, which are organized into service collaborations
that match the use cases one-to-one.
21.4 Key AgileSOA specification work products. (The figure shows the Domain Model,
Business Process Model, Use Case Model, External Systems Model, and Service Model,
together with an example use case specification and service interaction specification
for a "Cancel work request" use case. The example use case specification reads:
Level - User Goal; Primary Actor - Request Workflow; Goal - mark the work request
as cancelled; Pre-conditions - a work request has been submitted to the Request
Workflow system; Triggers - a work request may be cancelled at any time by a service
of the Request Workflow system, which then automatically initiates this use case.
Basic flow: 1. The System looks up the WorkRequest; 2. The System updates the status
of the WorkRequest (WorkRequest.status = CLOSED_CANCELLED_BY_USER); 3. The System
updates the status of any outstanding task (Task.status = RELEASED) to
Task.status = WORK_REQUEST_CANCELLED. Alternative flow 1a, Unknown work request:
1a1. Write an entry to the log file with a reference to the rejected work request and
the reason for rejection; FAIL - work request unknown.)
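
As a rough illustration of what a service specification ("service contract") boils down to
at the code level, here is a minimal Java sketch based on the "Cancel work request" example
above; the interface, method, and exception names are hypothetical, not taken from the
actual AgileSOA service model.

// Hypothetical sketch of a service specification: the interface point between a
// service consumer and a service provider. Names are illustrative only.
public interface WorkRequestService {

    /** Marks the given work request as cancelled (the "Cancel work request" use case). */
    void cancelWorkRequest(String workRequestId) throws UnknownWorkRequestException;

    /** Raised for the "Unknown work request" alternative flow. */
    class UnknownWorkRequestException extends Exception {
        public UnknownWorkRequestException(String workRequestId) {
            super("Work request unknown: " + workRequestId);
        }
    }
}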


21.5 Requirements and design specifications. (Each use case maps one-to-one to a service
collaboration, and each use case flow - the basic flow and alternatives 1, 2, 3, and so on -
maps to a corresponding service interaction.)

Let's briefly consider these work products in the context of the subteams working in
our iterations (see Figure 21.5):

The requirements team writes a use case specification6 describing the steps in
each of the flows that are a part of that use case play.
The design team writes a service interaction spec that provides the behavior
described by the matching use case specification.
The implementation team creates and tests an implementation of the requirements
and design specifications (i.e., of the use case specification and the matching
service interaction specification).
The test team tests the solution implementation to ensure that it matches the
specifications.

Specifications Structured for Change
Not only must the specifications be concise in order to make them easy to change,
but they must also be structured to make changes easier. In order to do this, for any
given change we have to achieve the following:

make it easy to quickly identify the various specification sections that need to
change,
make it easy to change those and only those sections, and
avoid having to make the same change in multiple places.

6 Depending on the project, these are normally supplemented by other requirements specification work
products such as screen mock-ups.


Some examples of this structuring follow:

Each use case specification is clearly divided into its basic flow and
alternative flows. Once it is known which flows are affected by a change, it is clear
exactly where in the document these changes should be made. Use case flows can
be inserted by reference instead of by copy-and-paste.
The interactions in the service interaction specifications match these flows
exactly - there is a separate service interaction diagram for each use case flow.
This means that, for any given use case specification change, it is easy to see
exactly which matching parts of the service interaction specification need to
change. As with use case flows, service interactions are inserted by reference
instead of copying and pasting.
Each service consumer and service provider lives in its own package, and these
are separate from the service-oriented solutions that use them. This means that
the consumers, providers, and systems can all be modified and pop in and out
of existence individually. Also, service specifications are used by reference, and
therefore changes are automatically reflected in the service interactions they
appear in.

Workflow Tooling to Help Track What's Going On
Most of our projects consist of hundreds of individual use case flows. According to
the process, there are four high-level pieces of work to do for each of these use case
flows - requirements, design, implementation, and test.
That means that, for a 200-use-case-flow project, there are at least 800 individual
pieces of work that need to be done. As there are sequencing dependencies between
these 800 individual work items (requirements needs to be done before design,
design before implementation, and implementation before test), it's crucial for a
project manager to know what the status is of each of these work items so that any
bottlenecks or other planning problems can be spotted.
We use workflow tooling to help out here. This has two main benefits:

It's easier to track progress for the individual, the subteam, and the project team
as a whole.
It's easier for each person to know what work is currently on their plate and what
is coming down the line.
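
As a rough sketch of the kind of tracking this tooling gives us (the class, states, and
sequencing rule below are a hypothetical illustration, not the actual workflow tool used on
these projects):

import java.util.*;

// Hypothetical sketch of per-flow work item tracking with the sequencing dependency
// requirements -> design -> implementation -> test enforced.
public class WorkflowBoard {
    enum Discipline { REQUIREMENTS, DESIGN, IMPLEMENTATION, TEST }
    enum Status { NOT_STARTED, IN_PROGRESS, DONE }

    // Status of each discipline's work item, per use case flow.
    private final Map<String, EnumMap<Discipline, Status>> board = new HashMap<>();

    void addFlow(String flowId) {
        EnumMap<Discipline, Status> items = new EnumMap<>(Discipline.class);
        for (Discipline d : Discipline.values()) items.put(d, Status.NOT_STARTED);
        board.put(flowId, items);
    }

    void start(String flowId, Discipline discipline) {
        EnumMap<Discipline, Status> items = board.get(flowId);
        int ordinal = discipline.ordinal();
        if (ordinal > 0 && items.get(Discipline.values()[ordinal - 1]) != Status.DONE) {
            throw new IllegalStateException(
                flowId + ": cannot start " + discipline + " before its predecessor is done");
        }
        items.put(discipline, Status.IN_PROGRESS);
    }

    void finish(String flowId, Discipline discipline) {
        board.get(flowId).put(discipline, Status.DONE);
    }

    // Spot bottlenecks: how many flows are still waiting at each discipline?
    Map<Discipline, Long> backlogByDiscipline() {
        Map<Discipline, Long> backlog = new EnumMap<>(Discipline.class);
        for (Discipline d : Discipline.values()) {
            long waiting = board.values().stream()
                    .filter(items -> items.get(d) == Status.NOT_STARTED).count();
            backlog.put(d, waiting);
        }
        return backlog;
    }
}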

Handle Change While Avoiding Unnecessary Change
Being agile is about being able to handle change well when required. However, we
still want to avoid unnecessary change as much as possible, as it causes unnecessary
expense.
Two types of change exist:

extension - it already does X and Y. Now we also need to make it do Z.
modification - the way it does X is incorrect. We need to change it.


Now the first kind of change is unavoidable in our iterative development world. And
this isn't a bad thing. Each iteration will be adding new functionality to what was
produced by the previous iteration.
However, we want to avoid as much of the second type of change as possible. I'll
subdivide this modification type of change into

modifications to the requirements - changing the way the solution behaves, and
modifications to the design - not necessarily changing the way the solution
behaves, but rather changing the shape of the implementation.

We avoid unnecessary modifications to the requirements as follows:

Create a shallow but complete view of the requirements early on, and refine that
later during the iterations. This means that all requirements have at least been
considered before worrying about detailing any of them.
Cross-reference requirements against the business domain model and business
process models. This is a useful exercise to pick up issues.
Tackle difficult use cases in early iterations. These are the use cases that are most
likely to have a knock-on effect on other use cases once you get into the detail.
Showcase increments to end users as soon as you have working increments.
Anything learned from these showcases can save time for those use case plays
that haven't been assigned to the requirements team yet.

We avoid unnecessary modifications to the design as follows:

Create a shallow but complete view of the design early on and refine it during the
iterations. This means the overall architecture can be assessed before any of it gets
refined.
Build prototypes early on to validate the design.
Design focus should follow requirements focus. Focus your design work on those
parts of the solution where the requirements are most solid.

Risk Mitigation on Integration Projects
With any software development project there are the risks that the solution might
not do what the end users want, that it might not do it in a way that suits their
pattern of working, or that the technology that you're using to build the solution
doesn't really suit the solution. But integration projects bring along a whole new
bag of risks.
To provide a few examples:

What if the mainframe application experts we've been provided don't understand
their own applications as well as they tell us they do?
What if the route planning software that we've bought doesn't actually work the
way that the Application Programming Interface (API) says it does?
What if the interface specifications that we have for our accounting package aren't
up to date or complete?


These are the kinds of risks that, if they materialize, can bring an integration project
to its knees.
For our projects we try to achieve two things:

Let's make sure that we have a clear description of what the system does.
Let's make sure that what it actually does matches this description.

For the first of these we use the previously mentioned external systems model. In
this we capture a concrete view of the interfaces that we need to interact with and
the information that the system holds. This will ensure that the way that we plan
on integrating with the solution will work and also that it can be described to the
developers.
Second, before we try to design against these interfaces, we write tests that
verify that the system works the way we think it works. It doesn't help to put in
design effort against an interface specification that is incorrect. A large amount of
the risk is taken out of integration projects by creating interface verification tests
early on in the project.
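
As a hedged illustration of such an interface verification test (the endpoint and the
expected behaviour below are invented for the example; in practice the assertions would
come from the vendor's interface specification), a JUnit sketch:

import static org.junit.Assert.*;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.Test;

// Hypothetical interface verification test: checks that an external system's interface
// behaves the way its documentation claims, before we design our integration against it.
public class AccountingPackageInterfaceVerificationTest {

    // Invented test endpoint; in practice this would be the vendor's test instance.
    private static final String BASE_URL = "https://accounting.example.com/api";

    @Test
    public void invoiceLookupBehavesAsDocumented() throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest
                .newBuilder(URI.create(BASE_URL + "/invoices/INV-0001"))
                .GET()
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());

        // The interface specification we were given claims a 200 response with a JSON body.
        assertEquals(200, response.statusCode());
        assertTrue(response.headers()
                .firstValue("Content-Type").orElse("")
                .contains("application/json"));
    }
}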

Usage of SOA Design Patterns to Ensure Extensibility and Flexibility
The success of your SOA solutions over time will be measured by how well these
solutions handle change. How easy is it to add new functionality to the solution?
What impact does this have on other solutions? Does change cause a reduction
in the quality of service offered by your solutions? Does it compromise their
behavior?
The key to ensuring this success is good design. And one of the keys to good
design is to have a good set of design patterns that are consistently used across your
solutions.
An example list of these is provided in Figure 21.6.

Estimation and Planning
To make our iteration planning effective, we ideally need an estimate for each of the
work items on the project. By our previous calculation, a project of 200 use case
flows will have an estimated 800 work items. We need a way to quickly get useful
estimates for each of these 800 work items!
The procedure we follow is a simple one:

The team places each use case flow into one of the categories of complexity
shown in Figure 21.7. The factor to the right is used to adjust estimates. So a
Complex is seen to be twice as much work as a Medium Complexity, a Simple is
half as much work, and so on.
We then have some basic estimates (in ideal days) for the work of each of the
subteams in order to deliver a Medium Complexity use case flow (Figure 21.8).
The result is a set of estimates for each of our work items (Figure 21.9).


The twelve architectural patterns shown in Figure 21.6 are:
1. Factor composition logic away from process logic
2. Factor atomic reusable logic into lower reuse layers
3. Factor application specific logic out of reuse layers
4. Base architecture on business relevant things
5. Manage complexity using SO systems
6. Derive atomic services from domain model
7. Service enable non-SO systems
8. Model data ownership
9. Keep service operation signatures meaningful
10. Keep architectural elements totally decoupled
11. Use shared messages and parameter types
12. Drive applications using business processes

21.6 Example of SOA design patterns. (The figure also indicates which patterns are related
to each other and which are mutually exclusive.)

The use case flows (rows in Figure 21.10) are each assigned to an iteration. In
doing this we can assign each of the work items to a time box. So if UC17.1 is
assigned to iteration 1, then the requirements work for it is assigned to time
box 1, the design work to time box 2, the implementation to time box 3, and the
test to time box 4. Similarly, if UC18.1 is assigned to iteration 3, then the requirements
work will be in time box 3, the design in time box 4, the implementation
in time box 5, and the test in time box 6.
This allows us to quickly sum up how much effort is required in each time box
for each team and, therefore, how many resources are required in each team.

Using this simple mechanism, the planning factors of number of resources, number
of time boxes, and amount of in-scope work can be adjusted until the best plan is
found.

Very Complex (VC) 4
Complex (C) 2
Medium Complexity (MC) 1
Simple (S) 0.5
Very Simple (VS) 0.25
Very Very Simple (VVS) 0.1

21.7 Complexity ratings.

Requirements 1
Analysis & Design 1
Implementation 4
Test 1.25

21.8 Estimates for iteration work items.
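
Putting the complexity factors in Figure 21.7 together with the per-discipline base estimates
in Figure 21.8, the arithmetic behind the work item estimates of Figure 21.9 can be sketched
as follows; this is a hypothetical illustration of the calculation, not the actual planning
spreadsheet or tool used on these projects.

import java.util.Map;

// Hypothetical sketch of the estimation arithmetic: work item estimate =
// complexity factor x base estimate (in ideal days) for that discipline.
public class EstimationSketch {
    // Complexity factors from Figure 21.7
    static final Map<String, Double> COMPLEXITY = Map.of(
            "VC", 4.0, "C", 2.0, "MC", 1.0, "S", 0.5, "VS", 0.25, "VVS", 0.1);

    // Base estimates in ideal days for a Medium Complexity use case flow (Figure 21.8)
    static final Map<String, Double> BASE_DAYS = Map.of(
            "Requirements", 1.0, "Analysis & Design", 1.0, "Implementation", 4.0, "Test", 1.25);

    public static void main(String[] args) {
        // e.g., a use case flow rated Complex (C)
        double factor = COMPLEXITY.get("C");

        double total = 0;
        for (Map.Entry<String, Double> discipline : BASE_DAYS.entrySet()) {
            double estimate = factor * discipline.getValue();
            total += estimate;
            System.out.printf("%-18s %.2f ideal days%n", discipline.getKey(), estimate);
        }
        // A Complex flow works out at 2 x (1 + 1 + 4 + 1.25) = 14.5 ideal days in total.
        System.out.printf("%-18s %.2f ideal days%n", "Total", total);
    }
}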

21.4 Results of an Agile Approach
Having applied the process for a number of years, we have had plenty of opportunity
to work out what works well and what doesn't work so well. Let me share some of
the results with you:

We've used this process on projects that included multiple vendors working in
multiple geographies, and the process allowed the necessary collaborations to
take place in a predictable and efficient way.
Using design to guide the development teams has improved the usage of implementation
technologies and has allowed specialist implementation teams (e.g.,
Portal, BPEL, integration connectors) to work effectively together as a team.
The consistent view of the design across multiple technologies from different
suppliers makes the overall solution easier to understand.
The process has made it possible for teams to leverage design parts built by
previous teams to produce new solutions.
Quality issues are caught early by our projects due to the fact that QA testing kicks
in from the very first iteration.
Our early showcases to the end users prove very effective at both getting useful
feedback and making end users feel an integral part of the process.
Doing proper progress tracking based on plans that are well thought out to start
with reduces the number of shortcuts required later.
Having a continuous build and test in place is very important for developer
productivity.
Missing or bad design puts pressure on the development team, who end up
releasing bad-quality code, which then puts further pressure on the testers.
Tracking change is crucial to ensuring the team's ability to execute the plans.
If changes are "slipped in" without planning updates, then the plans become
unachievable and meaningless.




21.9 Estimated use case plays.




21.10 Work items assigned to time boxes.

Velocity and other progress tracking mechanisms allow for mature conversations
with customers and other suppliers. It helps to be able to discuss progress in
black-and-white terms.


21.5 Lessons Learned
Succeeding in applying a software development process requires continual learning
and improving. Here are a few lessons learned that are worth sharing:

Keep an eye on whether everyone is playing by the rules. Sometimes you need to
make sure that other process participants are playing by the rules. Does the code
being produced meet the design specifications? Are specifications complete? It's
no use having a group of suppliers sign up to a process, only to then go ahead and
completely ignore the process. I've worked on a project where this happened
because the other supplier felt under pressure to say that they were following the
process. They had a requirements team writing use cases and a design team
working on design. At the same time, though, they had a large development team
ploughing ahead, writing code before specifications were even in place. The net
result was that the project nearly ground to a halt when they had to spend a lot
of time reworking the solution, and solution quality took a hit as software was
delivered to the test team late, which placed them under a lot of pressure.
Watch the defect count! One of the advantages of using iterations is that it allows
you to learn from your mistakes. If iteration 1 testing has a 90% failure rate on
its use case plays, it doesn't help if iteration 2 implementation is focusing all of
its effort on releasing another set of use cases instead of addressing the current
problems. Delivering loads of new use cases when the ones that you've already
delivered are full of bugs is a strategy that is going to end in tears. While it might
look like the team is making progress, having a big backlog of defects is just
asking for trouble.
UAT is not QA. There is often a temptation to go ahead and start user acceptance
testing (UAT) for a solution before QA has signed off that all the defects have been
fixed. This is likely to cause delays in UAT, as this phase of testing is not meant
to root out defects in the code but rather to check that the solution does its job
in the hands of test end users. Instead, it would be better to arrange another time
box to sort out the remaining defects before starting UAT.
Increased automation. It would be useful to make use of increased automation to
check things such as whether the code matches the design or whether
the design adheres to the design patterns.
22 The Agile Test-Driven Methodology Experiment
Lucjan Stapp, Assistant Professor, Warsaw University of Technology,
and Joanna Nowakowska, Rodan Systems S.A.




SYNOPSIS
This case study describes a research project to determine the differences between one group
of subjects following an agile testing method and another group of subjects following an ad
hoc approach to testing. Results showed that the use of an agile approach reduced the number
of defects by some 50% with just a 10% increase in effort over the control group.


22.1 Introduction
My name is Lucjan Stapp and I work as an Assistant Professor at the Warsaw University
of Technology. This case study describes research conducted at the Warsaw University
of Technology, Faculty of Mathematics and Information Science, to determine the
differences between one group employing an agile testing method and a control
group using an ad hoc approach to testing.


22.2 Overview of the Testing Challenge
Agile approaches are often proposed as a solution to improving a number of characteristics
of the testing process; typical improvements are claimed in speed and cost
of testing, but many of these claims are only backed up by qualitative or anecdotal
evidence.
This research was conducted to try to quantify the benefits of agile versus ad hoc
testing approaches.

Teams and Exercises
To check the cost of using a test-driven methodology, we created two groups with
the same knowledge of programming in Java:

the test group, later called the TDD team, which used the test-driven development
(TDD) methodology, and
the control group, which developed traditionally, without building in unit tests.



22.1 Time needed for preparation of application. (Chart: time needed per exercise,
Exercises 1-5, for the TDD group versus the control group.)



Both groups built the same, rather simple, applications (about 6 hours of coding
for a three-person group). For these two groups we prepared six exercises.

Programming and Test Environment
Both teams worked in the same environment:

Eclipse SDK 3.0.2 with
built-in JUnit1 for unit tests and
built-in Hammurapi2 for code review; and
EMMA3 for code coverage.

22.3 Experiment Results
Time Results
In Figure 22.1, we present times needed for preparation of the given application. As
you can see, at the beginning the TDD team needs much more time (about one-third
more) than the control group, mostly for getting used to new tools (JUnit and
Hammurapi - in our experiment EMMA was used only to check the predetermined
level of test coverage). After the time needed for training (i.e., the first two exercises),
this difference is not so significant; the TDD team needs about 7-10% more time to
build the application.

1 www.junit.org.
2 www.hammurapi.org.
3 emma.sourceforge.net.


22.2 The number of defects. (Chart: number of defects found per exercise, Exercises 1-5,
for the TDD group versus the control group.)

Number of Defects in the Applications
In the next figure (Figure 22.2) one can see the number of defects that were found
during the test process.4 As one can observe, in the beginning period (i.e., the first
exercise) the control group obtained the better result (produced fewer defects). In
our opinion, this is the result of the additional stress on the TDD group and the
TDD group members' insufficient knowledge of the tools.

Quality of Source Code
In the last figure (Figure 22.3) we present the results obtained after checking the
quality of the source code (using Hammurapi) - as one can observe, the TDD group
created code that was much more compliant with coding standards than the code
produced by the control group.


22.4 Lessons Learned
Summarizing this simple experiment, one can find that, using the agile TDD methodology
in development and testing:

Not much additional cost is incurred; when the development teams are familiar
with the methods used, the extra time needed is rather small (less than 10%).
Substantially (about 50%) fewer defects are found in the source code.

Also important is that the prepared code is amenable to refactoring. Analyzing the
Hammurapi reports, one can identify the proper scheme for code development.

4 These tests were done by an independent, experienced test team, which used white box and black box
techniques to test the applications.


22.3 Compliance with coding standards (after Hammurapi). (Chart comparing the TDD
group and the control group across Exercises 1-5.)

On the other hand, we also want to underscore some weaknesses of the TDD
method:

When using the TDD method, an "everything or nothing" rule applies - if we
stop before finishing all the tests (e.g., because there is no more time), we do not
obtain anything; the module does not exist.
TDD does not replace unit testing - if we are working to a contract that states
that unit testing has to be completed, we are obliged to prepare and execute these
tests in addition to the TDD tests.
It is an agile methodology; hence, some organizations reject it.

The last remark is at least controversial; the first two are much more important.
However - in our opinion - the gain (a greater than 50% reduction in defects
with only 10% additional time) is an interesting argument for using this method.
23 When Is a Scrum Not a Scrum?
Dr Peter May, Technology Consultant at Deloitte




SYNOPSIS
This chapter presents a case study in which a large project was managed with Scrum by
splitting the deliverable into three separate parts. Each part was assigned to a separate
development team and the project was run as a "Scrum of Scrums." The case study focuses
on one of these teams, describing how they followed the Scrum methodology closely to
build and system-test their part of the deliverable. It then examines a few of the reasons
for the exceptional outcome achieved by this team: zero code bugs in their first live code
release.
The chapter closes with a look at common deviations from the prescribed Scrum methodology,
and whether a project that deviates in any of these ways can still be considered to be
following Scrum.


23.1 Introduction
My name is Peter May and I have worked for Deloitte as a technology consultant
for the past six years. During this time, I've been involved in a number of agile
projects in various roles, including developer, test manager, and, latterly, project
manager. Each of these agile projects has used the Scrum project management
approach, but each has deviated from the "pure" Scrum approach to a greater or lesser
extent.
In the case study that follows, I look at a model example of how Scrum, when
implemented in a form close to its "pure" form, can lead to the production of
very-high-quality software artifacts. I describe the testing challenges and how these were
overcome, but I also consider the wider development aspects.
I have also worked on a number of other projects where pragmatic deviations
from the prescribed Scrum methodology were made to satisfy client or wider project
needs. To close this chapter, I consider some of these deviations and discuss whether
a project that deviates from Scrum's prescribed approach in these ways can still be
considered to be following Scrum.





23.2 Overview of the Testing Challenge
Although I've worked on a range of Scrum projects, with a range of technologies
(including SOA-based projects), the testing challenges have, by and large, always
been the same:

Frequent releases of the software product have been required, each needing stable
software. This puts immense pressure on the test team within the
Scrum to not only thoroughly system-test the functionality developed during
the sprint but also to execute an ever-increasing amount of regression testing.
The detail of how pieces of functionality will be implemented is not known before
the sprint starts, so test scripts cannot be written in advance.
Frequent releases typically mean that the code has to be branched. This of course
means additional work for the testers.
In larger projects (>10 developers), it is good practice to split the team into two
or more separate Scrums, then run the project as a "Scrum of Scrums." This can
be very effective, but it does create the additional challenge of a cross-supplier
integration test phase before integration testing with existing business systems
can commence.
The volume of testing resource required is frequently underestimated, causing a
great deal of stress for the testers and impacting quality. (As a rule of thumb,
I would recommend two testers for every four or five developers.)


23.3 Definition of an Agile Testing Process
The first Scrum project I was involved with (my role was test manager) was the
project that was able to follow the principles of Scrum most closely.
The client was a large U.K.-based organization that was interested in developing
a new delivery channel for its products. The process (and technology) required to
achieve this aim could be split into three distinct segments. Thus, the client decided
to assign development of each of these segments to a separate Scrum team and then
to run the project as a Scrum of Scrums. I was test manager for the Scrum team
responsible for one of these segments.
My team followed Scrum's methodology by

completing all system testing (as opposed to cross-supplier integration testing)
within the sprint, ensuring that each sprint ended with the production of a
working piece of software - a release candidate.
co-locating developers and testers in the same area. We couldn't get a closed
room, but we did get our own area. To ensure that the testers knew exactly what
the developers were developing (and, hence, what they had to test), we initially
tried inviting the testers along to the development design meetings that followed
on from the sprint planning meetings. This tended to bore and confuse the testers,
so we soon switched to an alternative approach whereby the development lead
would brief the testers on the detailed design for a user story, once the developers
had agreed the design.
encouraging frequent interaction between developers and testers “ including
through table tennis!
estimating pieces of functionality in terms of user story points. This was hard at first, but got easier as time went on. We used this to calculate a sprint velocity and hence to estimate when the product backlog (as it stood at any given time) would be finished. Crucially, we ensured that testing effort was taken into consideration during estimation. (A sketch of the arithmetic involved appears after this list.)
having stand-up briefings every day and ensuring that people turned up! We also ensured that they were kept brief.
frequent unit testing. An automated set of unit tests ran every night to make sure
that recent code changes had not broken previously developed functionality.
keeping the sprints short. The client suggested four-week sprints, but we opted for two-week sprints as we knew that the client's development priorities would change rapidly. We were proven right: two weeks turned out to be the optimum sprint length.
starting each sprint with a sprint planning meeting, which would be in three
phases:
1. Sprint review: The output of the last sprint was demonstrated to the product
owner (see later).
2. Sprint retrospective: The team discussed which behaviors they should start
doing, which they should stop doing, and which they should continue doing
(and positively reinforce) to help make future sprints more successful.
3. Sprint planning: The team received candidate user stories from the product
owner and committed to develop and system-test some or all of these during
the sprint.
not defining requirements too tightly. We had a product backlog of user stories and we had a solution architecture that described the basic components required in the system. Beyond that, it was up to the development team to decide how to implement requirements. Where clarification on technical issues was required, the questions were taken to a fortnightly Design Authority meeting with the customer and the other suppliers. Once the development team had decided how to implement a user story, they would brief the testers, who could then start preparing test scripts.
having a Scrum board, on which were posted the product backlog, the sprint backlog, the burndown chart, and the latest “start-stop-continue” list.
insisting that we be left alone during a sprint to complete the tasks that we had
been set. We reminded the customer that a sprint could not be altered - only
abnormally terminated. One abnormal sprint termination did occur just after the
sprint had started.
not having one Scrum team with thirty or so members, but splitting this into a
“Scrum of Scrums.” A central Scrum master coordinated our work with that of
the other Scrum teams to ensure that we developed functionality in a coordinated
fashion and were able to put releases into cross-supplier integration testing at
the same time.
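
To make the estimation and velocity point above concrete, the arithmetic involved fits in a few lines. The sketch below uses invented story-point figures (not the project's actual numbers) to show how velocity is derived from recent sprints and then used to forecast when the product backlog, as it stands today, would be finished; the burndown chart on the Scrum board is driven by the same arithmetic at a daily grain.

"""Sprint velocity and backlog forecasting - a minimal sketch.
The story-point figures below are invented for illustration."""

import math
from statistics import mean

# Story points completed (developed AND system-tested) in recent sprints.
completed_points_per_sprint = [18, 22, 20]

# The product backlog as it stands today, in story points.
remaining_backlog_points = 120

SPRINT_LENGTH_WEEKS = 2

velocity = mean(completed_points_per_sprint)                  # ~20 points per sprint
sprints_left = math.ceil(remaining_backlog_points / velocity)
weeks_left = sprints_left * SPRINT_LENGTH_WEEKS

print(f"Velocity: {velocity:.1f} story points per sprint")
print(f"Forecast: {sprints_left} sprints (~{weeks_left} weeks) to clear the backlog")

# A sprint burndown chart uses the same arithmetic at a finer grain: plot the
# story points remaining in the sprint backlog each day against an ideal
# straight line from the committed total down to zero on the sprint's last day.

Because testing effort was included in the estimates, a forecast of this kind reflects tested, releasable software rather than code that is merely written.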

My team differed from Scrum's methodology by

using a “proxy” product owner. The development and test team were 30 miles
away from the customer, so the business analyst represented the customer in
sprint planning meetings. To make this work, the business analyst would spend
two days per week with the customer, getting to know and understand the
customer's requirements.
developing a back-end system (i.e., one without a user interface), which helped: the customer didn't need to see frequent demos, since evidence from integration testing that what we were building worked was enough.


23.4 Results of the Agile Approach
The Scrum approach was remarkably successful: each user story was thoroughly system-tested as soon as development finished. As a result, the first code release made it through cross-supplier integration testing, user acceptance testing (UAT), and into live operation with zero code defects. (There were a few configuration problems in each environment, but the code itself was faultless.) Later releases also had a low code defect count.
A further success was that the Scrum team worked “sensible” hours, by and
large. This was a result of the Scrum estimation process and the calculation of sprint
velocity, which allowed us to set expectations with the customer about what was
deliverable in any given sprint. Indeed, the customer was often able to work this out
for themselves.
I believe the development cycle ran so smoothly for the following reasons:

The customer was 100% behind Scrum. Indeed the decision to use Scrum was taken by the customer, who then requested that we use it. The developers and testers were also open-minded - indeed excited - about trying out Scrum for the first time.
We had a right-sized test team. To complement our five developers, we had two testers (plus a test manager). This allowed the user stories developed in each sprint to be fully system-tested before the sprint was complete.
Testers and developers enjoyed good relations, fostered through the collegiate
spirit that arises from co-location and augmented by social activities (including
pub lunches and frequent games of table tennis!).
The daily stand-ups proved their worth. Initial skepticism about the value of these meetings quickly gave way to a firm belief in their benefit, after early identification of issues saved a lot of time for both testers and developers.

Note that we did not run integration testing and UAT as sprints - we found it was much easier to run these in “classical” fashion. Where fixes were required, those that were nonurgent were added into the current sprint (time was set aside for this), then deployed to integration/UAT with the rest of that sprint's code. Where the fix was urgent, we would branch the code and patch the fix straight in.


23.5 Lessons Learned
From this initial experience of Scrum, the main lesson learned was that, when
properly applied, Scrum is a very effective project management methodology for
software development.
However, I have also worked on a number of other projects where pragmatic
deviations from the prescribed Scrum methodology were made to satisfy client or
wider project needs. In what follows, I consider some of these deviations and whether
a project that deviates in these ways can still be considered to be following Scrum.
Test-specific considerations are examined first, followed by general considerations,
which affect testing and development equally.
System testing not carried out during the sprint. For system testing to be successfully conducted within a sprint, a “sprint test” environment is required in addition to the developer sandbox environment. The sprint test environment should be a “private” environment, for the sole use of the Scrum team. It should be possible to make frequent deployments to the environment at short notice; a minimal sketch of such a deployment appears at the end of this chapter. Without this environment, Scrum cannot function. (Note that system testing is distinct from cross-system integration testing, UAT, nonfunctional testing, and operational acceptance testing. These types of testing are not normally carried out within the same sprint - indeed they may not be subject to sprint discipline at all.)
Testers and developers not co-located. Communication is key to Scrum, and this
includes communication within the Scrum team. Sprint testing relies on the
testers enjoying good relations with the developers, talking to them frequently
to understand the detailed design that the development team has chosen to
implement to satisfy a given user story. If testers and developers are not sitting
right next to each other, the frequent informal discussions (and more formal
meetings) required for (1) the testers to understand what the developers are
building and (2) the developers to understand why the testers have raised a
particular bug cannot happen. This leads to the breakdown of effective testing
and is a frequent cause of testing failure in an agile environment.
No requirements at all. It is a common misconception that agile needs no requirements: in fact, agile (and Scrum) does need requirements to function. Without requirements, the project
would be chaos. The requirements should take the form of “user stories” -
descriptions of the functionality that the system should offer to users. They
should be worked out by a business analyst, in consultation with the “product
owner” and other key stakeholders.
Additionally, agile can often benefit from a solution architect - someone who
maps out the high-level components that are required to make the solution work
and to make it reusable. The solution architect should have visibility of existing
systems (or anticipated future systems) in the organization and so will be able to
put together a high-level design that integrates well with these existing systems
or makes integration with anticipated future systems straightforward. (Note that
the solution architect should not attempt to specify the technical details of how
systems interact - this should be left to the development team or technical
architect.)
Highly detailed requirements. Agile does not specify up front the detail of how to implement a piece of functionality; it expects the developers to work this out for themselves. This is based on the notion that no one will have a better idea about exactly what will and won't work than the developers. A project that tries to specify requirements to a deep level is, by definition, not an agile project (see also “No requirements at all”).
Some projects need highly detailed requirements. A good example would be a project to implement a new piece of regulation. The regulatory requirements are fixed: there is no room for manoeuvre here. This sort of project would not be suitable for an agile approach, but would be well suited to a waterfall approach.
Team is not left alone during a sprint. A fundamental principle of Scrum is that, during a sprint, the Scrum team is left alone to complete the workload they agreed to at the start of the sprint. The Scrum team can only do this if they are not interrupted by changes to the user stories agreed at the start of the sprint or by requests for new user stories to be added to the sprint. Note that this does not mean that there is no interaction between the team and the product owner during the sprint: the product owner should be on hand to answer queries from the developers and to adjudicate on “bug versus feature” disputes as required.
Long sprints. A “sprint” is a short development cycle. The most popular sprint lengths are two and four weeks [94]. Sprints of longer than six weeks tend to defeat the purpose of Scrum - short development cycles and frequent reviews of the software product with the product owner are key to making a success of Scrum.
Stand-up meetings that take too long. A stand-up meeting should be short (maximum fifteen minutes). To encourage this, it should be held standing up. If it's taking longer than fifteen minutes, it's not adhering to the principles of a stand-up meeting. Instead, it's probably become a design discussion or a debate about how best to fix a particular problem. This is all very well, but not a discussion that should keep the whole team back from continuing their work. The alternative explanation is that too many people are talking at the meeting.
Remember that, while anyone is welcome to attend, only members of the Scrum
team (i.e., developers, testers, business analyst) can speak.
Customer demands fixed scope and fixed time or price. It's not that unusual for a customer to demand fixed scope and fixed price or time, but it is certainly not agile! Agile was founded on the philosophy that scope cannot be fixed, as change is inevitable. Time and price certainly can be fixed; however, agile states that when time or money runs out, the software artifact that exists at that point should become the live system (subject to successful integration testing, UAT, and operational acceptance testing, of course).
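
To close the chapter with something concrete: the “deploy at short notice” requirement for the private sprint-test environment discussed earlier in this list might, on a modern project, be met by nothing more elaborate than the sketch below. Every path, host name, and service name is invented for illustration, and the scp/ssh approach is an assumption; the point is simply that deployment to the sprint-test environment should be a single, repeatable command that any member of the Scrum team can run.

"""Deploy the latest build to the team's private sprint-test environment -
a minimal sketch. All paths, host names, and service names are invented."""

import pathlib
import shutil
import subprocess

BUILD_DIR = pathlib.Path("/builds/latest")         # hypothetical CI output
SPRINT_TEST_HOST = "sprint-test.internal.example"  # hypothetical private host


def deploy_latest_build() -> None:
    """Package the latest build, copy it to the sprint-test box, restart the app."""
    archive = shutil.make_archive("release_candidate", "gztar", BUILD_DIR)
    subprocess.run(["scp", archive, f"{SPRINT_TEST_HOST}:/opt/app/"], check=True)
    subprocess.run(
        ["ssh", SPRINT_TEST_HOST,
         "tar -xzf /opt/app/release_candidate.tar.gz -C /opt/app "
         "&& sudo systemctl restart app"],
        check=True,
    )


if __name__ == "__main__":
    deploy_latest_build()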
PART THREE




AGILE MY WAY: A PROPOSAL FOR YOUR OWN
AGILE TEST PROCESS
A very important lesson to be learnt is that there is always another lesson to be learnt; the goal of fine-tuning our processes continues today, and will continue into the future.
Tom Gilb




This section of the book provides an analysis of the agile case studies presented
in Part 2 and draws upon the material from Part 1 to make a series of proposals
about what components might be used to generate your own successful agile testing
approach:

Chapter 24, Analysis of the Case Studies, examines in detail the agile case studies
presented in Part 2, identifying particularly successful agile techniques, common themes (such as successful reusable templates), as well as those testing
approaches that were not successful and which may need to be treated with
caution.
Chapter 25, My Agile Process, draws on the information covered in the earlier
chapters of this book and on the information provided in the case studies and
their analyses to make a series of proposals for how you might set up and run
your own practical, effective, and efficient agile testing process.
Chapter 26, The Roll-out and Adoption of My Agile Process, provides advice
and guidance on how to successfully introduce, roll out, and drive the use and
adoption of your agile process within your own organization.




24 Analysis of the Case Studies
If you plan to run a test more than ten times, it's cost effective to automate it. If you don't automate it, it's unlikely that you'll run it more than once!
Jon Tilt




24.1 Introduction
This chapter examines in detail the agile case studies presented in Part 2, identifying
particularly successful agile techniques, as well as those testing approaches that were
not so successful and which may need to be treated with caution. The next chapter,
Chapter 25, makes a number of proposals based on the analysis in this chapter for
agile practices that you could reuse as part of the process of setting up your own
agile method, while Chapter 26 provides a series of recommendations on how you
might manage the roll-out and adoption of your agile method.
In addition to the case studies that made it into this book, I was lucky enough
to have had offers of roughly the same number again, and I am very grateful to all
those agile practitioners whose work helped to inform the material in this and the
following chapters.
I have also been fortunate enough to be able to draw upon a rich vein of agile
material from a number of other sources, including:

my association with the British Computer Society Specialist Group in Software Testing (the BCS SIGiST), a number of whose members have been kind enough to submit agile case studies;
I am also indebted to those SIGiST members with whom I have discussed and corresponded on the subject of agile testing and who have helped drive and define the material in this chapter;
my work on the committee of the Intellect Testing Special Interest Group (Intellect being the organization whose goal it is to represent the U.K. technology industry, formed from the merger of the Computing Services and Software Association and the Federation of the Electronics Industry), where I have been very pleased to have been asked to promote the cause of testing best practice and process;
the academic community where, both through my early career on the teaching staff at the Open University and later as part of my own research activities in support of my doctorate, I have been involved with a wide range of academics working at the cutting edge of agile research;
and last but not least, The Day Job, where I have been fortunate enough to
work on a daily basis with agile development and testing best practices, and with
colleagues who are not just inspired by and evangelical about agile, but who also
work in real-world agile projects delivering genuine value to their customers.

This chapter is structured in the following manner:

Sections 24.2 to 24.7 of this chapter review a series of agile best practices that
the case studies have highlighted as being particularly valuable from a software
quality perspective and are organized under the following headings:
Agile Development and Testing,
