

Agile Testing: How to Succeed in an Extreme Testing Environment

and ensure that you retain the confidence of the project sponsor by reporting on
progress against plan for the agreed objectives;
identifying all the people affected by the project and ensuring that there is a
mechanism for keeping them apprised of developments and progress;
creating and following a project plan, identifying the tasks to be undertaken by
all participants, and ensuring that they are tasked through the project steering
committee to deliver against the plan;
identifying early when there is a risk that an objective may not be delivered by the
client's staff and giving the project sponsor sufficient warning so that there is a
reasonable chance of the situation being resolved;
budgeting for testing as a "rate of run" project and accepting that there will be a
regular monthly cost regardless of the level of work. Once the initial test tools are
available, running through the regression test process is highly dependent on
delivery by the developers, so a testing organization is a hostage to fortune if it
undertakes work on a fixed-price basis. However, if you have testers who are also
developers, you can consume time to benefit by getting testers to undertake some
development-related tasks; and
ensuring that when your role as a consultant is completed, you exit. Define
the exit criteria, meet them, and go! We are all passionate about testing, and
it needs to be a permanent function, so help the customer to recruit a member
of staff to continue the good work using the framework you have put in place.

How to Decide When to Release
It was expected that our approach would find problems; the challenge we faced was
how to report them and how to prepare senior management to deal with them.
In our case, many problems once discovered were considered to be so serious
that they had to be fixed immediately, and the new release they were associated with
could not be delivered until they were resolved.
This was the single most significant factor in delaying the releases. In retrospect,
we should have worked harder to create a regime where, if the release was at a
state where it was an improvement and no new known problems were being
introduced, we should have proceeded with the release anyway, on the basis that
we were at least improving the quality of the code.
An alternate approach, and one that would have managed the expectations of the
stakeholders more effectively, would have involved us making it clear that there was
a significant probability that we would find so many issues that the overall testing
effort was likely to take at least three months, and perhaps up to six months. This
would also have allowed the stakeholders to change their overall approach to the
project – perhaps taking a different view on how to proceed.
As a result, even though the agile test approach was finding defects, getting them
fixed, and improving the quality of the software, we were seen as the cause of the late
release, rather than as the agents of improvement.
Testing a Derivatives Trading System in an Uncooperative Environment

Definitive Evidence
Using software tools allowed the creation of highly valuable reports.
Experience showed that even when the situation is extremely serious, you can
nevertheless create hope by performing analysis using heat-maps³ and other
categorization reports to identify where the trouble areas are and what the trend
lines are.
These types of reports provide definitive evidence that enables senior managers to get
involved in solving the problem, allowing them to determine where the problem
is and to take action based on hard evidence created from metrics. Producing trend
graphs shows that, even in a dire situation, once the trend line starts to improve, the
management action taken can be demonstrated to be having a positive effect.
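The categorization and trend reporting described above can be sketched with a short script – a minimal illustration, assuming defect records tagged with a component name and a week number (the data here are hypothetical, not from the project):

```python
from collections import Counter

# Hypothetical defect records: (component, week_found) - not project data.
defects = [
    ("pricing", 1), ("pricing", 1), ("pricing", 1), ("gui", 1),
    ("pricing", 2), ("gui", 2),
    ("pricing", 3),
]

# Categorization report: which components are the trouble areas?
by_component = Counter(component for component, _ in defects)

# Trend line: new defects found per week.
by_week = Counter(week for _, week in defects)
trend = [by_week[w] for w in sorted(by_week)]

# A falling trend suggests the management action is having a positive effect.
improving = all(a >= b for a, b in zip(trend, trend[1:]))

print(by_component.most_common(1))  # worst component and its defect count
print(trend, "improving" if improving else "worsening")
```

Real project reporting would of course draw these counts from the defect-tracking tool and render them as heat-maps and trend graphs, but the underlying aggregation is no more than this.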

Using the lessons learned from this project, we undertook a number of similar projects
for other major financial institutions. The benefits of implementing this successful
agile approach for these other customers include
improving the functional quality of the delivered software;
improving system robustness and availability;
verifying that the performance of the software was capable of satisfying specific
throughput levels (through rigorous performance testing);
establishing the upper limits of scalability to support business planning, so that
system hardware upgrades could be planned; and
identifying, isolating, and rewriting performance bottlenecks to improve performance
and scalability.

The agile testing approach described in this case study continues to be a useful
and effective solution, one that will be used and refined in future projects.

8.6 Results of the Agile Approach
Despite the issues described earlier, the approach described in this case study was a
great success. Furthermore, it has worked extremely well on a number of subsequent
projects too.
An important finding of this and the subsequent projects is that, as is usually the
case, improving quality turns out not to be a technology problem, but a people issue
and a leadership challenge.

3 Also known as tree-maps, heat-maps provide an effective data visualization technique for concise display
of project data.
9 A Mixed Approach to System Development and Testing:
Parallel Agile and Waterfall Approach Streams within
a Single Project
Geoff Thompson, Services Director for Experimentus Ltd

This case study describes a project for which I was the test program manager for an FTSE 100
life assurance company that was delivered using both waterfall/V-model and agile approaches
alongside each other in separate but dependent projects.

9.1 Introduction
My name is Geoff Thompson. I have been involved in software testing for nearly
twenty years. In addition to automation, I have in my time been a test analyst right
through to a test program manager. In that time I have experienced or directly used
many life-cycle delivery approaches, such as waterfall [5], Timeboxes [21], V-model
[4], Rational Unified Process [7], Iterative [27], and Agile and Scrum [26].
I am currently the Services Director for Experimentus Ltd, a U.K.-based software
quality consultancy specializing in process improvement, project management, test,
configuration management, and requirements management.
I am a founding member of the Information Systems Examination Board Software
Testing Board and also a founding member of the International Software Testing
Qualifications Board [42], and am currently the U.K. representative to the Board. I
am also the founder and chairman of the U.K. Testing Board (www.uktb.org.uk).
The following case study describes a project for which I was the test program
manager for an FTSE 100 life assurance company (the BigLifeAssurance Insurance
Co. Ltd [BLA Ltd]) that was delivered using waterfall/V-model and agile approaches
alongside each other in separate but dependent projects.

9.2 Overview of the Testing Challenge
The two main drivers that made the BLA Ltd Mortgage Service project as critical as
it was were as follows:


Legislation had changed, meaning that BLA Ltd needed to change its mortgage
sales system or it would have to stop selling mortgages (at the time mortgages
provided 28% of its yearly revenue).
BLA Ltd also saw an opportunity, through this change, to convince a larger
group of independent salesmen to include BLA Ltd on their roster of insurance
providers, and hence increase market share.

The initial architectural and design work started in January 2004, with a release
date set to 31 August (the legislative change date).
The release was put under further pressure in March when the business decided
that their IT department could not deliver the nice web front end they wanted, and
it went out to a supplier (Webfrontendsrus) who had built a simple front end for a
very small insurance company previously.
As a bit more background, BLA Ltd had 5% of the market and was almost the
market leader (there is a lot of competition in this market); the insurance company
that Webfrontendsrus had already worked for held just 0.01% of the market. The
solution Webfrontendsrus proposed was therefore very simple and certainly not
stable enough for a company the size of BLA Ltd; but, with excessive pressure from
the business, the IT department at BLA Ltd agreed to include it within the overall
delivery project (confirming that it would need significant customization).
So, what was a complex internal project had just multiplied in complexity, and
50% of the development had been outsourced.
To make matters worse, BLA Ltd insisted on a waterfall approach, whereas
Webfrontendsrus proposed the use of an agile approach; we were clearly in for
an interesting time.

9.3 Definition of an Agile Testing Process
The specific details of the agile process adopted by Webfrontendsrus included:

Co-location of designers, developers, and system testers in their offices.
As will be seen later, it was not altogether clear what the involvement of test was
within Webfrontendsrus's agile method. What was clear, however, is that they
believed they didn't need the customer's input, and they had the contract written
that way.

As far as we could ascertain, this was it.

9.4 Results of the Agile Process
For clarity I have broken the results into two streams, the waterfall (BLA Ltd) and
the agile (Webfrontendsrus).

Results of the waterfall approach (BLA Ltd):
With careful use of the V-model approach, we were able to plan and implement
testing early in the life cycle.
Even though the test analysts and developers were co-located and productivity
should have been higher, the code wasn't ready on its scheduled delivery date.
Development wanted to deliver something that worked and so asked for, and
got, a week-long extension.
On delivery to system test, the code installed first time – this had never been
seen before!
A system test pack of 200 test cases was run within two days of delivery,
without a single defect of any consequence being found. In fact, we found only
ten cosmetic issues. This had never happened before at BLA Ltd, so we reran
the tests, still finding no serious defects.
I then asked my test analysts to recheck their tests to ensure they were
accurate; they were.
We therefore completed system test six weeks earlier than planned, saving
BLA Ltd £428,000 (in planned, but unused, development time for fixes and
testing costs) and reducing the delivery to live by six weeks (six weeks before
the legislation came into force).
Results of the agile approach (Webfrontendsrus):
No data are available as to what documentation was written or what tests were run.
Code was delivered to BLA Ltd to run acceptance tests against it with just four
weeks left before launch.
A security flaw was identified initially: if you logged in and then pressed the
back button on your browser, your original login details remained in the cache
and allowed you to press the forward button and get straight back into the
system! It was sent back for fixing.
On receipt of the next version, most of the management information functions
didn't deliver the right reports, and the way in which the data were collected
and stored meant that retrieval could take up to three minutes (completely
unacceptable).
Eventually the delivery was descoped to 25% of the original scope to ensure
something was ready to go live, on time, and to stay legal.
The now infamous quote by Webfrontendsrus's managing director to the IT
director of BLA Ltd at this time was, "If you could stop finding errors we could
put the time in to make sure that you get the full scope delivered!"

Six months after the initial delivery date, Webfrontendsrus delivered software
that provided 90% of the required functionality; three years later the balance
of the initial functionality is yet to be seen.
BLA Ltd did go live with a solution on time and so stayed legal but, due to the
issues with Webfrontendsrus, never managed to grab the additional market
share they were looking for.

9.5 Lessons Learned
The biggest lesson learned through the BLA Ltd project was that the in-house team
had actually worked in a pseudo-agile way (we just didn't realize it at the time):

We co-located development and test resources.
The system testers tested code as it was built (hence the request for an additional
week in development).
Unit tests were automated using JRun.
Daily short progress meetings were held throughout the delivery to ensure
everything was on target.
The V-model principles of early test design significantly enhanced our approach
to delivery.
All documentation was delivered with the initial drop of code.

Webfrontendsrus, however, appeared to have used agile as an excuse to adopt ad
hoc and unstructured methods:
None of their team had identified roles; developers did their own unit testing and
the system testing (no trained system testers were used).
No documentation was delivered.
Code was rewritten on the fly, with no knowledge of the risk or impact of the
change.
Resources were employed without the requisite development skills (in effect, to
make up the numbers required).

BLA Ltd has now adopted a far stronger supplier management position regarding
the quality of third-party software, defining the required development approach (and
building in audits to ensure it is followed), demanding access to review system
tests, and trying (as much as contracts will allow) to work in partnership with
third-party suppliers.
While they still use the waterfall approach for older maintenance releases,
BLA Ltd has successfully implemented projects using Scrum as their prescribed
delivery method for all new developments.
10 Agile Migration and Testing of a Large-Scale Financial System
Howard Knowles, MD of Improvix Ltd.

This case study describes the migration and testing of a large-scale, complex, business-critical
financial software system from MS VB6 to VB.NET.
Since the project involved the direct conversion of the existing functionality of the system
into VB.NET with no other enhancements planned, all the cost and effort involved were seen
as overhead by the users.
Because of the need to reduce cost and timescales, an agile approach was selected for
the project.

10.1 Introduction
My name is Howard Knowles and I am the MD of IT consultancy Improvix Ltd. With
thirty-one years' experience in the IT arena covering the full software development
life cycle, I have been involved in process adoption and improvement for the past
fifteen years.
This case study describes a project in which I was engaged as the test manager by
a leading international financial company to migrate their business-critical system
from Microsoft VB6 to VB.NET. The project arose due to Microsoft's ending of
support for VB6 and the difficulty in hiring (and keeping) VB6 developers to continue
maintenance and enhancement of the system.

10.2 Overview of the Testing Challenge
Two factors defined the testing challenge facing the conversion project:

1. A regression test suite did not exist for the system to verify the migrated software.
As part of gaining user buy-in to the conversion project, development of an
automated regression test suite had been promised as a deliverable.
2. Development of the software would continue during the conversion because a
change freeze was unacceptable to the users.


The system in question has been continually maintained and enhanced for over
eight years in small increments, relying on informal developer and user acceptance
testing. Whereas this approach has been successful in delivering a quality system,
there has been no use of automated testing, and no formal regression test suite has
been produced.
This presents a challenge when considering conversion from VB6 to VB.NET. It
was clear that this would involve wholesale change to a high percentage of the code
base. The existing testing approach was not designed to cope with this type of change.
The business need is to ensure that the converted system functions properly and can
be cut over with no business interruption. However, because no direct business
benefit exists in terms of new features, all costs involved in the conversion are seen
as an overhead.
The application is organized as a fat client/fat server system. The server accounts
for around 80,000 lines of code. It abstracts the underlying database and includes
business logic. The client code, around 120,000 lines, includes around 200 forms and
also includes business logic. The behavior of the system is highly data-dependent.

10.3 Initial Approach
The server code was converted to VB.NET as a separate task from converting the
GUI. The changes required to convert the server were relatively small. Following
conversion using Microsoft's conversion tool, the manual correction of remaining
errors could be accomplished in a few hours.
Most of the conversion effort was required in converting the client code. The
initial approach to the conversion of the GUI code was functionality-driven:

A survey of the user community identified the most-used functions in the system.
The plan was built around converting the code to support these functions, most-used
first.
The actual conversion was done using Microsoft's conversion tool, followed by
manual correction of remaining errors.

The initial plan for testing the conversion was to prove the functional equivalence
of the converted system by producing automated functional tests for the identified
functions, executing these against the VB6 code and the VB.NET code, and comparing
the results.

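This equivalence-testing idea – running the same functional tests against both implementations and comparing their outputs – can be sketched as follows. This is a minimal illustration; the two `calc` functions are hypothetical stand-ins for the VB6 and VB.NET systems:

```python
# Minimal sketch of functional-equivalence testing: feed identical
# inputs to the legacy and migrated implementations and diff outputs.

def legacy_calc(principal, rate):       # stands in for the VB6 system
    return round(principal * (1 + rate), 2)

def migrated_calc(principal, rate):     # stands in for the VB.NET system
    return round(principal * (1 + rate), 2)

def compare_implementations(test_inputs):
    """Return the list of inputs on which the two systems disagree."""
    mismatches = []
    for principal, rate in test_inputs:
        old = legacy_calc(principal, rate)
        new = migrated_calc(principal, rate)
        if old != new:
            mismatches.append((principal, rate, old, new))
    return mismatches

inputs = [(1000.0, 0.05), (250.0, 0.031), (0.0, 0.1)]
print(compare_implementations(inputs))  # [] means functional equivalence held
```

In the real project the "functions" would be whole automated functional tests driven against both systems, but the pass criterion is the same: an empty list of mismatches.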
10.4 Identified Problems
An initial iteration showed many problems in both the conversion approach and the
testing approach for the client code:

The scale of the conversion problem was huge – 4,500+ fixes were required after
the use of the Microsoft conversion tool.

Manual correction was proving extremely time-consuming, repetitive, and prone
to error.
Acquiring the right staff, with both VB6 and VB.NET skills, was difficult.
Estimation of the effort required proved almost impossible.
The testing task was also enormous – over 100 test cases were identified to test
the first few functions, each requiring implementation in two environments.

The first iteration overran by more than 100% and did not deliver the planned
functionality. Integration problems prevented a successful build that could be deployed
automatically, meaning developers had to manually install the system on testers'
workstations. Testing was performed manually because the system did not implement
sufficient functionality to support the planned automated tests.

10.5 Analysis of Iteration 1
A postmortem of the first iteration identified a number of problems with the
conversion approach and the testing approach:

The testing was not focused on proving that what was changed worked (i.e., the
GUI code). It was looking more abstractly for general functional equivalence.
Business logic was not being changed and so did not require testing.
Planned iterations to implement user functions were too long, which created
integration problems and delayed demonstration to users, resulting in user unrest.
Too much functionality planned in each iteration presented test implementation
and execution problems.
A lack of developer unit testing allowed many bugs to be passed through to the
build that were caught in functional testing, increasing costs.
Manually fixing conversion problems would require too much effort to allow
the conversion project to keep pace with the development of the VB6 code base
without imposing a lengthy code freeze, which the user community would not
accept.
10.6 Definition of an Agile Testing Process
A complete change to the conversion approach and testing approach resulted from
the analysis of the first iteration. Fundamental to this approach were the following:

Very short iterations – conversion of one form per iteration. This allowed very
early feedback on the approach and allowed early demonstration to users that
something was happening.
Continuous integration – With such a large code base, it was essential to keep
all converted code building all the time. Each build also invoked unit tests and
automated functional tests.

Complete build per iteration – Each iteration produced a build, creating confidence
that the build process would deliver the final system and allowing any
correction of the build process to happen in very small increments.
Test-first design and coding – Unit tests using NUnit [43] were built into all code
changes and incorporated into the continuous integration build process.
Testing what has changed – Functional tests focused on testing the changes made
to correct conversion problems rather than testing the user functions. These
focused on testing the user interface behavior of controls/forms that required
changes.
Identifying generic solutions – This involved the identification of solutions to
common classes of problems. An enhanced version of the Microsoft conversion
tool from ArtinSoft [44] was used to reduce the number of conversion problems,
including automated conversion of a number of popular third-party controls.
Also, the adapter pattern was used to wrap .NET controls and provide COM-compatible
interfaces to avoid code changes [45].
Automated application of generic solutions – The generic solutions were coded as
batch edit commands and applied automatically using scripts. As each occurrence
of a problem was addressed, either an existing solution was applied or a new
solution was created.
Automated conversion from a VB6 label – When performing a build, a subset of
a label of the VB6 code base was converted to VB.NET by running the ArtinSoft
conversion tool and then running the SED batch editor conversion scripts [46].
This allowed development of the VB6 code base to continue in parallel with the
VB.NET conversion, as the application of all conversion fixes was automated.
Optimization of forms to convert – Through an analysis of the occurrences of
conversion problems, an optimal set of forms was chosen that covered all known
problems. A milestone in the project was completing the conversion of these
forms, because this represented finding a solution to all the known conversion
problems. Through the use of generic solutions, once this set of forms was
converted, the solutions could be applied to all remaining occurrences of the
conversion problems.
Sample testing – This involved the sample testing of occurrences of generic
solutions; once the initial optimal set of forms was converted and tested, the
remaining occurrences of conversion problems would not all be tested. A sample
of the occurrences would be tested to ensure the generic solutions worked when
applied to different occurrences.
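The "generic solutions applied as batch edits" idea above can be sketched in a few lines – a minimal illustration using regular expressions in place of the project's SED scripts; the fix patterns and the sample source line are hypothetical, not taken from the actual conversion:

```python
import re

# Each generic solution is a (pattern, replacement) pair that fixes one
# class of post-conversion problem wherever it occurs in the code base.
GENERIC_FIXES = [
    # Hypothetical example: legacy default-property access made explicit.
    (re.compile(r"\.Caption\b"), ".Text"),
    # Hypothetical example: renamed constant after conversion.
    (re.compile(r"\bvbModal\b"), "FormStartPosition.CenterParent"),
]

def apply_generic_fixes(source: str) -> str:
    """Apply every known generic solution to one module's source text."""
    for pattern, replacement in GENERIC_FIXES:
        source = pattern.sub(replacement, source)
    return source

module = 'lblStatus.Caption = "Ready"'
print(apply_generic_fixes(module))  # lblStatus.Text = "Ready"
```

The benefit described in the text follows directly: once a fix is captured as a pattern, it is applied identically to every occurrence, so only a sample of those occurrences needs retesting.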

10.7 Results of the Agile Approach
The results of adopting and following the agile approach were:

Analysis of conversion errors – An analysis of the conversion errors remaining
after running the ArtinSoft conversion tool was performed to identify the smallest
number of modules that needed to have postconversion fixes applied and tested.
This identified thirty-four modules that between them contained at least one
instance of all the postconversion problems that existed in the VB.NET code.
Once generic solutions were found to the problems in these modules, they were
applied to the remaining modules.
Build process – The build process automated the conversion (using nAnt [47]) by
starting with a copy of a labeled version of the VB6 code, running the ArtinSoft
converter, applying the batch edits to correct postconversion problems, compiling
the VB.NET code, running unit tests, and packaging for distribution. This gave
a repeatable process for the conversion, and the automation of the build allowed
continuous integration, ensuring that the target could always be produced and
any problems in building were detected early.
Iterations – The iterations rapidly progressed from converting a handful of modules
to adding in sufficient modules without any conversion problems to produce
a running build suitable for testing. This allowed functional testing to commence
early in the project. The approach of creating generic solutions using batch edits
proved successful and allowed additional modules with postconversion problems
to be rapidly added to the build.
Unit test – Unit testing proved successful in detecting coding errors and resulted
in fewer bugs finding their way into the functional testing.
Manual test – Manual testing of the GUI functionality proved satisfactory and
efficient. Developers were familiar with the features of GUI controls and forms
and could easily exercise them to prove the functionality was correct without a
large investment in test design and planning.
Automated test – Automated testing fell by the wayside due to problems with tools
and resources, but this did not adversely affect the outcome. The combination of
unit testing and manual testing proved sufficient.
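The repeatable build described under "Build process" can be sketched as an ordered pipeline that aborts on the first failure – a minimal illustration; the step names mirror the text, but the functions themselves are hypothetical stand-ins for the real nAnt targets:

```python
# Minimal sketch of the conversion build pipeline: each step must succeed
# before the next runs, so a broken build is detected as early as possible.

def run_pipeline(steps):
    """Run steps in order; return the log, stopping at the first failure."""
    log = []
    for name, step in steps:
        ok = step()
        log.append((name, ok))
        if not ok:
            break  # continuous integration: fail fast, fix, rebuild
    return log

# Hypothetical stand-ins for the real build targets.
steps = [
    ("copy VB6 label",           lambda: True),
    ("run ArtinSoft converter",  lambda: True),
    ("apply batch edits",        lambda: True),
    ("compile VB.NET",           lambda: True),
    ("run unit tests",           lambda: True),
    ("package for distribution", lambda: True),
]

log = run_pipeline(steps)
print(all(ok for _, ok in log) and len(log) == len(steps))  # True: full build
```

Running this on every check-in is what makes "the target could always be produced" a verifiable property rather than a hope.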

10.8 Lessons Learned
As a result of completing the project, the following lessons were learned:

Identifying the targets of test is vital to testing success because there will never be
enough time or resources to test everything, so prioritization is essential. Identify
what is being changed and test that, and that alone.
Test early and often; find defects as early as possible, correct them in a timely
manner, and continue testing frequently throughout the project.
Ensure unit testing of all code changes, testing first and often. This might be a
target for test automation in future projects.
For a conversion to a new environment, building and deploying are very important,
so develop these approaches first and ensure that all subsequent development
uses and tests them. This also enables testing to progress smoothly by
ensuring the availability of a deployed build.

Save time, effort, and cost by using generic solutions (patterns) and trusting that
a generic solution can be tested in a subset of where it is used.
It is worthwhile investing time to create automated solutions for common problems
and applying these generic solutions (patterns) automatically with scripts,
which is repeatable and efficient. Generic solutions can then be tested in a subset
of their implementations, thus reducing the targets of test.
Automated testing is not a panacea – Manual testing can deliver excellent results
when suitably targeted, particularly for GUI code changes where there is little
dependency between the changes and therefore between the tests. Good test
management with manual test packs and exploitation of reuse opportunities can
achieve good test coverage rapidly, avoiding the overheads of automated test
script development.
The effort required to develop an automated regression test suite for a mature
system may well be greater than the effort required for major enhancements,
conversions, and upgrades.
11 Agile Testing with Mock Objects: A CAST-Based Approach
Colin Cassidy, Software Architect, Prolifics

This case study examines the practical application of Java mock object frameworks to a system
integration project that used test-first development and continuous computer-aided software
testing (CAST).
Mock object frameworks allow developers to verify the way in which units of software
interact – by replacing any dependencies with software that simulates them in a controlled
manner. The case study explores the problems encountered with this approach and describes
the subsequent development of SevenMock – a simple new framework that overcomes many
of these issues.
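The interaction-verification idea behind mock objects can be illustrated with a short sketch – here in Python with a hand-rolled mock rather than the Java frameworks the case study used; the `OrderStore` dependency and `submit_order` function are hypothetical:

```python
# Minimal sketch of interaction verification with a mock object: the unit
# under test is exercised against a stand-in dependency, then we assert on
# *how* the dependency was called, not on shared state.

class MockOrderStore:
    """Hand-rolled mock that records every call made to it."""
    def __init__(self):
        self.calls = []

    def save(self, order_id, quantity):
        self.calls.append(("save", order_id, quantity))
        return True  # simulate a successful save in a controlled manner

def submit_order(store, order_id, quantity):
    """Unit under test (hypothetical): validates, then delegates to the store."""
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    return store.save(order_id, quantity)

mock = MockOrderStore()
submit_order(mock, "A42", 3)

# The test verifies the interaction: exactly one save, with these arguments.
assert mock.calls == [("save", "A42", 3)]
print("interaction verified")
```

A mock framework automates exactly this bookkeeping – generating the stand-in and the call-verification – which is what the frameworks discussed in this chapter provide.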

11.1 Introduction
My name is Colin Cassidy. I'm a software architect for Prolifics – a systems integration
company that specializes in IBM technologies. Coming from a software development
background, I have ten years' experience in the IT industry. I have performed a
number of roles, including software architect, systems analyst, developer, and
development process engineer. This has been mainly in the retail, telecom, and finance
sectors.
My interest in software testing stems from a desire to build better software and to
find better ways of building that software. Having practiced development and testing
on a number of project types, ranging from the very agile to the very structured,
I have experienced the problems caused by too much automated testing as well as
those caused by too little.
This case study describes a Java system integration project that used test-first
development and continuous automated regression testing. It explores the benefits
and challenges of creating effective automated unit tests and describes mechanisms
deployed by the project team to overcome the issues they encountered.


11.2 Overview of the Testing Challenge
This case study describes a project that was undertaken for a major U.K. retail chain
to update their existing bespoke order management system.
The system was originally designed to operate on a version of IBM's WebSphere
application server platform that had over time been superseded. The customer had
also purchased a commercial off-the-shelf (COTS) application to replace and enhance
an important part of the existing bespoke system.
The objectives of the project were twofold:

1. to migrate the existing system to operate on the latest version of the IBM platform,
2. to integrate the existing system with the COTS application.

Although all previous testing of the existing system had been manual, we decided
that test automation should play a significant role in regression testing. The key
factors in making this decision were:

the large size of the project,
the high number of changes being made, and
the large number of parties involved.

After carefully considering the project requirements and resources, the following
types of testing were decided upon:

automated unit testing,
automated integration testing,
automated performance testing, and
manual (UI-based) system testing.

This case study is concerned primarily with the development of automated functional
tests at the unit and integration levels.

11.3 The Software Migration Testing Challenge
The software migration challenge was slightly unusual in the sense that we were not
changing the behavior of the existing system in any way. The only thing that was
really important from a functional testing perspective, therefore, was for the system
to behave in exactly the same way before and after the completion of the software
migration.
Since there were no existing tests and the internal design of the system was
not well understood, we judged that the most effective way to regression-test the
software migration would be to use an automated record/playback (or capture–replay)
style of testing tool at the system interface level. The system had a small
user interface that could be tested manually, but the rest of the system was operated
via XML (Extensible Markup Language [48]) message interfaces to other systems. It

was therefore relatively straightforward to create a message recorder that extracted
system messages from the input and output message logs. Starting with a fresh
system database, a test harness then took the messages, replaying the input messages
and verifying the outputs.
The system users were engaged to help develop and run through a set of test cases,
which were duly recorded. The resulting scripts were then run before migration and
against each incremental release of the migrated system.
Within the scope of the migration activity, this mechanism worked very well. The
system needed to be built and deployed to the application server before any testing
could commence, but this was appropriate given the lack of any functional change.
One minor problem encountered was that sometimes parts of the output mes-
sages could be difficult to predict – for example, if they were time-dependent. Our
solution to this problem was to arrange for the system time to be artificially fixed by
the test. It would also have been possible, although involving slightly more work, to
write a more intelligent message verifier.
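One way to fix the system time along the lines described is to have the code under test read time from an injected clock rather than calling the system time directly. This is a minimal sketch with invented names, not the project's actual code:

```java
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;

// The message builder takes a Clock, so a test can pin time to a known
// instant and make time-dependent output messages fully predictable.
public class TimestampedMessages {

    private final Clock clock;

    public TimestampedMessages(Clock clock) {
        this.clock = clock;
    }

    // Output messages embed a timestamp, which made them hard to predict.
    public String ack(String orderId) {
        return "<ack id='" + orderId + "' at='" + Instant.now(clock) + "'/>";
    }

    public static void main(String[] args) {
        // Production would pass Clock.systemUTC(); the test fixes the time.
        Clock fixed = Clock.fixed(
                Instant.parse("2008-01-01T00:00:00Z"), ZoneOffset.UTC);
        TimestampedMessages messages = new TimestampedMessages(fixed);
        // Always produces the same message, so recorded output can be
        // compared exactly against replayed output.
        System.out.println(messages.ack("1"));
    }
}
```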

11.4 The COTS Integration Testing Challenge
The COTS system purchased by our client was a sophisticated and special-purpose
piece of software that was designed to be installed on a server and to be controlled
by a separate bespoke application. In our case, the bespoke application was the
existing order management system. The objective of the project was to adapt this
application to make it work with the COTS system in a way that met the stated
business requirements. The task involved extending its user interface, writing control
logic, and integrating with various legacy systems.
System requirements for the project were defined by performing a combination
of top-down (business-driven) and bottom-up (technology-driven) analysis. In many
cases, the business processes had to be changed to fit in with the way that the COTS
system operated, which resulted in a significant amount of functional change to the
existing bespoke system.
In order to reduce project risk, we followed an agile approach to software engi-
neering that we have developed and refined through its use on many projects. Our
approach is based on the Rational Unified Process (RUP) framework [7]. It adopts
many agile practices, including early development and short (two- to three-week)
iterations, with each iteration combining analysis, development, and testing activi-
ties. It also recognizes the importance of software architecture in attaining consis-
tency and resilience to change, but at the same time developers are allowed to work
unconstrained within their allocated unit of work. Agile development techniques
such as pair programming and test-driven development were also adopted.
The implication of this way of working, however, is that there will always be a
certain amount of change all the way through the project. Good change management
was therefore critically important, as was the ability to understand the state of
development at any point in time. The project manager needed to be confident
about which requirements were implemented and working, irrespective of any other
requirements that may have changed.
One practical way of achieving this was through automated regression testing.
Such tests can be written at the unit or integration level (or preferably a combination
of the two). However, due to project constraints, it became apparent to us that we
would be limited in the number of automated integration tests that could be written
and maintained.
Specifically, this limitation was caused by the difficulty of writing integration
tests that were sufficiently focused on individual software requirements. In terms
of code coverage, a single integration test would typically exercise a very large part
of the system and, consequently, test many requirements. This was initially seen
as a good thing but, as the test base expanded, we found that some parts of the
system were being tested many times by many different tests. This made those parts
of the system very difficult to change without also modifying every related test. In
addition, integration tests (even automated ones) can become quite slow to execute,
which made it disruptive for our developers to run these tests on a regular basis (for
example, before committing changes to the source control repository).
A partial solution to this problem was to focus developer testing on the writing of
automated unit tests. Execution of these tests by the developers before committing
code was made a mandatory part of the development process. Once code had been
committed, it was then automatically built and tested a second time by a continuous
build server that would send out email notifications in the event of a failure. As such, a
test failure could never remain undetected for more than a few minutes – a factor that
vastly reduced the amount of time required for detecting and fixing any related bugs.
A small number of automated integration tests were retained as a sanity check,
but the bulk of the integration testing was performed manually by developers at the
point where each build was prepared for release to system testing.

11.5 The Unit Testing Challenge
When writing unit tests for a piece of software, the first decision that needs to be
made is the granularity of the test units – if the units are too coarse, there will be
a large number of tests for any given unit of code and the result will be hard to
manage. If the units are too small, it will be difficult to relate the tests back to the
software requirements in any meaningful way.
In order to support both unit testing and general maintenance, any well-designed
system should be composed from a set of readily identifiable, cohesive units that
individually perform well-defined functions and interact with each other via simple
interfaces that hide implementation complexity.
In our case, the existing system was constructed from a set of components that
were implemented using the Enterprise JavaBean (EJB) component standard [49].
These components were a logical choice for our unit testing and, consequently, the
vast majority of our unit tests were written at the component level. They could
easily be verified against system requirements through our software architecture
model – a piece of high-level (and lightweight) design that charted the structure and
interaction of the components against system requirements.
Despite being an obvious target for unit testing, there were no existing tests in
place that we could use, and we encountered a number of challenges when creating
new ones. These included:

finding a way to run tests against their target components without the com-
plexity and speed implication of automatically deploying them to the Websphere
application server each time they were executed;
introducing a mechanism that allowed a component's dependencies to be selec-
tively replaced for testing, enabling it to be tested in isolation of other compo-
nents;
creating tests that were precisely targeted at particular features without unnec-
essarily constraining the way that the rest of the system worked;
testing of poorly factored existing interfaces, whose behavior was heavily depen-
dent on (although not immediately apparent from) the parameters passed to
them; and
trying to understand the purpose of existing code that, due to being outside the
scope of our project, had no available requirements documentation.

11.6 Unit Testing with Mock Objects
Our unit tests were written by developers in Java code and executed with the JUnit
framework [50]. In most cases, the code under test had dependencies on other system
components and so, by default, this would have led to the execution (and therefore
validation) of a large part of the system with each test. In order to avoid this situation,
the developers forcibly stopped the code under test from calling other parts of the
system. Instead, it was made to interact with test code that mimicked the original
system by using mock objects – Java objects created by the test that are directly
interchangeable with the real thing.
In their most basic form, mock objects can be pieces of code written by devel-
opers that use programming language mechanisms such as inheritance or interface
implementation to make them interchangeable with existing system code. How-
ever, mock objects written in this way tend to be unwieldy and hard to maintain.
Instead, developers commonly use one of the freely available dynamic mock object
frameworks such as EasyMock or Rhino Mocks (for Java and .Net, respectively [50]).
Such frameworks aid the test developer by controlling the creation of mock objects
and monitoring how the system code attempts to interact with them. Figure 11.1
illustrates the typical use of a dynamic mock object framework by a unit test.
When the test is run, the sequence of events is as follows:

The unit test initiates the mock objects framework and requests a dynamic mock
for each dependency of the code under test.
The mock objects framework creates the dynamic mock(s).
Figure 11.1 Replacing dependencies with mock objects. (The figure shows the
unit test driving the code under test inside the application under test, with
calls to other system code routed instead to a dynamic mock created by the
mock objects framework.)

The unit test creates a set of “expectations” that tells the mocks how it expects the
code under test to interact with them and what responses they should provide.
The unit test invokes the code under test, ensuring that it will use the dynamic
mocks in place of any other system code (i.e., code that is outside the test scope).
The code under test executes, calling the dynamic mock instead of the real thing.
The dynamic mock verifies whether it has been called correctly. If so, it returns
the canned result it was given. If not, it throws an error that will let the unit test
know that the test has failed.
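In its simplest hand-written form (without any framework), the sequence above looks roughly like this. Every name here is invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// A hand-rolled mock: it verifies that it is called as expected and returns
// a canned result, letting the code under test run in isolation.
public class MockSketch {

    // A dependency of the code under test.
    interface PricingService {
        double priceFor(String product);
    }

    // The code under test, which we want to exercise in isolation.
    static class OrderProcessor {
        private final PricingService pricing;
        OrderProcessor(PricingService pricing) { this.pricing = pricing; }
        double totalFor(String product, int quantity) {
            return pricing.priceFor(product) * quantity;
        }
    }

    // The "mock": interchangeable with the real PricingService because it
    // implements the same interface.
    static class MockPricingService implements PricingService {
        final List<String> calls = new ArrayList<>();
        final String expectedProduct;
        final double cannedPrice;
        MockPricingService(String expectedProduct, double cannedPrice) {
            this.expectedProduct = expectedProduct;
            this.cannedPrice = cannedPrice;
        }
        public double priceFor(String product) {
            calls.add(product);
            if (!product.equals(expectedProduct)) {
                throw new AssertionError("unexpected call: priceFor(" + product + ")");
            }
            return cannedPrice; // the canned result the test configured
        }
    }

    public static void main(String[] args) {
        MockPricingService mock = new MockPricingService("widget", 2.5);
        OrderProcessor processor = new OrderProcessor(mock); // mock replaces the real thing
        System.out.println(processor.totalFor("widget", 4)); // prints 10.0
        System.out.println(mock.calls);                      // prints [widget]
    }
}
```

A dynamic mock objects framework automates exactly this boilerplate: generating the interchangeable object, recording the calls made to it, and checking them against the expectations.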

After evaluating some of the existing dynamic mock objects implementations, we
chose to use EasyMock. EasyMock allows a developer to create expectations by
effectively “recording” a set of calls made to the mock object by test code. The
EasyMock framework stores details of these calls but does not act further on them
until it is put into “replay” mode. At this point, the framework knows that setup is
complete and so it will verify that the calls are repeated by the system under test.
This mechanism is simple to use and has the advantage that the tests will not
be broken by refactoring (i.e., making changes that do not affect behavior) of the
system under test. For example, if a developer were to use an integrated development
environment (IDE) to change the name of a particular operation, the tool would
automatically update all references to that operation, including those in the test.
Mock objects frameworks that do not use a mechanism like this tend not to be
supported so well by common development tools, making their use a little harder.

11.7 The Mock Objects Challenge
The first challenge we encountered with the mock objects approach to testing was
finding a way to replace existing component dependencies with mock objects. For
new code, this is relatively straightforward since it is good practice to separate units of
code, allowing them to be assembled flexibly in different ways without necessitating
internal changes. Such an approach supports unit testing quite naturally.
There are two popular patterns that enable this way of working: the service locator
and dependency injection patterns [51]. With a service locator, the code to identify
and create new components is defined in a single place – within the service locator.
When one unit of code needs to call another, the service locator is asked to provide
a service or component that supports the required interface. Unit tests are then able
to reconfigure the service locator as required to substitute mock objects.
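A minimal service locator of this kind might look as follows; the `Mailer` service and the registration API are invented for the sketch:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Sketch of a service locator: component creation is defined in one place,
// and a unit test can re-register a factory to substitute a mock object.
public class ServiceLocator {

    private static final Map<Class<?>, Supplier<?>> factories = new HashMap<>();

    // Production startup (or a unit test) registers a factory per interface.
    public static <T> void register(Class<T> type, Supplier<T> factory) {
        factories.put(type, factory);
    }

    // Client code asks the locator for an implementation of an interface.
    @SuppressWarnings("unchecked")
    public static <T> T get(Class<T> type) {
        Supplier<?> factory = factories.get(type);
        if (factory == null) {
            throw new IllegalStateException("no service registered for " + type);
        }
        return (T) factory.get();
    }

    // An example service interface used by the code under test.
    interface Mailer {
        String send(String to);
    }

    public static void main(String[] args) {
        // A test reconfigures the locator so client code receives a mock.
        register(Mailer.class, () -> to -> "mock-sent:" + to);
        Mailer mailer = get(Mailer.class);
        System.out.println(mailer.send("ops@example.com")); // prints mock-sent:ops@example.com
    }
}
```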
With dependency injection, the dependencies of a given unit of code are deter-
mined at the point of initialization or invocation and are passed directly to the unit
for it to use. A unit test is therefore able to call this code directly, supplying mock
objects for any dependencies that do not fall within the test scope.
We have used both approaches but tend to favor dependency injection for its
simplicity and recent popularity. For existing code, some level of refactoring is
often necessary and desirable, although it is sometimes possible to avoid change by
employing a technology-dependent solution such as the Java/EJB-specific XJB test
platform [49].
When starting to examine the legacy code, we were lucky to find that there was
a clean separation of technology-related (application server) code and core business
logic. This helped enormously with the migration challenge but also made it possible
to run unit tests against the system code without first deploying it to an application
server, which is a slow and involved process to automate.
The second major challenge we encountered was that, even with a focus on
unit testing, our automated regression tests eventually started to affect the ease
with which system code could be modified. A number of contributing causes were:

In normal use, EasyMock performs a deep equality comparison on supplied
parameters. If any part of the parameter graph differs from the expectation, the
test fails. Although it is possible to write custom “matching” logic for EasyMock,
it is cumbersome to do and makes tests harder to maintain if widely used.
Many of the existing system interfaces were (XML) message-based or were poorly
factored, containing operations that could behave in many different ways depend-
ing on the parameters supplied. This resulted in a large number of tests exercising
the same operations, which compounded the problem of EasyMock parameter
matching.
When some of the tests became quite long and repetitive, it occasionally became
difficult to tell which expectation was causing a test failure. The reason for this is
that EasyMock, in common with other mock objects frameworks, evaluates the
expectations itself. If an expectation fails then the only reference back to the code
that created it is the resulting error message.
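The first of these causes – strict deep-equality comparison – can be relaxed by expressing an expectation as a predicate over only the fields a test cares about. A sketch of the idea, with invented names:

```java
import java.util.function.Predicate;

// Sketch of flexible parameter matching: instead of deep equality on the
// whole argument, the expectation checks only the fields the test cares about.
public class FlexibleMatching {

    record Order(String id, String product, String freeTextNotes) {}

    // An expectation that matches on a predicate rather than equals().
    static class Expectation {
        private final Predicate<Order> matcher;
        Expectation(Predicate<Order> matcher) { this.matcher = matcher; }
        boolean matches(Order actual) { return matcher.test(actual); }
    }

    public static void main(String[] args) {
        // Deep equality would fail whenever the free-text notes differ;
        // this matcher deliberately ignores them.
        Expectation exp = new Expectation(o ->
                o.id().equals("42") && o.product().equals("widget"));

        System.out.println(exp.matches(new Order("42", "widget", "anything"))); // prints true
        System.out.println(exp.matches(new Order("42", "gadget", "anything"))); // prints false
    }
}
```

Tests written this way constrain only what they are actually about, so unrelated changes to the parameter graph do not break them.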

In order to improve the situation, we refactored system interfaces and tests where it
was practical to do so, treating the tests as production-quality artifacts and putting a
corresponding emphasis on their design. We also took care to ensure that tests were
linked back to documented requirements using comment tags so that they could be
easily reviewed. This made it easier for developers to tell whether or not a test was
failing for a genuine reason and for them to fix the test without compromising its
original purpose.

11.8 Results of the Agile Approach
The agile approach to automated unit and integration testing brought a number of
clear benefits to the project:

The focus on thorough and continuous automated regression testing certainly
reduced the overall number of bugs at any point during the project's life cycle, but
it also helped developers to work side-by-side without inadvertently interfering
with each other's efforts. Unit testing with mock objects allowed the development
of code units in virtual isolation of each other.
Since each unit test was focused on a particular part of the system, it could
therefore be quite thorough. In some cases, for example, it was used to simulate
failure conditions or housekeeping functions that would be hard to arrange or
observe through the user interface.
Writing tests before code forced developers to reason about what the code needed
to do before deciding how it was going to work. This separate definition of function
and design tended to produce cleaner code and tests that were well aligned with
stated requirements rather than the developed solution. In turn, this made the
tests and code easier to verify.

The things that worked less well were generally related to the pain of improving the
quality and test coverage of legacy code. Poor code encourages poor tests and poor
tests can slow the pace of change. The choice and use of tools is also vitally important.
Arguably, however, most of these problems were uncovered by the adopted approach
rather than fundamentally caused by it.

11.9 Lessons Learned
In summary, we learned a great deal from the project. Some specific lessons that we
will be taking to future projects include:

Use mock objects to enable unit tests to focus on the unit of code under test and
nothing else.
Pay as much attention to the design and quality of tests as you would to production
code.
In each test, check as much as necessary but no more. Carefully consider the
scope and purpose of each test and think carefully about the most effective way
to implement it without constraining system code.
Adopt test-first or test-driven development to ensure that developers are disci-
plined about test writing and that they mentally separate function from design.
Ensure that tests can be traced back to system requirements or that the require-
ment being tested is immediately clear from the test. This makes code easier to
audit and maintain.
Adopt and follow an effective change management process and consider the use
of a change management tool [52].

Finally, following the conclusion of the project, I put some effort into evaluating
how our development approach could be improved for the future and in particular
how we could make more effective use of mock objects. I drew up a shopping list of
features that my ideal mock objects framework would have. A summary is as follows:
a simple mechanism to support flexible parameter matching – the testing of some
parameters or attributes of parameters, but not others;
the production of messages when mock expectations fail that allow the developer
to determine exactly what has failed and why – ideally pinpointing both the test
and the expectation;
allowing for creation of mocks against existing classes or interfaces;
supporting the ability to test calls to class constructors;
being simple to use and maintain; and
allowing the relaxation of call sequencing checks, if required.

I evaluated a number of mock frameworks against these and other criteria. Most
(including EasyMock) did quite well, but they fell short on one or two important
points. I also had the idea that a mock objects framework could be simpler and
more effective if it shifted the responsibility of evaluating expectations back to the
test by using call-backs. This would allow the test developer to implement the most
appropriate set of assertions for any given test. If an expectation was to fail, then the
error report would point directly to that line of code.
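The call-back idea can be illustrated in a few lines of plain Java. This is a sketch of the concept only, not SevenMock's actual API, and all names are invented:

```java
import java.util.function.Consumer;

// Sketch of the call-back idea: instead of the framework comparing
// parameters, the mock hands each call back to the test, which runs its own
// assertions. A failure then points directly at the test's own code.
public class CallbackMock {

    interface AuditLog {
        void record(String event);
    }

    // A mock whose verification logic is a callback supplied by the test.
    static AuditLog mockAuditLog(Consumer<String> onRecord) {
        return onRecord::accept;
    }

    // Code under test: it should write an audit event when closing an account.
    static void closeAccount(String id, AuditLog log) {
        log.record("closed:" + id);
    }

    public static void main(String[] args) {
        StringBuilder seen = new StringBuilder();
        // The test decides exactly what to assert about each call.
        AuditLog mock = mockAuditLog(event -> {
            if (!event.startsWith("closed:")) {
                throw new AssertionError("unexpected event " + event);
            }
            seen.append(event);
        });
        closeAccount("A-1", mock);
        System.out.println(seen); // prints closed:A-1
    }
}
```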
This prompted me to write my own Java-based mock objects framework called
SevenMock [51], which is now an active project on the SourceForge open-source
software development Web site.
As desired, SevenMock is a very simple piece of software but evaluates well against
my stated criteria compared with the other frameworks that I investigated. It has
a few restrictions of its own – it can only test against classes (not directly against
interfaces) and it doesn't support relaxed call sequencing at the time of writing. In
practice, users have found that its restrictions are easy to live with and feedback
has been generally positive. Additional feedback and code contributions via the
SourceForge Web site are very welcome.
12 Agile Testing – Learning from Your Own Mistakes
Martin Phillips, IBM Software Group, Hursley

No one in our newly formed team had done this “agile” thing before, and while more and
more colleagues in other parts of our company are doing “agile,” it seems that everyone
does things slightly differently. There are no mandated processes that we must follow, only
guidelines, experiences from other teams, and best practices – but what everyone else does
doesn't necessarily work for you. Learn from yourselves as you go along, change your process
to match your working environment and your team, and don't be afraid to try something for
an iteration or two to see if it works.

12.1 Introduction
My name is Martin Phillips and I work for IBM. I've been in software development
for almost nine years, working mostly on functional testing of “off-the-shelf ” mid-
dleware. This software was developed primarily using the waterfall model [5] (with
twelve- to eighteen-month cycles), though more recently projects have been more
iterative (with two- to three-month iterations).
For the past six months I've been the test lead for a newly formed agile devel-
opment team, and am responsible for delivering a new middleware product for an
emerging market.
This case study describes the challenges we have faced testing the product and
what lessons we have learned.

12.2 Overview of the Testing Challenge
IBM is very aware of the commercial value of emerging markets; like many compa-
nies, it wants to get a foothold in these new areas quickly, but with products that
meet the high expectations that customers have of IBM.
Developing with an agile approach enables our team to develop a functional
product with early and frequent beta releases. This allows us to get feedback from
a variety of potential customers and to incorporate that feedback into the product
before it is released for sale.
The team is eighteen-strong and all were new to the agile process. About 50% of
them were able to attend education on agile development before the project started,
though none of the test team happened to be part of this group. The group consists of:

one architect,
seven developers,
five testers,
one documentation writer,
one project manager,
one people manager,
one User-Centered Design expert (part time), and
one sales manager (part time).

The goal of the project is to deliver the product within ten months. We are doing
two-week iterations and have just finished iteration 12, with six iterations to go
before our target release date.
The product is an enterprise-quality, distributed Java application that integrates
with an existing off-the-shelf product. It must support a variety of distributed plat-
forms (Linux, Solaris, HP, AIX, and Windows) in addition to supporting the zOS
mainframe. The product includes a command line interface, an Eclipse plug-in
graphical interface, a graphical installer, and all the associated documentation.
In addition to the functional requirements of the product, we also need to test
the installer, graphical interface, and documentation, and additionally conform to
corporate requirements. These requirements include accessibility and globalization,
including translation into ten different languages and running on operating systems
running in any locale worldwide.
All in all, quite a challenge.

12.3 Definition of an Agile Testing Process
The details of the agile testing process we are currently following are shown as
follows, but this process has grown and developed over time and I'm certain it will
continue to change and evolve:

Co-location of test and development – This allows test input during the design
and development phase, and development input in the test phase.
Unit tests, developed along with the code, are run in every build – Any unit test
failures result in a "bad build," which must be fixed immediately.
Static analysis is run against the code in every build – Any static analysis prob-
lems result in a "bad build," which must be fixed immediately.
Automated build verification test (BVT) of every good build – tests basic core
function on a single distributed platform.
Automated functional verification test (FVT) of every good build – test more
functions, including across every operating system that we need to support.
Development uses "stories" as the unit of function – Each story has "success
criteria" that must be demonstrated at the end of the iteration. These success
criteria clarify any ambiguity when trying to determine if a story was complete.
Reduced test planning – Only do actual test plans for the more complex stories.
Simple stories are tested using exploratory testing [53].
Automate what we can – Rather than automating everything, we automate only
key tests because automating everything would take too long. Try to automate
the tests that will give best coverage and the most complex areas of code.
Pseudo-translation and accessibility testing every iteration – We used tools to
test that all visible text is translated/translatable and that GUIs are accessible in
every iteration. This prevents a backlog of problems building up in these areas
and reminds development to write their code appropriately.
Define "test stories" as part of the planning process/burndown – This allows us
to track large test work items such as setting up the automation infrastructure,
adding major new test automation function, and "test-only" work-items such as
globalization verification testing and translation verification testing.
Introduction of system verification tests (SVTs) as test/development stories –
System tests such as long runs, stress, load, large topologies, and so forth, take a
lot of time and effort to do properly, so these are entered as stories in the planning
for the iteration. If the SVT stories do not get prioritized into the iteration, the
testing does not get done.
Everyone is a software engineer – There is no hard line between testers and
developers. People specialize in certain areas, but developers do some testing
when they are free, and testers do development and fix defects when they are free.
Low defect backlog – We carefully track the number of open defects toward the
end of each iteration with a requirement of no high-severity defects and a target
of less than fifteen low-severity defects open at the end of every iteration. This
prevents a defect backlog from growing over time.
Lessons learned and reflections for test in every iteration – Along with the
development side of things, the test team have a “lessons learned” session at the
end of every iteration. We have actions for what we can change in our testing
process to make things better. This keeps our testing processes as up-to-date and
meaningful as possible.

12.4 Results of the Agile Approach
Even though, as of the date of the submission of this case study, we are still working
toward the final release of our product, already we are able to report the following
benefits of the agile process that have been apparent to us:

Early demonstration of product to stakeholders – Having functional drivers at
the end of every iteration allows us to give early demonstrations of the product
to interested parties. These stakeholders include the sales team, beta customers,
upper management, and, in some ways most important, the development and
test team itself.
Inspiring confidence in the customers – Allowing beta customers to try out the
software gets them interested early, shows them how the software is developing,
and starts to give them a feeling of confidence in the quality of the product,
assuming the beta releases are of sufficient quality. I know that agile process says
that every iteration exit driver should be of high quality, but we heartily believe
that a bit of extra testing on beta drivers is not wasted effort.
Managing the managers' expectations – Demonstrating to your management
chain that you have something that works keeps them feeling good about con-
tinuing to spend the money to support your team.
Inspiring the sales team – Demonstrating your product to the sales teams gives
them a good idea of what will be possible with the product, familiarizes them
with its capabilities, and lets them start thinking about which customers may
benefit from the product.
Keep it clean – Having builds that work all of the time makes it very easy to spot
defects as soon as they are integrated into the code base. It makes it much easier
to track down the cause of problems and get them fixed, fast!
Don't underestimate the value of the feel-good factor – Finally, working on a
product that "works" almost all of the time certainly gives me a really good
feeling. In the past I've worked on waterfall projects where the code sometimes
failed to compile for weeks at a time, and, even if it did compile, it didn't work
for months! The frustration that causes is terrible. Working on this agile project,
where I can install almost any build and expect it to be functioning fully, is a real
joy, and happy people work much better than depressed ones!

12.5 Lessons Learned
The lessons learned from this project so far include the following:

Deliver testable function as early as you can – With very short iterations (two
weeks for us), it is imperative that development deliver testable function as early
as they can to reduce last-minute integration and testing. A hard cutoff for
integrating new function, leaving at least two days of testing (and defect fixing)
time is crucial to getting a functional and tested driver for the end of the iteration.
Do not underestimate the time and effort required to develop and maintain
an automation infrastructure – We had a complex, distributed, asynchronous
system to test, which made choosing an automation framework very difficult as
we found there were few existing frameworks available that could help us. We
ended up using STAF and STAX (available from sourceforge.net), which have
been flexible enough to do what we wanted. Even using these tools it took us five
iterations before we got the infrastructure in place to be able to run an automated
BVT and a further five iterations before we managed an automated multisystem
FVT.
Communication is absolutely key to the success of iterations and the product –
Make sure that you communicate with the developers to ensure that you under-
stand what the product is supposed to do, reviewing the designs as soon as they
are produced and before implementation starts. Challenge them if you have dif-
fering thoughts. Also ensure that you validate the requirements with what the
stakeholders actually want by talking with them.
Test early and test often – Testing as soon as you can is good: testing the developed
code directly after the development team has coded it finds bugs early and, while
the developers still have the code they wrote fresh in their minds, makes finding
the problems easier. Additionally, testing the small units of function that are
developed makes the scope of what needs to be tested clearer. Getting early drops
of testable function in an iteration is good in that it allows us to test earlier and
eases off some of the pressure from the last few days of the iteration, but it can
sometimes allow regressions to slip past if no automated regression tests are
in place.
Ensure the GUI is stable before using automated GUI test tools – It was decided
not to attempt to perform automated GUI testing. Our understanding of the
automated GUI testing tools was that they are useful only if the GUI is fairly
stable. With a brand new product being developed from prototypes and adding
new functions frequently, we decided not to spend time trying to automate the
testing of the GUI. This has turned out to be a good thing in that it forces us to get
real people looking at the GUI, trying out different things and finding usability
and accessibility issues that would not have been discovered by automated tests.
Select tools in haste, repent at leisure – Take time to choose your tools. We
changed some of our tools several times, and the cost of changing whatever tool
you are using is wasted effort. Automation frameworks, change management
system, defect tracking systems, design document repository, and so forth – get
it right at the start because it is difficult to change halfway through and has a high
cost in time and effort. This may seem anti-agile to spend lots of time deciding
things up front, but I believe that the cost of spending additional time up front
to get things right is worth it.
13 Agile: The Emperor's New Test Plan?
Stephen K. Allott, Managing Director of ElectroMind

"Steve, about every five years," Ross Collard once told me, "you'll hear about the NGT in
software testing." "The NGT?" I questioned. "Yes, NGT – the next great thing." Ross was
one of my first ever teachers of software testing and in those early days in 1995 I learned a
remarkable amount from him about how to test and what it means to deliver quality software.
Toward the end of the 1990s the NGT was RAD (Rapid Application Development) and
five years later, in 2005, the NGT for me was agile testing. This was to be the savior of
testing projects throughout the Western world. Nothing could stop it or stand in its path.
Nonbelievers were cast aside as the NGT drove a coach and horses (or tried to) through the
established numbers and acronyms of my beloved software testing world (IEEE 829, DDP,
ISO 9126, TickIT, ISO 9000, RBT, Test Strategy, MTP, IEEE 1012, IEEE 1044, Test Cases).
Books and papers were written, seminars delivered, and agile manifestos created for
all to worship and wonder at. It was time to cast aside the restrictive clothing of software
test documentation and formal methods and enjoy the freedom of exploratory testing in a
context-driven world. The Emperor™s new test plan was born!

13.1 Introduction
My name is Steve K. Allott and I am a chartered information technology professional
(CITP) who has worked in the IT industry for over twenty-five years since graduating
with a computer science degree. My background includes software development and
project management at telecommunications and financial organizations. Since
1995 I have specialized in software testing and quality assurance, founding my own
company, ElectroMind, in 2002, which is currently focused on providing test process
improvement “health checks” for companies in the financial, telecommunications,
retail services, and e-commerce sectors.
I also serve as the program secretary for the British Computer Society Specialist
Group in Software Testing (BCS SIGiST) and am the Executive Director for IT
Integrity International, a not-for-profit organization involved in researching areas
such as IT security, workforce education, and IT governance.

Illustrated with real examples from my consulting and training experiences over
the last few years, this case study contains my perceptions on the Next Great Thing –
agile testing.
The question mark in the title for this chapter is deliberate!

13.2 The Emperor's View1
I am not trying to say that those promoting agile are simply twenty-first-century
snake oil salespeople, but neither shall I conclude that it's a bandwagon of require-
ments nirvana on which businesses should jump wholeheartedly without a second
thought. In my experience of over thirty years in IT, agile could be the bees' knees
(you see how old I am), the best thing since sliced bread, or the software testing
equivalent of the Betamax or Sinclair C5.
Will agile be the answer to all our software testing and quality assurance issues?
As a consultant I can honestly say, “It depends.”
My agile story is based on a collection of anecdotes and observations from the
testing departments of corporate U.K., with the companies' real names and identities
disguised, mainly because by the time the ink is dry on this book they will all have
learned from the process and moved on.
I started my own consulting company in 2002 and initially delivered training
courses and strategic consultancy based on traditional methods and standards such as
ISEB (Information Systems Examination Board), ISTQB (International Software
Testing Qualifications Board), and IEEE 829. After a few years we realized the world
was changing – getting more complex and faster in every sense of the word. We
developed articles and seminars on the rate of change over time, published
our FAST (Funded, Autonomous, Supported, Technical) methodology, and started to
follow the agile developments.
In my recent experience as an independent consultant in software testing and
quality assurance I have found agile to be both a blessing and a curse: a blessing
when it has delivered real business value, a curse when it is used simply as an excuse to
ignore commonsense testing approaches. The agile approach has transformed some
development shops and significantly enhanced their relationship with the business.
Have they been lucky not to have had a major systems failure, or pragmatic about the
way they integrate the iterative testing with the traditional end-to-end approaches?
Methinks the jury is still out, and we have a long way to go to understand, let alone
solve, all of the problems.

13.3 Common Themes and Patterns
This section provides a snapshot of testing problems found during a variety of
consulting assignments throughout the United Kingdom and Northern Europe.
1 Mature testers and conference aficionados may recognize my nickname as the Emperor of Testing
following an amateur production of a play in Scandinavia many of your Earth years ago (once upon a
time in the West).

[Figure] 13.1 Composite TPI chart from a number of clients, 2005–2008. The chart plots
each test process improvement key area – test strategy, life-cycle model, moment of
involvement, estimating and planning, test specification techniques, static test techniques,
metrics, test automation, test environment, office environment, commitment and
motivation, test functions and training, scope of methodology, communication, reporting,
defect management, testware management, test process management, evaluation, and
low-level testing – against maturity levels A to D on a 0–13 scale running from
Controlled through Efficient to Optimizing.

Over an eighteen-month period I've been fortunate to have worked closely with
the senior IT management of leading companies in a variety of sectors, including
investment banking, travel, mobile telecommunications, on-line retail, and utilities
(gas, electricity). This work includes assignments done in association with a well-
known London test house and one in Ireland.
The methodologies employed were a mix of agile, fragile, and traditional. The
technologies in use were mainly Microsoft-based, including Visual Studio Team
System (VSTS), Team Foundation Server (TFS), SQL Server, Windows XP, and
.NET, with some use of UNIX/Linux, Oracle databases, and Citrix Metaframe. Most
engagements involved the ubiquitous test management tool – HP Quality Centre.
As I worked with these companies, a pattern of behavior emerged, as depicted
in the composite test process improvement (TPI) chart (Figure 13.1). Essentially,
where organizational and process maturity is relatively low, significantly more effort
is needed to make agile methods work in practice. My observations over the past few
years will probably not be a surprise to anyone working in the software testing field:

The first observation is that of constantly changing business priorities, which
leads to frustration and demoralization within the test teams. Often a project
is started as high-priority to get it moving, and then as time goes by it is somewhat
forgotten and moved down the list. Test teams find it difficult to plan in
this situation, whether using agile techniques or not.
Second, the often frantic demand for new functionality by the business managers
seems to ignore the complexity of what's already out there.
Third, with weekly and sometimes daily release cycles, the intellectual exercise
required to think about likely modes of failure is compressed into a timeframe
which I believe is bound to lead to problems later in the release cycle. People
just do not have the time to think and come up with that “eureka” moment.
Finally, the test teams I have worked with were small and enthusiastic and had
an important role to play as gatekeepers and firefighters, yet many seemed to
have little influence over the business demands or the investment in new tools
and technologies to help them improve their own career opportunities. Many
groups were asked for metrics to prove their case and explain their added value,
yet struggled to record even the simplest numbers due to lack of time.

In summary, many of the agile projects I have been exposed to have suffered from

no documented test process;
no formal test policy, strategy, or plan;
many tests, but with no formal design;
no test (or other) metrics;
no proper test environment (some tested in “live” environment);
no documented test basis; and
testing/testers not involved early enough.

13.4 Lessons Learned
These are the lessons I have learned through my exposure to the NGT:

Agile versus fragile – As mentioned earlier, I have seen both agile and fragile
projects. Fragile is apparently the buzzword for a “fake agile” project: one
where some, but not all, of the buzzwords are used, yet the test teams lack the
expertise and discipline to actually follow through and carry them out to a
successful conclusion. I've seen some development groups totally embrace the
concept and redesign, build, and unit test their flagship Web site in a few weeks
with no recognized testing function other than some user acceptance testing (UAT)
done by the project's business analysts on behalf of the user community. They
used Extreme Programming techniques. It was agile. It worked. It may not work
for you, but in this context it was amazingly successful and cost-effective.
Islands of enterprise – Many times I find that developers, business analysts,
project managers, operations people, and infrastructure and architecture experts are
all working in isolation with no overall knowledge of the “big picture” of what
the systems are supposed to do. Only in the test group does all of this expertise
supposedly come together; to perform an end-to-end test of a business process
at either a systems or UAT level requires knowledge of the back-end systems
and databases, interfaces, protocols, and much, much more than just the small
changes developed at the front end in the Scrum. This increases the knowledge
and skill levels needed in the test team. I do not believe that the software testing
industry has yet found the right balance, and part of the problem is the awareness
(or lack of it) of what testing is all about.
No one-size-fits-all solution – There are no simple answers, and no one size fits all. The
problems are essentially the same; however, the solution within each organization
is unique – it depends on the organization's culture, level of investment, likely
resistance, and buy-in from all senior stakeholders to give people time to “sharpen
the axe.”
The value of embracing agile – In one large organization we worked with, agile
has consistently delivered new and valuable (revenue-enhancing) business func-
tionality month in, month out; it has fundamentally changed the relationship
between the business and the IT group, and, although we would normally expect UAT
to be done by the users, in this case the IT group has taken on that responsi-
bility on the users' behalf. They will now look to solve the problems of doing
agile development while also coping with end-to-end testing, and especially
nonfunctional testing such as performance, usability, accessibility, and security.
Tool support for agile – The tools, methods, and techniques within the industry
are all starting to adapt to agile methods. We are starting to find that using
commercial-strength test management and test automation tools is not as essen-
tial as it once was if you adopt the true agile principles. There are many open-
source test tools available that can help. Maintaining traditional automated test
scripts within a two-week sprint is a nightmare and often impossible to do well.
And what value do the automated regression tests provide? Perhaps some comfort
level, but do they find any new bugs or give you good information about the qual-
ity of the new code drop? Maybe the time would be better spent using exploratory
techniques. (With a control mechanism like session-based test management, these
can be very effective in some situations.)
Process improvement – Paradoxically, the process improvement community does
not yet appear to have models that will be of use in an agile environment (e.g.,
TPI, TMM), and yet technical capability and organizational maturity are required,
in my experience, for agile to work well. Testers in the organizations I have
worked with almost exclusively relied on their experience and domain knowledge
to design tests, whereas I would expect agile testers to also be highly skilled test
professionals; with a crazy workload they will need to use techniques to get better
coverage with fewer tests, use softer skills to negotiate a better position, and have
the technical skills to set up their own test data and test environments.
Breathtaking pace – One retail web company released software three times a
week at a breathtaking pace. Attempts to consider testing beyond the
Scrum functionality for a particular sprint always seemed to get delayed until the
next release, by which time all the code and its impact had changed anyway.
Experience counts – At a U.S. software company, a small, well-funded and well-
managed, more technically able team delivered better testing in half the time with
fewer resources than a conventional test team.
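Session-based test management, mentioned above as a control mechanism for exploratory testing, needs very little tooling. The sketch below is a hypothetical illustration rather than a tool from any of the engagements described: a time-boxed exploratory session with a charter and timestamped notes.

```python
# Hypothetical sketch of session-based test management bookkeeping:
# a time-boxed exploratory session with a charter and recorded notes.
from dataclasses import dataclass, field
import time

@dataclass
class TestSession:
    charter: str                 # the mission for this exploratory session
    timebox_minutes: int = 60    # agreed length of the session
    notes: list = field(default_factory=list)
    started_at: float = field(default_factory=time.time)

    def log(self, note: str) -> None:
        """Record an observation with the elapsed minutes into the session."""
        minutes = (time.time() - self.started_at) / 60
        self.notes.append(f"[{minutes:.1f}m] {note}")

    def summary(self) -> str:
        """A one-line debrief for the test lead at the end of the timebox."""
        return (f"Charter: {self.charter} "
                f"({len(self.notes)} notes, {self.timebox_minutes}m timebox)")
```

A tester would open a session against a charter, log observations while exploring, and hand the summary to the test lead at the end of the timebox, which gives exploratory work the audit trail that heavyweight test scripts would otherwise provide.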

13.5 Conclusions
These then are my personal opinions based on recent experience, and since I am still
learning about “agile testing,” they are subject to change:

Agile methods are seen as the NGT – Everyone wants to get on board; they
are good (or bad) depending on the context; they may scale depending on the
organization and its maturity in the test department. In my opinion, agile does
not sit on its own as the NGT. It's just another tool in the toolbox; it may
help, it may not. It is worth a look. You decide.
Don't forget the skills and people challenges – As long ago as 1976, in my first
programming job at a telecommunications company, I had to write Program
Acceptance Tests (PAT) (yes, I know it is tautologous, but that's life) and show
them to my manager BEFORE I wrote the code. Sounds a bit like test-driven
development to me. Are we not going “Back to the Future”? I believe that some
of the disciplines we had back in the old days, without being too nostalgic, have
been lost and need to be rediscovered in our test groups; testers need to own their
own careers, learn agile methods, learn the test techniques, and vastly improve
their own technical knowledge and abilities if they are to succeed in the brave
new world.
Apply common sense – Traditional also works if you do it properly; you cannot
build a hospital in a day! We must not forget the end-to-end and non-functional
aspects of testing and quality. We don't know how to communicate with manage-
ment and the businesses, and the businesses don't seem to like or care about IT.
Not quite mainstream, yet – Agile testing techniques are not yet in the main-
stream or fully understood; they are not methods or techniques as such that stand
up to a lot of scrutiny. However, there are some good ideas and principles that
can and should be adopted by anyone wishing to make testing better, cheaper,
and faster.
Size (and complexity) matters – Using an “agile” method for testing (or devel-
opment) should always be explained “in context” of the size or complexity of the
system under test, and not just as a theoretical exercise. How many developers,
how many testers, how many screens, how many interfaces, how many servers, and so on –
these questions must be answered. How can I write down on small A5 cards my
“requirements” for an air traffic control system or a new missile defense system?
Being used to weekly and sometimes daily releases at some web clients, I asked
one project manager (attending a training course I was running) when the next
release of their radar software was due (expecting “next month”), and she said
four years hence, just in time for the next Olympics!
Don't neglect the documentation – The way in which tests and test plans are
documented should be examined. In very complex object-oriented systems there
are often thousands of tables and hundreds of “screens,” so keeping track of it all
is a nightmare, and I can understand why there is so little documentation either
for design or for test.
The pace of change – The pace of change is often so fast that the ink is not dry on
the code or the test (whether test-driven development is used or not!) before the
next change comes charging along; this seems to create a strange effect where
people ignore several changes and don't worry about the related impacts (i.e.,
they are so swamped with work that they do their best and fix and test what they
can without doing a full impact analysis). This leads to integration, regression,
and performance faults much later in the project.
Don't throw the baby out with the bathwater – The skill levels required to adopt
“agile testing” should be examined in more detail. I have met some very clever
people who can and do test in an agile way. But they know the traditional test
design techniques as well, and I think they apply these perhaps without really
knowing it. Can we teach good testing techniques to large numbers of testers?
Avoid NGT for NGT's sake – We must build a more joined-up and collaborative soft-
ware testing industry in which we share ideas and resources rather than compete for
the NGT. The poor customers have been confused by promises from certificated
experts and supposedly amazing new methods; in truth, it's a complex world, and
we need skilled analysts to help solve the problems, and business people who stop
ignoring the complexities of what they have built.
14 The Power of Continuous Integration Builds
and Agile Development
James Wilson, CEO of Trinem

This case study will examine how organizations can benefit from applying continuous integra-
tion build (CIB) solutions; it will also look at how adopting agile best practices can increase
development productivity while introducing performance management techniques, as well as
examining the role and use of testing throughout CIB.
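The continuous integration build idea examined here boils down to an automated cycle that rebuilds and retests the code on every change, failing fast at the first broken step. The sketch below is a minimal illustration under assumed build commands (`git`, `make`), not Trinem's actual tooling.

```python
# Minimal sketch of a continuous integration build (CIB) cycle.
# The step commands are illustrative assumptions, not from the case study.
import subprocess

BUILD_STEPS = [
    ["git", "pull", "--ff-only"],   # pick up the latest committed changes
    ["make", "clean", "all"],       # rebuild the application from scratch
    ["make", "test"],               # run the automated regression suite
]

def run_cib(steps=BUILD_STEPS) -> bool:
    """Run each build step in order; stop and report failure at the first error."""
    for step in steps:
        result = subprocess.run(step)
        if result.returncode != 0:
            print(f"CIB failed at step: {' '.join(step)}")
            return False
    print("CIB passed")
    return True

if __name__ == "__main__":
    run_cib()
```

Hooking such a script to the version control system so that it fires on every commit is what turns an ordinary build script into a continuous integration build.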

14.1 Introduction
My name is James Wilson and I am the CEO of Trinem, based in Edinburgh, United
Kingdom. I cofounded Trinem with Philip Gibbs in 2001 after working for a number
of years as an IT consultant, primarily delivering software configuration management
(SCM) services and solutions. Since 2001, Trinem has implemented SCM projects
spanning various technologies, methods, and scales.
Trinem has implemented processes, and had its technology implemented, for a
wide spectrum of organizations, including NASA (United States), Bank of New York
(United Kingdom), HBOS (United Kingdom), Belastingdienst (Holland), Syscom
(Taiwan), SDC (Denmark), and GE (United Kingdom and Holland).
This case study illustrates the challenges, approaches, and benefits that I have
encountered while developing agile application development processes.
This case study will also explore some of the key technologies that were used,
both commercial and open source, to augment the implementations delivered by
Trinem and myself.

14.2 Agile, Extreme Programming, Scrum, and/or Iterative?
Invariably, when customers ask for consulting services to assist with the implemen-
tation of SCM and application development best practices, they already have an idea
of what is happening in the industry. Typically the customer will talk at length
about the latest development methods, the newest techniques, and the most popular
processes that will increase productivity, reduce costs, and convert their currently
unremarkable development teams into superdevelopers who would be the envy of
their competitors.
In addition to managing the varied interpretations of the latest “fads,” the endless

