A project used to have a start, a muddle and an end

But is agile development really any better?

The breakthrough of “agile” changed how the development lifecycle is perceived. But did it resolve, in practical terms, the issues that agile development principles and methodologies set out to address?

This blog considers the challenges which might still need to be addressed from a testing perspective.

The Waterfall model            

Before agile development principles took hold, the development lifecycle consisted of a design phase, a development phase, a testing phase and a deployment phase – in that order. As Dan North describes it, this linear, “1990s model” could take around 18 months, and the feedback loop was faint at best.

This model often left testers and developers feeling short-changed. Handed a supposedly complete specification, they frequently found that its ambiguities and gaping holes had been perpetuated in the code, while the test cases had failed to pick these defects up.

This created time-consuming rework, and often a defect had already rolled over the so-called Waterfall, leaving no way to repair it on time and within budget.

Testing techniques themselves were also typically slow and manual, extending the feedback loop even further. As late as 2014, 70% of testing remained manual, and test cases were derived by hand from the same poor-quality requirements.

In this Garbage In, Garbage Out scenario, the poor-quality requirements are the start, testing and development are the muddle, and the ending is rarely a happy one. In fact, research suggests that 60% of IT projects fail and as many as 31.1% are cancelled before completion.[1]

Further research has estimated that $312 billion in avoidable development costs are expended on defects, while developers reportedly spend half their time repairing bugs.[2] Rigorously testing software earlier to avoid defects, and improving communication between the business and IT, therefore appear to be good places to start in reducing costs and delivering projects on time.

The advent of “agile”

“Agile” development principles arose in part to achieve this closer alignment of business and technical initiatives, while also introducing testing earlier in the development cycle. Iterative development would accommodate incremental change, so that a defect could be discovered and remedied after four weeks rather than eighteen months, at far less cost.

The issue is that, even in supposedly “agile” contexts, the slow and unsystematic testing techniques discussed above are often still used. Sprints quickly come to resemble mini-Waterfalls and the feedback loop remains too long.

The specification that stood at the start of a Waterfall project has been replaced by a constant barrage of change requests. However, the formats used remain static, with no traceability to the test cases derived from them, so change remains a particularly acute problem.

When a change is made to the requirements, testers have no way to automatically identify its impact across inter-dependent components, and often have to check every existing test case by hand. This painstakingly slow maintenance rarely achieves sufficient coverage, so defects are once again detected too late as testing constantly rolls over to the next sprint.

Shortening the feedback loop

To shorten the feedback loop to the point where testing can keep up with changing user needs, requirements gathering, test design and test maintenance need to be collapsed into a single, collaborative phase. The “information hops” that reduce quality and create delays need to be avoided, with BAs, developers and testers working from an unambiguous point of reference.

This can be achieved using “active flowchart” modelling, which provides a requirements format familiar to BAs but which is also mathematically precise. Test cases can therefore be derived and executed automatically from the requirements themselves, and updated automatically when the requirements change.
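
To make this concrete, below is a minimal sketch in Python (not the actual tooling the approach refers to) of the underlying idea: a requirement is modelled as a small flowchart – a directed graph of steps and decision outcomes – and test cases are derived automatically by enumerating the paths through it. The flow, its node names and the derive_test_cases helper are hypothetical illustrations; when the modelled requirement changes, the test cases are simply re-derived rather than maintained by hand.

```python
# Minimal sketch: derive test cases as paths through a requirement modelled
# as a flowchart (directed graph). The flow and node names are hypothetical.
flowchart = {
    "Start":          [("enter details", "Validate input")],
    "Validate input": [("valid", "Check credit"), ("invalid", "Show error")],
    "Check credit":   [("approved", "Create account"), ("declined", "Show error")],
    "Show error":     [],
    "Create account": [],
}

def derive_test_cases(model, node="Start", path=None):
    """Enumerate every route from Start to an end node; each route is one test case."""
    path = (path or []) + [node]
    if not model[node]:                      # no outgoing edges: end of the flow
        return [path]
    cases = []
    for condition, next_step in model[node]:
        cases.extend(derive_test_cases(model, next_step, path + [f"[{condition}]"]))
    return cases

for i, case in enumerate(derive_test_cases(flowchart), 1):
    print(f"Test case {i}: " + " -> ".join(case))

# If the requirement changes -- say declined applications now go to a manual
# review step -- only the model is edited and the suite is re-derived:
flowchart["Check credit"] = [("approved", "Create account"),
                             ("declined", "Manual review")]
flowchart["Manual review"] = [("override", "Create account"),
                              ("reject", "Show error")]

print("--- after the change ---")
for i, case in enumerate(derive_test_cases(flowchart), 1):
    print(f"Test case {i}: " + " -> ".join(case))
```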

In this approach, otherwise manual testing effort is concentrated in the design phase, eliminating the delays that extend the feedback loop. Deriving test cases mathematically also drives up test coverage, so that defects are discovered earlier, avoiding the risk of project delays and budget overruns.
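
As a rough illustration of the coverage point, the same kind of model makes coverage measurable: you can check exactly which decision outcomes a test suite exercises. The tiny sketch below, again with hypothetical names and data, compares a single hand-picked “happy path” against a suite derived from the model, which covers every outcome by construction.

```python
# Rough sketch: measure which decision outcomes a test suite exercises.
# The model's outcomes and the example suites are hypothetical.
outcomes = {("Validate input", "valid"), ("Validate input", "invalid"),
            ("Check credit", "approved"), ("Check credit", "declined")}

def outcome_coverage(test_suite):
    covered = {step for case in test_suite for step in case}
    return len(covered & outcomes) / len(outcomes)

happy_path_only = [
    [("Validate input", "valid"), ("Check credit", "approved")],
]
derived_from_model = [
    [("Validate input", "valid"), ("Check credit", "approved")],
    [("Validate input", "valid"), ("Check credit", "declined")],
    [("Validate input", "invalid")],
]

print(f"Hand-written happy path: {outcome_coverage(happy_path_only):.0%}")
print(f"Derived from the model:  {outcome_coverage(derived_from_model):.0%}")
```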

References:

[1] The Standish Group, Chaos Manifesto, 2014. Retrieved from https://www.projectsmart.co.uk/white-papers/chaos-report.pdf.

[2] Cambridge University Judge Business School, 2013. Retrieved from http://insight.jbs.cam.ac.uk/2013/financial-content-cambridge-university-study-states-software-bugs-cost-economy-312-billion-per-year/


Tom Pryce is a Product Marketing Manager, having been a technical and content writer for…
