Avoiding information hops for successful continuous delivery

A requirements-driven approach to testing and development

When Winston W. Royce, in 1970, described the single-step sequential model which became known as the Waterfall model, he recognized an inherent risk in it: there was a lack of feedback at each stage. And yet, by the 1990s, the method was prevalent, and still today supposedly “Agile” projects frequently turn out to be mini-Waterfalls.

In addition to this weak feedback loop, the model has fundamental drawbacks in terms of both efficiency and quality. It proposes that a user and a business analyst (BA) design the system, developers code it, and testers test it – in that order. At each of these linear stages, information is passed on and transformed. With each of these time-consuming “Information Hops”, quality degrades.

The impact of poor quality requirements

Some degradation in quality is unavoidable: part arises from inherent uncertainty regarding the exact nature of the system being developed, and part is a consequence of human error.

However, further uncertainty is introduced by incomplete, ambiguous requirements. This uncertainty breeds misunderstanding from the very start, which is then perpetuated throughout the software development lifecycle (SDLC), with a devastating effect on how well user expectations are transformed into a working system.

In fact, at least 56% of defects stem from ambiguity in requirements[1], while some place this as high as 59%[2], or even 65%[3]. These requirements defects can account for 64%[4] or even 80%[5] of defect remediation costs. They also create rework and project delays, as the later defects are detected, the harder they are to fix.

Subsequent information hops

In addition to the time spent converting poor quality requirements to code, each subsequent information hop takes more time, and further impacts quality.

Test cases are typically derived manually from the requirements, in an unsystematic manner – a time-consuming process, which rarely achieves sufficient test coverage. One team we worked with, for example, spent 6 hours creating 11 test cases with just 16% coverage.

Automating the execution phase alone won’t resolve this, as it usually introduces manual scripting and maintenance effort, in addition to the time spent setting up tests and investigating failures. Often, this time will outweigh the time saved during test execution.[6]


When the requirements change – and most development is now driven by change requests – test cases, data and automated tests all have to be checked and updated, and this is usually performed by hand. At one company we worked with, it took two testers two days to check the existing test cases when the requirements changed.

One input, several outputs: Making Continuous Delivery Possible

In order to deliver quality software which reflects constantly changing requirements, these time-consuming, damaging information hops should be avoided. The linear stages of the Waterfall model need to be collapsed, “shifting left” testing, development and design into one parallel effort.

One way to achieve this is to formally capture requirements. This must be done in a way which is accessible to users and BAs, as otherwise an information hop will remain. BAs, who are already familiar with Visio and business process models, might, for example, map requirements to a flowchart model. This breaks requirements down into the cause-and-effect logic which needs to be coded and tested, reducing ambiguity and incompleteness, and the defects they create.
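As an illustration, such a flowchart model can be represented as a directed graph of steps connected by condition-labelled edges. The sketch below uses a hypothetical loan-approval rule; the node names and conditions are invented for illustration, not taken from any real model.

```python
# A hypothetical requirement ("approve on credit score, or on income if the
# score is low") captured as a flowchart model: each node is a step, each
# edge carries the condition leading to the next step.
model = {
    "Start":       [("always", "CheckScore")],
    "CheckScore":  [("score >= 700", "Approve"),
                    ("score < 700", "CheckIncome")],
    "CheckIncome": [("income >= 50000", "Approve"),
                    ("income < 50000", "Reject")],
    "Approve":     [],   # terminal outcome
    "Reject":      [],   # terminal outcome
}

# Every cause and effect is now explicit, so gaps and ambiguities become
# visible: each non-terminal node must account for all possible conditions.
for node, edges in model.items():
    print(node, "->", [target for _, target in edges])
```

Because the model is just structured data, it can be checked mechanically – for example, that every referenced node exists and every non-terminal node has at least two outcomes or an unconditional edge.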

Because a flowchart model is mathematically precise, test cases and automated tests can be automatically derived from it. Not only does this avoid the time and manual effort spent transforming requirements to tests, it also works to ensure that quality is maintained. The tests – which are simply paths through the system’s logic – reflect 100% of the specified functionality, and so maximum functional coverage is achieved.
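A minimal sketch of that derivation, assuming a flowchart model represented as a dict of nodes and condition-labelled edges (the loan-approval model here is a hypothetical example): each path from the start node to a terminal node is one test case, and together the paths exercise every branch of the modelled logic.

```python
# Hypothetical flowchart model: node -> [(condition, next_node), ...]
model = {
    "Start":       [("always", "CheckScore")],
    "CheckScore":  [("score >= 700", "Approve"),
                    ("score < 700", "CheckIncome")],
    "CheckIncome": [("income >= 50000", "Approve"),
                    ("income < 50000", "Reject")],
    "Approve":     [],
    "Reject":      [],
}

def all_paths(model, node="Start", prefix=()):
    """Walk the model depth-first; each Start-to-terminal path is a test case."""
    prefix = prefix + (node,)
    if not model[node]:                  # terminal node: one complete case
        return [prefix]
    cases = []
    for condition, target in model[node]:
        cases.extend(all_paths(model, target, prefix + (condition,)))
    return cases

test_cases = all_paths(model)
for case in test_cases:
    print(" -> ".join(case))
```

Three paths fall out of this model, covering both approval routes and the rejection route. Because the cases are generated rather than hand-written, adding a branch to the model automatically yields the test case that covers it.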

When a change is made to the flowchart model, this is formally captured. The same sort of algorithms used to derive test cases can therefore be used to automatically identify exactly which test cases need to be updated or removed, and which ones are needed to retain maximum coverage. From requirements to test maintenance, information hops are eliminated, making the delivery of quality software, on time and within budget, possible.
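A sketch of that impact analysis under the same assumptions (test cases are paths through the model, hand-listed here from the hypothetical loan-approval example): a change to a node can only affect the test cases whose paths traverse it, so only those need review.

```python
# Test cases as paths through a hypothetical flowchart model
# (condition labels omitted here for brevity).
test_cases = [
    ("Start", "CheckScore", "Approve"),
    ("Start", "CheckScore", "CheckIncome", "Approve"),
    ("Start", "CheckScore", "CheckIncome", "Reject"),
]

def impacted(test_cases, changed_node):
    """Return only the test cases whose path passes through the changed node."""
    return [case for case in test_cases if changed_node in case]

# Changing the "CheckIncome" step flags two cases for review; the
# direct-approval path is untouched and needs no rework.
for case in impacted(test_cases, "CheckIncome"):
    print(case)
```

The same filter, run against the updated model's regenerated paths, shows which old cases are obsolete and which new ones are required to restore full coverage.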

[1] Bender RBT, 2009.

[2] IT University, Denmark, 2001.

[3] Hyderabad Business School, 2012.

[4] Hyderabad Business School, 2012.

[5] Bender RBT, 2009.

[6] Dorothy Graham, That’s No Reason to Automate.

Tom Pryce is a Product Marketing Manager, having been a technical and content writer for…


Modern Software Factory Hub
