Achieve rigorous testing within the bounds of 9 to 5.

Every tester’s been there: the code freeze is in, release day is looming, and there are more tests to execute than there are hours in the day. And that’s assuming the test cases needed for rigorous testing have been identified and created in the first place, which opens up a whole other can of worms. So, how can you rigorously test — without spending your entire weekend doing it?

Today’s systems are too complex to test manually, and testing windows are too short

There’s no way to execute every possible test to completely cover today’s systems. For example, a system with 32 nodes (logic points) and 62 edges (decisions) will have a whopping 1,073,741,824 possible routes through it. Based on estimated test execution time, this would require 34 years of testing! No amount of extra-long nights and weekend work can ever close that type of gap.
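To see why route counts explode, it helps to count the paths through even a tiny flowchart. The sketch below uses an invented six-node graph (the node names and structure are illustrative, not taken from any real system); every extra decision layer multiplies the number of routes:

```python
# Hypothetical flowchart as a directed acyclic graph:
# each node maps to the nodes reachable by one decision edge.
GRAPH = {
    "start": ["a", "b"],
    "a": ["c", "d"],
    "b": ["c", "d"],
    "c": ["end"],
    "d": ["end"],
    "end": [],
}

def count_routes(graph, node, goal):
    """Count every distinct route from node to goal by recursion."""
    if node == goal:
        return 1
    return sum(count_routes(graph, nxt, goal) for nxt in graph[node])

print(count_routes(GRAPH, "start", "end"))  # 4 routes in this toy graph
```

Six nodes already yield four routes, and each additional layer of decisions multiplies the total, which is how a system of a few dozen nodes can reach over a billion.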

Testers often attempt to reduce the number of tests by using equivalence partitioning. However, to reliably partition a system requires an understanding of all the logic that needs to be tested, and ideally a fully functional description of the system. What’s more, tests still need to be created for each partition, bringing you back to the challenge of identifying and creating an optimal set of tests when faced with huge complexity.

The complexity of modern applications means that even if the number of tests is reduced, there will still be more than can feasibly be executed manually. For fun, we recently created an approximate model of the possible routes through Pokémon Go and estimated that over 107 million paths are needed just to cover a high-level flow, even with the subprocesses optimized. No wonder kids (and a fair number of grownups, too) are finding so many glitches!

Why don’t we just automate test execution?

Automating test execution is a good place to start when moving to rigorous testing within the confines of a sprint. It drastically shortens one of the slowest aspects of testing. However, it is not a complete solution.

Even the best automation frameworks tend to bring you back to manual test creation, in the form of either script creation or keyword selection. The time spent converting test cases to automated tests often outweighs the time saved executing them, and maintenance can create an additional bottleneck.

When the system or requirements change, brittle automated tests must be updated; otherwise you risk automated test failures and wasteful over-testing. The time spent identifying the impact of a change on tests and then updating them can have a huge impact on the speed and quality of your application.

How to ensure quality when you have too many tests and not enough time 

The development team I work with releases code every 4-6 weeks. They use a method we call “active” flowchart modeling for rigorous functional testing. This approach ties automatically generated tests and data directly to an easy-to-maintain model of the system.

First, all the known logic of a system is modeled, usually using subject matter expertise, existing test cases and requirements. The flowchart model then serves as a mathematically precise directed graph, meaning that all possible paths through the modeled logic can be identified and created automatically.
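As a sketch of how automatic path identification might work (the flowchart below is a hypothetical example, not ARD's internal representation), a depth-first traversal of the directed graph yields every route, and each route is a candidate test:

```python
def all_paths(graph, node, goal, prefix=()):
    """Yield every path from node to goal through the directed graph."""
    path = prefix + (node,)
    if node == goal:
        yield list(path)
        return
    for nxt in graph.get(node, ()):
        yield from all_paths(graph, nxt, goal, path)

# Toy model: an entry decision and a payment decision around a shared step.
FLOW = {
    "start": ["guest", "member"],
    "guest": ["confirm"],
    "member": ["confirm"],
    "confirm": ["pay_card", "pay_cash"],
    "pay_card": ["end"],
    "pay_cash": ["end"],
    "end": [],
}

tests = list(all_paths(FLOW, "start", "end"))
# 2 entry choices x 2 payment choices = 4 candidate tests
```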

The paths are equivalent to tests that can also be optimized automatically, to reduce the total number while still covering every logically distinct combination. Numerous established algorithms exist for this, and CA Agile Requirements Designer offers All Pairs, All In/Out Edges, All Edges and All Nodes optimization, as well as Risk-Based approaches.
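ARD's optimization algorithms themselves aren't published here, but the idea behind an "All Edges" reduction can be sketched as a greedy set cover over the candidate paths (the paths below reuse an invented checkout flow for illustration):

```python
def path_edges(path):
    """Edges traversed by a path, as (from, to) pairs."""
    return set(zip(path, path[1:]))

def optimise_all_edges(paths):
    """Greedily pick paths until every edge is covered at least once."""
    uncovered = set().union(*(path_edges(p) for p in paths))
    chosen = []
    while uncovered:
        # Pick the path that covers the most not-yet-covered edges.
        best = max(paths, key=lambda p: len(path_edges(p) & uncovered))
        chosen.append(best)
        uncovered -= path_edges(best)
    return chosen

paths = [
    ["start", "guest", "confirm", "pay_card", "end"],
    ["start", "guest", "confirm", "pay_cash", "end"],
    ["start", "member", "confirm", "pay_card", "end"],
    ["start", "member", "confirm", "pay_cash", "end"],
]
reduced = optimise_all_edges(paths)  # 2 paths cover all 8 edges
```

Here four candidate tests collapse to two while every edge in the model is still exercised; the stricter profiles (All Pairs, All Nodes) trade coverage depth against pack size in the same spirit.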

Partitioning is possible with subflows, but the fundamental goal of this approach is different: we are trying to cover all the logically distinct combinations in the entire system, rather than testing just a subset of that logic. Quality can thereby be assured while still reducing the total number of tests.

Test execution and test data allocation can also be automated without requiring time-consuming, manual scripting. A reusable automation configuration file is assigned to a flow, mapping automated code snippets or key words to actions and objects. Dynamic or static data can further be attributed to the nodes of the flowchart, so that a fully executable, automated test pack is compiled when the optimized tests are created.
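A minimal sketch of that idea (the keyword names and data below are hypothetical, not ARD's actual configuration format): a reusable mapping from model nodes to automation keywords, plus test data attributed to individual nodes, turns each optimized path into an executable script:

```python
# Hypothetical keyword mapping: model node -> automation snippet name.
KEYWORDS = {
    "start": "open_application",
    "login": "enter_credentials",
    "checkout": "submit_order",
}

# Static test data attributed to individual nodes of the flowchart.
NODE_DATA = {
    "login": {"user": "test01", "password": "s3cret"},
}

def compile_test(path):
    """Compile one optimised path into (keyword, data) execution steps."""
    return [(KEYWORDS[node], NODE_DATA.get(node, {}))
            for node in path if node in KEYWORDS]

steps = compile_test(["start", "login", "checkout", "end"])
```

Because the mapping is keyed to the model rather than to individual scripts, regenerating the optimized paths regenerates the executable pack with it.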

In this approach, testing is not only automated, but can also react to change. When the flowchart is updated, all the test assets that are traceable to it are likewise updated automatically, so that the effort of testing a change is equivalent to updating the model.

Let the machine share the stress of end-of-sprint testing

This approach has been proven to end hero culture and the 4 AM scrambles to execute all testing before the release date. The time spent modeling the initial flowcharts is quickly outweighed by the time saved on slow, manual test creation and execution, as well as manual script generation, test maintenance and test data allocation.

The number one time-saver comes when the system changes, because you can update the regression test pack in minutes by updating the flow. As the test lead of the development team I mentioned earlier described to me, “We are now performing more regression testing in less time, while the automation suite has replaced the stress and errors that used to come at the end of the sprint.”

What are your thoughts on model-based testing?

What’s your excuse?

Get in touch with CA.
