Moving the test data closer to testers
Avoiding bottlenecks with automated, self-service provisioning
Data provisioning, and the need to “move the data closer to testing”, was a common theme at the recent user group for CA Test Data Manager and CA Test Case Optimizer, held on February 10th at Ditton Manor in the UK. Users discussed their experiences with the tools and had the opportunity to work more collaboratively with CA, helping shape the software to best reflect and solve their testing needs.
It seemed that most test data engineers in the room worked on projects in which data provisioning was proving a major bottleneck, and shared their experiences of how they have moved to a self-service, automated provisioning approach to overcome this.
From the perspective of testers and automation engineers, the fundamental challenge seems to be that they do not think in terms of database logic. Instead, they think in terms of the user interface and logical business language, so that a certain type of credit card, for example, is just that, and not a DB2 database value.
Testers are therefore often dependent on a central team of test data engineers, who have the data knowledge required to find the exact data sets testers need. However, this dependence on a central team opens up the potential for testing bottlenecks. Of 112 respondents to a CA survey (July 2015), over 60% cited difficulty finding the right data for a particular test as a “main software challenge”.
One reason for delays is that test data engineering teams often lack the technology needed to handle the constant flow of test data requests coming in. They might not have even semi-automated technology for discovering data from among large copies of production, and so have to search manually for the specific data sets testers need.
This data then has to be extracted and copied, often from across a number of distributed and mainframe platforms, all while retaining the referential integrity needed for testing. In general, this process is slow and overly manual, and at some organizations we’ve found that data refreshes take longer than the planned sprint itself.
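To illustrate what “retaining referential integrity” means in practice, here is a minimal sketch in Python using an in-memory SQLite database. The schema and table names are invented for illustration; the point is that a parent row and all of its dependent rows must travel together, so every foreign key in the extracted slice still resolves.

```python
import sqlite3

# Build a tiny stand-in for a production copy: customers and their orders,
# linked by a foreign key. Schema and data are illustrative only.
src = sqlite3.connect(":memory:")
src.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        amount REAL
    );
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1, 9.99), (11, 1, 24.50), (12, 2, 5.00);
""")

def extract_customer_slice(conn, customer_id):
    """Extract a customer together with all dependent orders, so the
    resulting slice keeps its referential integrity."""
    customers = conn.execute(
        "SELECT * FROM customers WHERE id = ?", (customer_id,)).fetchall()
    orders = conn.execute(
        "SELECT * FROM orders WHERE customer_id = ?", (customer_id,)).fetchall()
    return {"customers": customers, "orders": orders}

data_slice = extract_customer_slice(src, 1)

# Integrity check: every order in the slice references a customer
# that is also present in the slice.
customer_ids = {row[0] for row in data_slice["customers"]}
assert all(order[1] in customer_ids for order in data_slice["orders"])
```

A real extraction tool walks the full foreign-key graph across many tables and platforms rather than two hand-picked queries, but the invariant it must preserve is the same one asserted here.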
In order to meet tight deadlines, test data engineers are moving towards a self-service provisioning model. For this, data must be exposed to testers using the language they are familiar with, removing the reliance on subject matter experts.
As one test data engineer discussed at the user group, analysing test cases, requirements and data queries can reveal the attributes testers actually need, using them to build a Test Mart. These attributes can then be mapped to logical business language, so that data can be requested based on the language of the requirements and test cases, rather than the underlying data attributes.
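The mapping step above can be sketched in a few lines of Python. The business terms, column names, and table name below are hypothetical, not CA Test Data Manager's actual model; the sketch just shows how a logical business phrase can be translated into a parameterised query against the underlying attributes.

```python
# Hypothetical Test Mart mapping: logical business language (as it appears
# in requirements and test cases) -> underlying data attributes.
BUSINESS_TERMS = {
    "gold credit card":    {"card_type": "VISA", "tier": "GOLD"},
    "expired credit card": {"card_type": "VISA", "status": "EXPIRED"},
}

def build_query(term, table="cards"):
    """Translate a business-language request into a parameterised SQL query,
    so testers never need to know the column names themselves."""
    attrs = BUSINESS_TERMS[term]
    where = " AND ".join(f"{col} = :{col}" for col in attrs)
    return f"SELECT * FROM {table} WHERE {where}", attrs

sql, params = build_query("gold credit card")
```

A tester asks for a “gold credit card”; the Test Mart resolves that phrase to `card_type` and `tier` predicates without the tester ever seeing the schema.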
The Test Data on Demand web portal from CA, for example, provides dynamic form building, which allows test data engineers to construct self-service forms based on exact criteria. Testers can request the data attributes they need using drop-down menus, and the attributes are then combined and delivered automatically. This provides them with on-demand access to the data they need to execute any possible test.
We heard how such approaches have reduced data provisioning time to just 1-2 minutes – in contrast to the 3-4 weeks one automation consultant described having to wait for data. If data is further cloned as it is provisioned, it becomes available in parallel and on demand, while version control over the data means that it can be automatically updated to reflect changes to the requirements.
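The cloning idea can be illustrated with a short, hypothetical sketch: if each tester receives an independent copy of the provisioned data, with primary keys shifted into a per-tester range, parallel test runs cannot collide on the same rows. The offset scheme and row shape below are assumptions for illustration only.

```python
import copy

def clone_for_tester(template_rows, tester_id, key_range=1000):
    """Give each tester an independent copy of a provisioned data set,
    shifting primary keys into a per-tester range so that parallel
    runs never collide on the same rows."""
    offset = tester_id * key_range
    rows = copy.deepcopy(template_rows)  # deep copy: clones are independent
    for row in rows:
        row["id"] += offset
    return rows

# One provisioned template, cloned on demand for two testers in parallel.
template = [{"id": 1, "card_type": "VISA"}, {"id": 2, "card_type": "AMEX"}]
tester_a = clone_for_tester(template, 1)
tester_b = clone_for_tester(template, 2)

# The clones share no primary keys, so both testers can run at once.
assert {r["id"] for r in tester_a}.isdisjoint({r["id"] for r in tester_b})
```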
In sum, such approaches offer a way to eliminate testing bottlenecks and provide testers with on-demand access to the data they need to deliver quality software earlier. To find out more about moving towards a more automated, self-service approach to parallel data provisioning, download our white paper, Moving Beyond Masking and Subsetting.