Testing the Internet of Everything
Things we already have and things we might need
The so-called Internet of Things (IoT) or Internet of Everything (IoE) is one of the fastest-growing trends in technology, with Gartner forecasting 6.4 billion connected “things” in use in 2016. It brings fresh challenges as well as opportunities, and this is especially true when it comes to testing the connected devices and their software.
A new article written for CA by Paul Gerrard, Principal of Gerrard Consulting, considers the question “How will the Internet of Things Affect Testers?” Among the potential risks of failure, Paul identifies the complexity of “interactions between devices”, which “may be unpredicted, unforeseen and unknown”.
This is not a new challenge for testers, who are often faced with more possible combinations than can reasonably be tested, many of which cannot even be known in advance. In both API testing and the IoT, for instance, testers face numerous discrete units of work, each of which can be combined in different ways.
Some of these APIs might have been created by a third party, and the possible orderings and combinations of different API versions can cause the number of potential tests to skyrocket beyond anything that could be executed exhaustively.
The ability to “… identify the hundreds, thousands or millions of tests” needed to test the interconnected devices rigorously will therefore be as necessary in testing the IoT as it already is in API testing. This will require the ability to systematically reduce the number of tests to a realistic number without compromising quality, and Paul, like CA, advocates a Model-Based approach to achieve this.
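As an illustration of how such a reduction can work, a simple greedy “pairwise” selection keeps only enough tests that every pair of parameter values appears together in at least one test. The parameters and values below are invented for the example, and this sketch is not CA's or Paul's specific method:

```python
from itertools import combinations, product

# Hypothetical IoT test parameters (names and values are illustrative).
parameters = {
    "device_firmware": ["v1", "v2", "v3"],
    "api_version":     ["19", "20", "21"],
    "network":         ["wifi", "lte", "zigbee"],
    "gateway":         ["home", "industrial", "mobile"],
}

def all_pairs(params):
    """Every (parameter, value) pairing that some test must exercise."""
    pairs = set()
    for a, b in combinations(params, 2):
        for va, vb in product(params[a], params[b]):
            pairs.add(((a, va), (b, vb)))
    return pairs

def covered_by(test, pairs):
    """The subset of `pairs` that a single test configuration exercises."""
    settings = set(test.items())
    return {p for p in pairs if p[0] in settings and p[1] in settings}

def pairwise_suite(params):
    """Greedy reduction: repeatedly pick the candidate test that covers
    the most still-uncovered pairs, until every pair is covered."""
    names = list(params)
    candidates = [dict(zip(names, vals)) for vals in product(*params.values())]
    uncovered = all_pairs(params)
    suite = []
    while uncovered:
        best = max(candidates, key=lambda t: len(covered_by(t, uncovered)))
        uncovered -= covered_by(best, uncovered)
        suite.append(best)
    return suite
```

Here exhaustive testing would need 3 × 3 × 3 × 3 = 81 configurations, while the greedy pairwise suite covers every pairing of values in roughly a dozen tests, which is the kind of systematic reduction the article calls for.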
There are, however, features specific to the IoT which might make it particularly complex. Among other factors, Paul identifies the low-level hardware devices often involved, as well as the fact that the user base might be millions of mobile individuals, with objects moving across networks and certain devices carrying their own networks.
When considering how to perform functional testing in the face of such complexity, Paul identifies several requirements. Testers will need to be able to simulate “thousands or millions of devices”, as well as to have “data that is fit for purpose”. The latter will involve the ability to generate and edit data, as well as to monitor and track its usage.
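To give a flavour of what simulating large numbers of devices might look like, the sketch below spins up a thousand virtual devices concurrently, each pushing readings to a stand-in for the system under test. The function names and the in-memory “transport” are illustrative assumptions, not any particular tool's API:

```python
import asyncio
import random

async def send_reading(device_id, reading, collected):
    # Stand-in for a network call (MQTT, HTTP, CoAP, ...) to the
    # system under test; here we just record what was "sent".
    await asyncio.sleep(random.uniform(0, 0.01))
    collected.append((device_id, reading))

async def simulate_device(device_id, readings, collected):
    # One virtual device sending its readings in sequence.
    for reading in readings:
        await send_reading(device_id, reading, collected)

async def simulate_fleet(n_devices, readings_per_device):
    # Run all virtual devices concurrently on one event loop.
    collected = []
    tasks = [
        simulate_device(
            device_id,
            [random.gauss(20.0, 2.0) for _ in range(readings_per_device)],
            collected,
        )
        for device_id in range(n_devices)
    ]
    await asyncio.gather(*tasks)
    return collected

# Example: 1,000 virtual devices, 5 readings each.
results = asyncio.run(simulate_fleet(1000, 5))
```

Because the devices are coroutines rather than real hardware or threads, a single test machine can plausibly scale this pattern to very large fleets, which is exactly the kind of simulation capability Paul describes.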
On the surface, these requirements are similar to the current need of testers to have constant access to realistic test data and production-like systems. Together with effective test case design, the right data and environments form the three pillars of rigorous testing at speed.
The IoT will of course present new challenges. It will require tooling to “execute very large numbers of tests”, as well as the more fundamental challenge of creating these tests in the first place. Tools are emerging, however, capable of executing tests efficiently across both software and hardware, while Model-Based approaches will presumably have a role to play here too.
Similarly, high-performance data generation engines which offer sufficiently comprehensive data generation functions already exist to create “trusted data sets” based on “real world operations”. CA worked with Hitachi Consulting, for instance, to generate realistic data for performance testing as part of Copenhagen’s move to become a smart city. As set out in chapter 5 of Hitachi Consulting’s eBook, Engineering the New Reality, this included creating data on the basis of historical data from 1.7 billion GPS journeys, while also learning from physical sensors such as GPS in mobile devices.
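A much-simplified sketch of the underlying idea: derive summary statistics from a small set of historical records, then sample synthetic records that follow the same distribution. This is purely illustrative, with invented journey durations, and is not CA's actual data generation engine:

```python
import random

def summarise(historical_durations):
    """Reduce historical records to the statistics we will sample from."""
    n = len(historical_durations)
    mean = sum(historical_durations) / n
    variance = sum((d - mean) ** 2 for d in historical_durations) / n
    return mean, variance ** 0.5

def generate_journeys(historical_durations, count, seed=None):
    """Produce synthetic journey durations shaped like the historical data."""
    rng = random.Random(seed)
    mean, sd = summarise(historical_durations)
    # Clamp at zero: a journey cannot have a negative duration.
    return [max(0.0, rng.gauss(mean, sd)) for _ in range(count)]

# Hypothetical historical journey durations in minutes.
historical = [12.0, 15.5, 9.8, 14.2, 11.1, 13.7]
synthetic = generate_journeys(historical, count=1000, seed=42)
```

The seed makes the generated data reproducible between test runs, one of the properties that distinguishes a “trusted data set” from ad-hoc random data.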
The IoT will therefore present fresh challenges for testing, but there are already solid foundations on which to build. As Paul concludes, “testers will have to learn to create better test models and how to use them with more technical modelling and simulation tools”, but some of the techniques and technology already appear to be in place, and seem to be moving in the right direction to accommodate this shift.