We just published a new case study from a leading Property & Casualty insurance provider, which we think neatly outlines the advantages of using Service Virtualization to manage the performance of a complex, multi-tier application.
In this example, the company decomposes an application-wide Performance Test into the individual steps that make up the application workflow, and assigns a target time to each step. Those decomposed per-step targets form a “Performance Budget” that adds up to the overall performance goal.
Without decomposing the performance budget, you get no visibility into which element of the application is dragging down response time – you only know the end-to-end response time is not up to snuff (4.0 seconds here vs. the expected 2.1-second response time for customers).
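The budget idea is simple enough to sketch in a few lines. Here's a hypothetical illustration – the step names and timings below are invented for this example, not taken from the case study – showing how per-step targets expose which step blows the overall budget:

```python
# Hypothetical per-step targets (seconds) summing to the 2.1 s end-to-end goal.
BUDGET = {
    "login": 0.3,
    "quote_lookup": 0.8,
    "rating_engine": 0.6,
    "policy_render": 0.4,
}

# Invented timings from one load-test run, summing to the 4.0 s observed.
measured = {
    "login": 0.3,
    "quote_lookup": 0.9,
    "rating_engine": 2.4,   # the hidden time hog
    "policy_render": 0.4,
}

def over_budget(budget, timings):
    """Return the steps whose measured time exceeds their budgeted time."""
    return {step: (timings[step], limit)
            for step, limit in budget.items()
            if timings[step] > limit}

print(f"end-to-end: {sum(measured.values()):.1f}s "
      f"vs budget {sum(BUDGET.values()):.1f}s")
print("offenders:", over_budget(BUDGET, measured))
```

With only the end-to-end number you would see 4.0 s vs. 2.1 s and nothing more; with the decomposed budget, the rating step stands out immediately.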
Given that, the performance tester's first impulse is often to throw more hardware at the test lab to try to get that speed up. This may help a little, but it usually misses the real time hog lurking beneath the surface.
This is where we apply Service Virtualization – to virtualize the Behaviors of each of the component services in the end-to-end workflow. Behaviors in this sense are not functional; they are the realistic response times of each component when subjected to load. By virtualizing the components around your component under test, you can determine which one needs tuning and which ones aren’t contributing to the application’s performance problems. We think this customer employed the practice, with the assistance of our LISA Virtualize product, with a high level of success.
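To make the idea concrete, here's a minimal sketch of the technique – emphatically not the LISA product itself, and all names and latencies are invented – in which a stand-in for a dependent service reproduces realistic response times instead of real business logic, so the component under test is exercised against stable, tunable neighbors:

```python
import random
import time

def virtual_service(base_latency, jitter=0.0):
    """Build a stub that sleeps for a realistic latency, then returns a canned reply.

    base_latency and jitter (seconds) would be tuned to match the response
    times observed for the real component under load.
    """
    def respond(request):
        time.sleep(base_latency + random.uniform(0.0, jitter))  # simulated behavior
        return {"status": "ok", "echo": request}                # canned payload
    return respond

# Surround the component under test with virtual versions of its neighbors.
rating_engine = virtual_service(base_latency=0.05, jitter=0.02)
policy_store = virtual_service(base_latency=0.01)

start = time.perf_counter()
reply = rating_engine({"quote_id": 42})
elapsed = time.perf_counter() - start
print(f"virtualized rating engine responded in {elapsed * 1000:.0f} ms")
```

Because each virtual neighbor's latency is fixed and known, any slowdown you measure in the end-to-end test can be attributed to the real component in the middle.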
We first covered this topic in a paper I worked on with Ken Ahrens called “Virtualizing Over-Utilized Systems,” along with a series of blog posts on the subject last year. Look for the next installment on virtualizing performance testing from us this year – I hope we can cover some new ground for you “application speed freaks” out there.
The discipline of Performance Testing is sometimes also called Non-Functional Testing (NFT), but the lines of NFT blur when your performance testing also incorporates the realistic use cases and variable data required to get a true snapshot of real-world performance levels.