7 reasons the next-gen mainframe will be part of Hyperscale Cloud
The modern software factory will run on mission-essential workloads
In my last blog, I shared that hyperscale computing is a set of architectural patterns for delivering scale-out IT capabilities at massive, industrialized scale. Many enterprises under pressure to develop innovations quickly and "hyper" scale them to millions of users worldwide are now looking to the hyperscale cloud.
Hyperscale computing has until now involved abstracting data centers into software running on low-cost servers and standard storage drives. These large-scale data centers are located near low-cost power sources and offer availability through a massive build-out of redundant components. Hyperscale computing usually involves a minimum of half a million servers, virtual machines (VMs), or containers (the bragging rights seem to be about how much real estate one owns).
Take a look at Netflix’s architecture documented here. What is interesting is that, per the Netflix blog, “Failures are unavoidable in any large-scale distributed system, including a cloud-based one. However, the cloud allows one to build highly reliable services out of fundamentally unreliable but redundant components.”
This raises certain questions. Can enterprise companies use hyperscale computing technology in their own data centers to deliver mission-critical applications? Think of these key applications like electricity: you expect it to work always, and things really fall apart without it. If the public hyperscale cloud is built with an expectation of failure on top of unreliable, low-cost components, can hyperscale capabilities be created in private or hybrid data centers that meet the necessary service-level agreements (SLAs) for mission-critical workloads?
The answer is, "possibly." Using commodity servers involves a massive investment in huge data center footprints and associated power management, plus hiring a lot of people to do what Netflix is doing. For example, they say: "By incorporating the principles of redundancy and graceful degradation in our architecture, and being disciplined about regular production drills using Simian Army, it is possible to survive failures in the cloud infrastructure and within our own systems without impacting the member experience."
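The redundancy-and-drill idea Netflix describes can be sketched in a few lines. This is a toy, in-memory illustration only, not Netflix's actual Simian Army tooling: the class and method names here are hypothetical, and a real chaos drill would terminate live cloud instances rather than entries in a list.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

/** Toy model of a redundant service: any surviving replica can answer a request. */
class RedundantService {
    private final List<String> replicas = new ArrayList<>();

    RedundantService(int count) {
        for (int i = 0; i < count; i++) replicas.add("replica-" + i);
    }

    /** Chaos drill: terminate one replica at random, as a Chaos Monkey would. */
    void killRandomReplica() {
        if (!replicas.isEmpty())
            replicas.remove(ThreadLocalRandom.current().nextInt(replicas.size()));
    }

    /** Graceful degradation: the service answers as long as one replica survives. */
    String handleRequest() {
        if (replicas.isEmpty()) throw new IllegalStateException("total outage");
        return "handled by " + replicas.get(0);
    }
}

public class ChaosDrill {
    public static void main(String[] args) {
        RedundantService service = new RedundantService(3);
        service.killRandomReplica();                 // simulate one instance failing
        System.out.println(service.handleRequest()); // two replicas remain, so this succeeds
    }
}
```

The point of the sketch is the discipline, not the code: the drill is run routinely in production precisely so that a real failure looks like just another drill.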
This is not necessarily feasible for many enterprises, which are often trying to get out of the business of managing large data centers as a core competency. Also, not all workloads are created equal: businesses have a combination of mission-critical, mission-essential, and differentiating services for customers, partners, and employees, all of which require enterprise scale and reliability.
So, the real question is: what happens if one builds the next-gen hyperscale architecture out of highly reliable, high-security components and systems? From the outset, the mainframe has been architected to support high-performance transactional systems with the highest security for "electricity-like" workloads. But can z Systems be part of a hyperscale computing environment and deliver on that promise? Here are seven key characteristics of a mainframe that support hyperscale cloud infrastructure:
What does all this mean? It means that, with a hyperscale cloud that includes next-gen mainframes in the data center, your business can capture all of the advantages of hyperscale computing: you can leverage high-performance building blocks, support mission-critical workloads with the reliability they demand, and facilitate end-to-end security and compliance.
When I talk to companies about hyperscale, two concerns typically arise. First, managers assume that they will need to learn new skill sets to program and code in a z Systems environment, compared with the way they program and code in the cloud. Simply put: this is not true. For example, application developers are increasingly able to develop in Java for mobile-to-mainframe applications, without ever having to touch a green screen. The vendor ecosystem, CA included, has continued to deliver new tools to new developers on the mainframe.
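To make the "no green screen" point concrete, here is what a mobile-to-mainframe call can look like from a Java developer's desk: ordinary REST client code against a gateway (for example, a z/OS Connect-style API layer fronting a CICS or IMS transaction). The host, path, and account-lookup service below are hypothetical placeholders, and the sketch only builds the request rather than sending it.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// A Java developer targeting the mainframe writes the same HTTP client code
// as for any cloud service. The endpoint below is a made-up example of a
// REST API that a gateway might expose in front of a mainframe transaction.
public class MainframeRestClient {

    /** Build a GET request for a hypothetical account-lookup service. */
    public static HttpRequest accountLookup(String accountId) {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://zos.example.com/accounts/" + accountId))
                .header("Accept", "application/json")
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = accountLookup("12345");
        // Nothing mainframe-specific is visible to the developer here.
        System.out.println(req.method() + " " + req.uri());
    }
}
```

Sending it with `java.net.http.HttpClient` (or any familiar REST library) works the same way it would against any other backend, which is exactly the skills argument: the mainframe sits behind an interface developers already know.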
The second concern is cost. But think of it this way: instead of a gigantic hyperscale industrial data center, you can have one system sitting on the floor using less power than a coffee machine, and one person operating the equivalent of a state's department of motor vehicles portal. That's hard to beat!
As your workloads increasingly demand electricity-like, always-on IT services, you will need the power of hyperscale computing. Hyperscale data centers that rely on z Systems can deliver that for you, and improve your total cost of ownership (TCO), security, platform stability, and business agility as well.
Stay tuned for the latest, as CA and others continue our march towards the next-gen mainframe and what that means in a hyperscale cloud world!
(This article originally appeared in Cloud Strategy Magazine.)