7 reasons the next-gen mainframe will be part of Hyperscale Cloud

The modern software factory will run on mission-essential workloads

In my last blog, I shared that hyperscale computing is a set of architectural patterns for delivering scale-out IT capabilities at massive, industrialized scale. Many enterprises under pressure to develop innovations quickly and “hyper”-scale them to millions of users worldwide are now looking to hyperscale cloud.

Hyperscale computing has until now involved abstracting data centers into software running on low-cost servers and standard storage drives. These large-scale data centers are located near low-cost power sources and offer availability through massive build-out of redundant components. Hyperscale computing usually involves a minimum of half a million servers (the bragging rights seem to be about how much real estate one owns), virtual machines (VMs), or containers.

Netflix: Poster child for Hyperscale

Take a look at Netflix’s architecture documented here. What is interesting is that, per the Netflix blog, “Failures are unavoidable in any large-scale distributed system, including a cloud-based one. However, the cloud allows one to build highly reliable services out of fundamentally unreliable but redundant components.”

This raises certain questions. Can enterprise companies use hyperscale computing technology in their own data centers to deliver mission-critical applications? Think of these key applications like electricity – you expect it to always work, and things really fall apart without it. If the public hyperscale cloud is built with an expectation of failure on top of unreliable, low-cost components, can hyperscale capabilities be created in private or hybrid data centers that meet the service level agreements (SLAs) required for mission-critical workloads?

The answer is, “possibly.” Using commodity servers involves a massive investment in huge data center footprints and associated power management – plus hiring a lot of people to do what Netflix is doing. For example, they say: “By incorporating the principles of redundancy and graceful degradation in our architecture, and being disciplined about regular production drills using Simian Army, it is possible to survive failures in the cloud infrastructure and within our own systems without impacting the member experience.”

Can hyperscale be a possibility for all?

This does not seem feasible for many enterprises, which are often trying to get out of the business of managing large data centers as a core competency. Moreover, not all workloads are created equal: businesses run a combination of mission-critical, mission-essential, and differentiating services for customers, partners and employees, all of which require enterprise scale and reliability.

7 characteristics of a mainframe that deliver hyperscale cloud

So, the real question is: what happens if one builds the next-gen hyperscale architecture from highly reliable, high-security components and systems? From the outset, the mainframe has been architected to support high-performance transactional systems with the highest security for “electricity-like” workloads. But can z systems be part of a hyperscale computing environment and deliver on its promise? Here are seven key characteristics of a mainframe that support hyperscale cloud infrastructure:


  1. Software-Defined: All compute, storage, middleware, and networking for mission-critical application/services are enabled through a software-defined architecture. That is, all elements of the infrastructure are virtualized and delivered as a service.
  2. Available and Elastic: Hyperscale data centers with z systems bring the best of both worlds to deliver the availability and elasticity required by enterprise-scale workloads. Hyperscale excels at scale-out (think 500K servers, VMs, or concurrent jobs), while z systems perform best in scale-up mode and deliver five 9s of availability through high-availability approaches like Parallel Sysplex. The result: a far more available, reliable and elastic infrastructure that can handle different types of enterprise workloads.
  3. Open: Despite what you may have heard, the z Systems ecosystem includes many open source elements, including support for enterprise-grade, native distributions of the Apache Spark in-memory analytics engine. And with Linux on z, almost all open source software available for Linux is accessible, including Docker containers, artificial intelligence (AI) and machine learning frameworks (e.g., Google TensorFlow), and modern languages (Go, Python, etc.).
  4. Highly Secure: Security in a hyperscale data center with next-gen mainframes is improved because they offer enterprise-grade security and compliance capabilities that no other server platform matches (e.g., EAL5 certification, crypto containers).
  5. Energy-Sustainable: Low power consumption and a small footprint are hallmarks of z systems; the platform’s power-management and density characteristics provide significant efficiencies compared with arrays of commodity servers.
  6. Intelligent Automation: Workloads can be orchestrated across the data center based on unique hardware and software requirements and service level agreements (SLAs) with the business. Advances in AI and machine learning can make intelligent automation a reality and bring IT closer to the vision of “NoOps.”
  7. Vendor-Neutral: An industry ecosystem has grown up around IBM – including CA Technologies, the #1 ISV for z Systems, and the world’s largest solution providers – all building hyperscale data centers to support enterprise customers.
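To make the intelligent-automation idea in point 6 concrete, here is a toy sketch of SLA-driven placement: routing a workload to scale-up or scale-out capacity based on its declared service level. All function names, fields, and thresholds are invented for this illustration; a real orchestrator would weigh far more factors.

```python
# Toy illustration of SLA-driven workload placement. Every name and
# threshold here is invented for the example, not a real product API.

def place_workload(workload):
    """Route a workload to scale-up (mainframe) or scale-out (commodity)
    capacity based on its declared SLA attributes."""
    # Mission-critical, five-9s work goes to the scale-up tier.
    if workload["mission_critical"] or workload["availability_sla"] >= 0.99999:
        return "scale-up (z systems)"
    # Everything else runs on elastic scale-out capacity.
    return "scale-out (commodity VMs/containers)"

payments = {"mission_critical": True, "availability_sla": 0.99999}
batch_report = {"mission_critical": False, "availability_sla": 0.999}

print(place_workload(payments))      # scale-up (z systems)
print(place_workload(batch_report))  # scale-out (commodity VMs/containers)
```

The design point is simply that placement becomes a policy decision driven by declared SLAs rather than a manual choice.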


What does all this mean? It means that with a hyperscale cloud that includes next-gen mainframes in the data center, your business can capture all of the advantages of hyperscale computing: you can leverage high-performance block solutions, support mission-critical workloads with complete reliability, and facilitate end-to-end security and compliance.

Cost-effective hyperscale computing

When I have talked to companies about hyperscale, two concerns typically arise. First, managers assume that they will need to learn new skill sets to program and code in a z systems environment compared to the way they program and code in the cloud. Simply put: this is not true. For example, application developers are increasingly able to develop in Java for mobile-to-mainframe applications, without ever having to touch a green screen. The vendor ecosystem, CA included, has continued to deliver new tools to new developers on the mainframe.
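As a hedged illustration of that point: a mainframe transaction exposed as a REST service (for example, via z/OS Connect) looks like any other HTTP endpoint to a modern developer. The host and path below are hypothetical, invented purely for the example.

```python
import urllib.request

# A mainframe transaction exposed as a REST service looks like any other
# HTTP API. The host and path here are hypothetical, for illustration only.
req = urllib.request.Request(
    "https://mainframe.example.com/api/accounts/12345",
    headers={"Accept": "application/json"},
    method="GET",
)

# The developer works with URLs and JSON, never a green screen.
print(req.full_url)      # https://mainframe.example.com/api/accounts/12345
print(req.get_method())  # GET
```

Nothing in the client code reveals that a mainframe is behind the endpoint, which is exactly the point: the skills transfer directly from cloud development.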

The second concern is cost. But think of it this way: instead of a gigantic hyperscale industrial data center, you can have one system sitting on the floor, using less power than a coffee machine, with one person operating the equivalent of a state’s department of motor vehicles portal. That’s hard to beat!

As your workloads increasingly demand electricity-like, always-on IT services, you will need the power of hyperscale computing. Hyperscale data centers that rely on z systems can deliver that for you – and improve your total cost of ownership (TCO), security, platform stability, and business agility as well.

Stay tuned for the latest, as CA and others continue our march towards the next-gen mainframe and what that means in a hyperscale cloud world!


(this article originally appeared in Cloud Strategy Magazine)


Ashok is General Manager of Mainframe at CA Technologies where he's responsible for the P&L,…
