Hyperscale cloud = Next-gen mainframe
Things are going to get interesting as the architectural battles intensify for new workloads such as machine learning, AI, and the next generation of killer apps built on blockchain.
What’s in a name? “A rose by any other name would smell as sweet.” Quoting Shakespeare seems apt at this time of the year when we yearn for spring.
In my conversations with CIOs running mission-critical businesses, discussions around “hyperscale computing” are becoming popular. The question at hand: “How can I get highly elastic infrastructure like Uber’s or Amazon’s, but with the reliability and availability of the mainframe?”
Here is the hyperscale computing definition from Gartner’s Lydia Leong: “Hyperscale computing is a set of architectural patterns for delivering scale-out IT capabilities at massive, industrialized scale. These patterns span all layers of the delivery of IT capabilities — data center facilities, hardware and system infrastructure, application infrastructure, and applications. Non-hyperscale components can be layered on top of hyperscale components, but the overall architecture is only ‘hyperscale’ through the level where all components use a hyperscale architecture.” (Source: Gartner, Hype Cycle for Infrastructure Strategies, 30 June 2016.)
A Hyperscale cloud can have millions of virtual servers and accommodate increased computing demands without requiring additional space, cooling, or electrical power. The total cost of ownership is typically measured in terms of high availability (HA) and the unit price for delivering an application or data.
This got me thinking about IBM’s recent investor briefing and how the next-generation mainframe (z Systems), expected to ship later this year, is set to become the transaction platform of choice for mission-essential workloads. The current z13, already delivering five-9s availability, is the world’s fastest computer and probably uses the power equivalent of my cappuccino machine! The z13 analyzes transactions in 2 milliseconds using machine learning with Spark on z/OS, manages up to 30B RESTful web interactions per day with Dockerized Node.js, and drives over 470K database reads and writes per second.
All of this is current reality, even before the next set of advances that will make the mainframe even more cost-efficient and secure. I’ve repeatedly heard clients say it’s time to stop treating the mainframe as a separate legacy system. What’s exciting is that the mainframe name can now be synonymous with “hyperscale cloud,” as the mainframe delivers on the promise of hyperscale computing and supports open-source tools with the highest reliability, availability, security, and performance – all at the lowest cost per transaction.
To put things in perspective, I’d like to share some highlights from IBM’s recent investor briefing that speak to innovation already in motion, in areas I am very excited about.
According to IBM, “90 percent of corporate data isn’t searchable via Google,” indicating the opportunity for organizations to tap this data for new business models. That said, there is a huge scarcity of data science skills. Even with advanced techniques, data scientists can spend weeks developing, testing, and retooling a single analytic model. At the same time, companies are realizing it isn’t cost-effective or easy to move data to a public cloud. This is the concept of data gravity: large datasets pull applications and services toward where the data resides. In addition to the usual security concerns, companies are realizing that analytics and machine learning workloads need to come to the data – not the other way around!
The next innovation in analytics takes us from BI and predictive analytics to machine learning – how can fraud detection happen in real time? IBM recently announced Machine Learning for z/OS, an application that allows data scientists to automate the creation, training, and deployment of operational analytic models using the Spark framework.
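To make the real-time scoring idea concrete, here is a minimal, hypothetical sketch of how a pre-trained fraud model scores a single transaction in-process. This is not IBM’s Machine Learning for z/OS API – the feature names, weights, and threshold are invented for illustration; a real deployment would train and serve the model through Spark.

```python
import math

# Hypothetical pre-trained logistic-regression weights (illustrative only;
# in practice these would come from a model trained with Spark ML).
WEIGHTS = {"amount_usd": 0.004, "foreign_merchant": 1.8, "night_hours": 0.9}
BIAS = -6.0

def fraud_score(txn: dict) -> float:
    """Return the estimated probability that a transaction is fraudulent."""
    z = BIAS + sum(WEIGHTS[f] * txn.get(f, 0.0) for f in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) function

def is_suspicious(txn: dict, threshold: float = 0.5) -> bool:
    """Flag a transaction when its fraud probability crosses the threshold."""
    return fraud_score(txn) >= threshold

# A large overnight purchase at a foreign merchant scores high...
risky = {"amount_usd": 2500, "foreign_merchant": 1, "night_hours": 1}
# ...while a small domestic daytime purchase scores low.
routine = {"amount_usd": 20, "foreign_merchant": 0, "night_hours": 0}
print(is_suspicious(risky), is_suspicious(routine))
```

The point of the sketch is latency: scoring is a handful of multiply-adds next to the transaction data, which is why running the model where the data lives can fit inside a millisecond-scale transaction path.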
The big news here is that the mainframe is an ideal platform for machine learning workloads. Companies will need help keeping up with the onslaught of new services on the mainframe: DevOps becomes a must-have to handle continuous change, new data privacy concerns must be addressed, and machine learning can be applied to operations intelligence on the mainframe itself.
Most mainframe customers have already taken the first step of mobile or web enablement of their existing apps. The next wave of value comes from reusing and recombining existing mainframe applications as APIs. APIs are simply services – e.g., “account open” or “credit check” – that can be combined with newer customer-experience services at scale.
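Here is a small sketch of that recombination idea, assuming two existing mainframe transactions have already been exposed as callable services. The function names and return shapes are hypothetical stand-ins; in practice these would be REST endpoints fronting existing CICS or IMS programs.

```python
# Hypothetical wrappers around existing mainframe transactions exposed as APIs.
# Names and payloads are invented for illustration.

def credit_check(customer_id: str) -> dict:
    """Stand-in for the existing "credit check" transaction."""
    return {"customer_id": customer_id, "score": 720, "approved": True}

def account_open(customer_id: str, product: str) -> dict:
    """Stand-in for the existing "account open" transaction."""
    return {"customer_id": customer_id, "product": product, "status": "OPEN"}

def onboard_customer(customer_id: str, product: str) -> dict:
    """A NEW customer-experience service, composed from the two existing APIs
    without touching the underlying mainframe applications."""
    check = credit_check(customer_id)
    if not check["approved"]:
        return {"status": "DECLINED", "reason": "credit"}
    account = account_open(customer_id, product)
    return {"status": account["status"], "score": check["score"]}

print(onboard_customer("C1001", "checking"))
```

The design point is that the value comes from composition: the proven transactions stay where they are, and new digital experiences are assembled on top of them.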
I’ve spoken with many enterprise clients who have realized that “lift and shift” to cloud simply isn’t justifiable. The risk is too high, as these are mission-essential applications that need to run 24×7. More importantly, by not completely re-architecting apps, companies eliminate a time-consuming transition or migration. Our recent IDC study on the connected mainframe showed that API adopters who used modern IDEs and re-factored existing code into modern apps delivered real value faster.
Finally, the mainframe’s scale, security, and transactional velocity are tailor-made for blockchain. IBM mentioned that there are 50 blockchain projects underway, and many of the clients don’t even know that there is a mainframe behind them; all they know is that they are using a highly secure blockchain network.
The mainframe’s unique selling proposition for blockchain is security: the ability to create secure containers that are unalterable even by systems administrators. In fact, blockchain pilots are happening in the cloud, but, due to security, production blockchain networks are running on mainframe systems. The potential of blockchain adoption will be enormous – driving unprecedented volume, variety, and transactional velocity on the mainframe operating as a hyperscale cloud.
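To illustrate why a blockchain ledger is tamper-evident even to privileged administrators, here is a minimal hash-chain sketch in plain Python – not the actual IBM blockchain stack, just the core mechanism: each block commits to the hash of its predecessor, so altering any past record invalidates every hash after it.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministic SHA-256 over the block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, payload: dict) -> None:
    """Append a block that commits to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "payload": payload}
    block["hash"] = block_hash(block)  # hash computed over prev + payload
    chain.append(block)

def verify(chain: list) -> bool:
    """Recompute every hash; any tampering breaks the chain."""
    prev = "0" * 64
    for b in chain:
        body = {"prev": b["prev"], "payload": b["payload"]}
        if b["prev"] != prev or b["hash"] != block_hash(body):
            return False
        prev = b["hash"]
    return True

chain = []
append_block(chain, {"txn": "transfer", "amount": 100})
append_block(chain, {"txn": "transfer", "amount": 250})
print(verify(chain))          # chain is intact

chain[0]["payload"]["amount"] = 999  # an "administrator" edits history...
print(verify(chain))          # ...and verification now fails
```

A production network adds consensus, signatures, and secure execution environments on top, but this chaining is what makes after-the-fact edits detectable by every participant.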
A few highlights from IBM’s Bridget Van Kralingen: today only a few horizontal processes, such as HR or supply chain, are technology-enabled. Blockchain will be about enabling the “long tail” of back- and middle-office processes across industries that until now were cost-prohibitive to enable with technology. For example, in compliance, which impacts all industries, there will be 300M pages of regulation by 2020 and 40,000 changes per year – not even an army of humans can tackle that complexity. These long-tail processes in every industry require domain expertise, wisdom, and judgment. They need to execute at hyperscale.
So, back to “what’s in a name?” For me, the answer and excitement lies in what’s ahead. Whether we call it mainframe or hyperscale cloud, we are on the cusp of something big and CA is delighted to be part of this ecosystem and journey.
Stay tuned for the next blog where I will go into more detail on hyperscale attributes and how software innovations on mainframe address those attributes.