Machine learning: What if? Or when?

For decades, scientists have talked about “intelligent machines” and “the public” has responded with skepticism and often with fear.

The moment when most of the human race will be confronted with intelligent machines is coming closer every day. Scientists and technologists will need to find better ways to explain what this “intelligence” means, and the rest of us will need to learn how to deal with it.

Luckily, humans are probably the best example of a fast-learning organism. We have the brains to learn from experience, from copying behavior, from reading and absorbing massive amounts of information, and to combine everything we have learned into new behavior. We learn how to walk and to look out for things that could make us fall or that we could bump into. We learn to cycle, and need to process more information faster to avoid falling or getting into an accident. We drive cars and must process information faster still, because more things happen more quickly. And we do all this almost without realizing it. So we will also learn how to cope and live with intelligent machines, especially if we understand what makes these machines so intelligent.

Intelligent machines get smarter by learning as well, just in a slightly different way. When the IT industry started talking about machine learning, most of us were skeptical. It was hard enough to make computers do what we wanted them to do, let alone have them make smart decisions! But today’s computers are so powerful, and many of the available algorithms so advanced, that machine learning is no longer something of the future. We can feed them so much (historical) data that they can start to learn from these events, just as humans do.

Complex IT infrastructures are an easy target for machine learning. Many large servers, a complex network, cloud services, storage devices and more: these environments can benefit greatly from the automation that machines provide. Every activity of every component in this ecosystem is logged in some shape or form: users logging on, a file being moved, a network connection being lost, a disk crashing, a database error, and much, much more.

Millions of activities and events are stored in multiple log files, all waiting to be absorbed by something that can make sense of it all. Big Data in its finest form. All we need is some smart software that absorbs these millions of “events”, filters them and combines them, and we have all the ingredients to start learning… but it can’t be that simple, can it?

The algorithms we need are not just the ones that make sense of all this data, but the ones that learn how to filter out noise (unimportant events) and to distinguish different levels of importance. We also need to identify which events lead to other events (a disk crash leads to a database failure, which then turns into an application error), so the algorithms need to support root cause analysis to identify what came first.
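To make this concrete, here is a minimal sketch of the two ideas above: filtering out “noisy” event types and counting which events tend to precede others within a short time window. The event names, thresholds and data are all invented for illustration; real monitoring tools use far more sophisticated anomaly scoring and causal analysis.

```python
from collections import Counter
from itertools import combinations

# Hypothetical log events: (timestamp in seconds, event type).
# All names and values here are illustrative, not from any real product.
events = [
    (0, "user_login"), (5, "disk_crash"), (7, "db_failure"),
    (9, "app_error"), (12, "user_login"), (100, "user_login"),
    (105, "disk_crash"), (107, "db_failure"), (110, "app_error"),
]

# Step 1: filter noise -- event types that occur very frequently are
# treated as routine and dropped (a crude stand-in for anomaly scoring).
counts = Counter(kind for _, kind in events)
NOISE_THRESHOLD = 3
signal = [(t, kind) for t, kind in events if counts[kind] < NOISE_THRESHOLD]

# Step 2: count how often one event type precedes another within a
# short window -- a first hint at cause-and-effect ordering.
WINDOW = 10  # seconds
precedes = Counter()
for (t1, a), (t2, b) in combinations(sorted(signal), 2):
    if 0 < t2 - t1 <= WINDOW and a != b:
        precedes[(a, b)] += 1

for (a, b), n in precedes.most_common(3):
    print(f"{a} -> {b}: seen {n} times")
```

On this toy data, the routine logins are filtered out, and the sketch reports that disk crashes repeatedly precede database failures and application errors, which is exactly the kind of ordering a root cause analysis starts from.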

And what if we do find events that lead to others? In some cases an expert will need to verify this and approve the “learning”, and perhaps even approve the machine’s proposal to do something that makes the problem go away; especially if the software has discovered that events A & B & C always lead to activity X (which makes the problem go away).
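The “A & B & C always lead to X” check can be sketched very simply: before proposing a rule for expert approval, verify that it held on every past incident. The incident history, event names and remediation labels below are all hypothetical.

```python
# Hypothetical incident history: which events were seen together, and
# which remediation made the problem go away. All names are invented.
history = [
    ({"A", "B", "C"}, "X"),
    ({"A", "B", "C"}, "X"),
    ({"A", "B"},      "Y"),
    ({"A", "B", "C"}, "X"),
]

def rule_always_holds(trigger, action):
    """True if every past incident containing all trigger events
    was resolved by `action` -- the 'always' in 'A & B & C always
    lead to X'."""
    matches = [fix for seen, fix in history if trigger <= seen]
    return bool(matches) and all(fix == action for fix in matches)

# The machine proposes the rule; a human expert still has to approve
# it before the remediation is automated.
proposed = ({"A", "B", "C"}, "X")
if rule_always_holds(*proposed):
    print("Rule holds on all past incidents -- send to expert for approval")
```

Note that the weaker rule “A & B lead to X” would fail here, because one incident with only A and B was resolved differently; that is precisely why the human sign-off in the paragraph above matters.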

But why would we need this in the first place? We need smarter systems for a simple reason: our IT infrastructures are so complex, they change so fast, and we depend on them so much that a specialist, or even a group of specialists, can no longer track and correlate everything, let alone see what caused certain things to happen. Machines are very good at this: patiently collecting millions of events, sifting through them at ultrafast speed and applying algorithms that are far too complex for most humans.

Does this make machines intelligent? It all depends on what you define as intelligence. Can machines (or more specifically, the software) make wrong decisions that lead to even bigger problems? Of course they can, and even faster than humans! But the more we feed them, the more they can analyze and the more they learn. Just like humans, but with more patience, rigor and efficiency.

CA Technologies announced last year that we are already implementing analytics software that helps our customers run their data centers more efficiently. And some of our specialists have already posted a number of interesting articles on this topic: A lifeline for those drowning in data: analytics-driven applications, Machine learning is coming to the mainframe, and Mainframe: platform of choice for machine learning and ops intel.

On April 21-23, CA Technologies is one of the sponsors of the Machine Learning conference in Prague where experts will discuss the possibilities of machine learning. As humans, we will learn how to deal with smarter systems, just as we learned how to create new tools when we discovered how to work with iron, the steam engine and electricity.


Marcel is principal for product marketing EMEA for CA Technologies, mainframe solutions and is a…


  • James Shin

    I wish we could develop an AI algorithm as we did before, nuegent(?), like Google with DeepMind and Facebook with FAIR, so that it can be used in every single enterprise software product CA sells. AI is the future and the present.
