Machine learning: Why augmentation is better than automation

Do machine-generated decisions always lead us to the best outcomes?

This is the second post in this series on machine learning. The first was Good Bots vs. Bad Bots by Jin Zhang.

Machine learning has reached an inflection point

While the fundamental algorithms were developed as long ago as the 1970s, the convergence of two trends is finally enabling machine learning to deliver on its long-awaited potential.

The first is cheaply available mass computing power. The parallelism of multi-core hardware, combined with multi-gigahertz clock speeds, means that gigaflops of computing power are available on a smartphone, teraflops on a server, and petaflops in the cloud. Algorithms that were previously considered too computationally expensive can now scale to large datasets. The second enabling trend is the proliferation of data: more data is being collected, stored and published than ever before.

The combination of available data and computing power enables us to compute answers to almost any question we can imagine.

Machine learning’s fuel source

One of the world’s leading experts on machine learning is Professor Rao Kotagiri from the University of Melbourne, one of CA Technologies’ research partners. When interviewing candidates for a machine learning research project, Rao often asks, “What is the best, most advanced machine learning algorithm?” It’s a trick question. The best algorithm depends entirely on your data and the question you are trying to answer. The data determines the algorithm.

Machine learning algorithms are commoditized. There are literally thousands of publicly available algorithms, along with a proliferation of open source and proprietary machine learning frameworks, such as Weka, Apache Spark, R and Pentaho. While one algorithm may outperform another under a given set of conditions, the data, and the form in which it is fed into the algorithm, are far more fundamental.
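
To make the point concrete, here is a minimal sketch (using scikit-learn purely as a stand-in for any such framework, with synthetic data in place of a real dataset): swapping one algorithm for another is a one-line change, while the data pipeline that feeds them stays the same.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data; in practice, preparing the real feature matrix is
# where most of the effort goes.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest":       RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, model in candidates.items():
    # Same data, different algorithm: swapping estimators is a one-line change.
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy = {scores.mean():.3f}")
```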

But how do we get the insights?

A critical step in machine learning is feature extraction. Raw data is typically a sea of noise; it’s the features that allow us to derive key information from the data. However, deriving an optimal feature set requires a domain expert, someone who understands which characteristics to look for in the data. Armed with a solid feature set, machine learning algorithms can discover the predictive relationship between the features and the target variable.
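
As a hypothetical illustration (the web-server log and all of its column names are assumptions, not from a real project), feature extraction might look like this: raw events are rolled up into the per-minute signals a domain expert actually cares about, and that feature table is what the learning algorithm consumes.

```python
import pandas as pd

# Hypothetical raw web-server events; every column name is an assumption made
# purely for illustration.
raw = pd.DataFrame({
    "timestamp":   pd.to_datetime(["2016-01-01 10:00:05", "2016-01-01 10:00:40",
                                   "2016-01-01 10:01:10", "2016-01-01 10:01:55"]),
    "status_code": [200, 500, 200, 503],
    "latency_ms":  [120, 2300, 95, 1800],
})

# Domain knowledge says per-minute error rate and latency spikes are the
# signals that matter, not the individual rows.
raw["minute"] = raw["timestamp"].dt.floor("min")
grouped = raw.assign(is_error=raw["status_code"] >= 500).groupby("minute")

features = pd.DataFrame({
    "request_count":  grouped["status_code"].count(),
    "error_rate":     grouped["is_error"].mean(),
    "p95_latency_ms": grouped["latency_ms"].quantile(0.95),
})
print(features)   # this feature table is what a learning algorithm would consume
```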

The key ingredient

Which brings us to the most important ingredient of all: the human touch. Nearly every machine learning automation project fails for the same reason: humans are taken out of the loop. While machine-driven decisions may be right 80% of the time, the (sometimes disastrous) consequences of being wrong the other 20% of the time can wipe out the productivity gains. And if the automated decision process is a black box, there is no easy way for human experts to correct its mistakes.
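
A quick back-of-the-envelope calculation shows why. The gain and cost figures below are assumptions chosen only to illustrate the asymmetry, not numbers from the original analysis:

```python
# If a correct automated decision saves a little but a wrong one costs a lot,
# 80% accuracy can still produce a net loss.
accuracy = 0.80
gain_per_correct_decision = 10   # e.g. minutes of analyst time saved (assumed)
cost_per_wrong_decision = 60     # e.g. cleanup after a bad automated action (assumed)

expected_value = (accuracy * gain_per_correct_decision
                  - (1 - accuracy) * cost_per_wrong_decision)
print(f"Expected value per decision: {expected_value:+.1f}")
# -> Expected value per decision: -4.0 (the 20% error rate wipes out the gains)
```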

The best use of machine learning is as a complement to human experts. Let machine learning do what it does best: discover repetitive patterns and automate repetitive tasks. Let the human expert supervise.

The key here is to make the automated decision-making process transparent; visualizations are very helpful for this reason. In an ideal scenario, the human expert focuses on the corner cases, understanding when to override machine decisions, or determining when the machine either has insufficient data or needs additional contextual information. The overridden decisions also help the machine to continually refine its decision-making process. This human intervention, or augmentation of machine learning, is what leads to ideal outcomes.
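
A minimal sketch of this human-in-the-loop pattern might look like the following (the confidence threshold, model and review queue are illustrative assumptions, not a description of any particular product): confident predictions are automated, uncertain ones are escalated to a human expert, and the expert’s overrides become new training data.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data; train on the first 400 examples, then decide on the rest.
X, y = make_classification(n_samples=500, n_features=10, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X[:400], y[:400])

CONFIDENCE_THRESHOLD = 0.9          # below this, defer to the human expert (assumed value)
automated, review_queue = [], []

for i, x in enumerate(X[400:]):
    proba = model.predict_proba(x.reshape(1, -1))[0]
    if proba.max() >= CONFIDENCE_THRESHOLD:
        automated.append((i, int(proba.argmax())))   # automate the repetitive case
    else:
        review_queue.append(i)                       # escalate the corner case to a human
        # the expert's decision would be recorded here and used to retrain the model

print(f"Automated decisions: {len(automated)}, escalated to a human: {len(review_queue)}")
```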

Ideal outcomes

Let’s look at an example of when augmentation led to an ideal outcome. In 2009, airline pilot Captain Chesley “Sully” Sullenberger famously landed an Airbus A320 on the Hudson River, executing what many aviation experts had considered nearly impossible. Captain Sully both relied upon and overrode the aircraft’s automation: he flew the glide manually, letting the A320’s fly-by-wire envelope protection keep the plane from stalling, while judging for himself where and how to put it down.

Captain Sully’s amazing feat was due to his understanding of how best to leverage machine intelligence: knowing when to rely upon it, and when to override it.

Summary

In summary, machine learning has reached a tipping point thanks to inexpensive mass computing power and the proliferation of data. That data is the foundation of any machine learning approach. But while data is the ‘fuel’ that powers machine learning, the approach works best when humans are kept in the loop: letting the machine handle the repetitive tasks, overriding its decisions when needed, and using those overrides to continually refine the automated decision process.


Steve Versteeg is a Research Staff Member with CA Labs, based in Melbourne, Australia. His…
