Machine Learning and data science: Hype or hero?

By Mark on Nov 26

Have expectations for the benefits of Machine Learning and Artificial Intelligence got ahead of themselves?


The fascination with AI and Machine Learning

On re-watching Steven Spielberg’s 2001 movie ‘A.I. Artificial Intelligence’ some 17 years after its release, it is impossible not to judge the film against where AI stood as a technology at the turn of the millennium compared with where it is now. (If you are old enough! Apologies if you are not.)

The movie is an immense achievement, and it is that rarest of things that only someone with a track record of outstanding box office success could ever be allowed to make: a piece of cinematic art that is visually stunning and brain food at the same time.

That said, it is a little overblown. It is far too long, and it drags out the ending. The final impression is that it raises more questions than it answers. But maybe that was the intention? Perhaps it was designed to be impossible to ignore; getting on for two decades later, it remains a talking point.

AI and its related fields of Machine Learning and Deep Learning have been all the rage in recent years. The promise of automation in the shape of smart machines that can replace monotonous human labor is perhaps the most fascinating area of speculation in the tech world.

The unerring accuracy and consistent, repeatable performance of smart machines offer the promise of high quality. In applications such as the self-driving car, this translates into improved safety by eliminating human error and erratic behavior.

However, despite all the promise, the speculation has generated a high level of anticipation. So far, though, the field has been short on knock-out products and services that grab our attention. Here we look at whether AI and Machine Learning have been overhyped.

The peak of inflated expectations

The field of technology underlying AI and Deep Learning is data science. Big Data and data analytics have become significant points of focus for business over the last decade or so: vendors are monetizing their expertise, enterprises are jumping at opportunities to gain competitive advantage, and senior executives are keen to avoid being left behind.

Of course, the leading technology market analyst Gartner has seen technologies and their promises come and go. To help its client base of CTOs understand up-and-coming technologies and put claims and expectations into context, it developed the Hype Cycle analytical tool. This currently places Machine Learning and Deep Learning at the very top of what it calls the Peak of Inflated Expectations.

As evidence of this, consider the CogX event in London in the summer of 2018. Billed as the “Festival of All Things AI” and “the world’s largest AI event for business,” it saw 6,000 delegates participate in a corporate feeding frenzy put on by sponsors including SoftBank, Accenture, IBM and Google. Almost as if pre-empting the objection that AI is overhyped, the event marketers also claimed that the technology goes “beyond the hype” to “deliver real value in business”.

The evidence is building

Stronger signs that all is not proceeding as AI’s promoters would wish can also be seen. The data scientists at the cutting edge of Machine Learning research are beginning to revise their opinions about what the frenzy of activity, and the billions of dollars sunk into AI, might actually yield. In essence, AI may fail to deliver against the highest expectations some are holding.

Among this community there is a sneaking suspicion that Deep Learning is not going to deliver software capable of artificial general intelligence: a program more capable than humans across a wide variety of tasks. That is one of the loftier expectations, but all is not well even for narrower objectives. Some now believe that AI may not be able to produce systems reliable enough for safety-critical tasks like autonomous driving, or financially critical ones like investment decision making.

Other evidence can also be seen in the following:

  • The pace of AI breakthroughs seems to be slowing, and those breakthroughs that do occur seem to require ever-larger amounts of data and computing power
  • Top AI thinkers who were hired to head in-house AI labs, notably Yann LeCun at Facebook, have had the resources to steam ahead for several years, but have recently either moved on or stepped sideways
  • Accidents involving self-driving cars suggest that the complexity of the real world may be beyond the mastery of Deep Learning techniques
  • Testing has shown that self-driving cars frequently encounter situations where they compute low confidence and hand control back to the human driver (see the sketch after this list)
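
To make that last point concrete, here is a minimal Python sketch of a confidence-based handoff. Everything in it, the toy model, the threshold value, the function names, is a hypothetical illustration rather than any vendor’s actual control stack.

```python
from dataclasses import dataclass
import random

# Assumed cut-off below which the car yields control; real systems
# would tune this carefully rather than hard-code it.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class Prediction:
    action: str        # a planned maneuver, e.g. "continue_lane"
    confidence: float  # model's estimated probability the action is correct

def toy_perception_model(sensor_frame) -> Prediction:
    # Stand-in for a real perception/planning stack: returns a maneuver
    # with a made-up confidence score for illustration only.
    return Prediction(action="continue_lane",
                      confidence=random.uniform(0.5, 1.0))

def control_step(sensor_frame):
    """Decide whether the autonomous system keeps control for this frame."""
    pred = toy_perception_model(sensor_frame)
    if pred.confidence < CONFIDENCE_THRESHOLD:
        # The model is unsure about the current scene: disengage and
        # alert the human driver rather than act on a weak prediction.
        return ("human", f"low confidence ({pred.confidence:.2f})")
    return ("auto", pred.action)

if __name__ == "__main__":
    for frame in range(5):
        print(control_step(frame))
```

The design point the sketch captures is that the handoff is only as good as the confidence estimate itself; a model that is confidently wrong never triggers the handoff at all.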

The folly of making predictions about… prediction technology

At the heart of our expectations and fears about what AI, Machine Learning and Deep Learning may or may not deliver is the capability to make predictions. Whether these forms of smart technology are driving cars or diagnosing medical conditions, they are powered by software that has learned from observing situations played out over and over again. That experience lets them compute probabilities and make good guesses about the likely outcomes of new events, information sets or collections of related data points.
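
As a small illustration of that idea, the Python sketch below trains scikit-learn’s logistic regression on synthetic “observations” and then outputs a probability, a good guess, for a new case. The data, the feature, and the decision rule are all invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "repeated observations": one feature, with the outcome more
# likely to be 1 as the feature value grows (a stand-in for situations
# the software has watched play out over and over again).
X = rng.uniform(0, 10, size=(500, 1))
y = (X[:, 0] + rng.normal(0, 1.5, size=500) > 5).astype(int)

model = LogisticRegression().fit(X, y)

# For a new situation, the model returns probabilities, not certainties:
# its "good guess" about the likely outcome.
new_case = np.array([[6.0]])
print(model.predict_proba(new_case))  # e.g. [[0.24, 0.76]]
```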

Should there be a wider acceptance that its potential has been over-inflated, some of the predictions that have formed the bow wave of AI hype may soon start to look absurd. Many media stories have suggested that robots will take over jobs done by people, in some cases on time scales of 10 to 20 years and with the proportion of jobs affected put as high as 50 percent.

In the perfected AI future, driverless cars and trucks, even if not replacing humans entirely, would render certain types of human activity obsolete. This may well be possible, but the key question is when: how far away is this brave new world?

How soon the future?

Amara’s Law essentially states that we should ignore wild claims about the short-term impact of emerging technologies and take a longer-term view. It says:

  • We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

A good example of a technology that follows Amara’s Law is the Global Positioning System (GPS), whose first satellite was launched in 1978 and which eventually grew into a constellation of 24. It was a US military-led initiative (what’s new?), designed to enable weapons payloads to be delivered with great accuracy. After surviving the 1980s and repeated threats of cancelation, it was first used for its intended purpose in 1991, helping to free Kuwait from Iraqi occupation during Operation Desert Storm.

Look at where GPS is now. It is all around us, embedded in all sorts of systems that incorporate geographical positioning information in their datasets. Aircraft navigation, crop seeding, vehicle fleet tracking, and fitness trackers for health and well-being apps are just some of the common applications.

In some ways, GPS was ahead of its time. It had a military application, but it then had to wait for the birth of the Internet and for an ecosystem of GPS-appropriate applications to spring up. You might argue that the Internet itself followed a similar path: ARPANET and TCP/IP had to wait for Tim Berners-Lee to invent the HyperText Transfer Protocol (HTTP) to power the World Wide Web.

Just as taking the long view is the best way to invest in the stock market, the best way to reap the rewards and realize the benefits of a technology is to count in decades rather than anything less. Hype or hero, AI, Machine Learning and Deep Learning are likely to have a very long earn-out time for many of those who have invested vast sums in them.