What’s the biggest barrier to successful artificial intelligence (A.I.) and machine-learning projects?
Earlier this year, Arvind Krishna, IBM’s senior vice president of cloud and cognitive software, suggested that such initiatives tend to fail once companies realize the expense and labor involved in collecting and structuring data for analysis. “And so you run out of patience along the way, because you spend your first year just collecting and cleansing the data,” he told the audience at The Wall Street Journal’s Future of Everything Festival, according to the newspaper.
“And you say: ‘Hey, wait a moment, where’s the A.I.? I’m not getting the benefit.’ And you kind of bail on it,” he reportedly added.
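To make that “collecting and cleansing” labor concrete, here is a minimal sketch of what a single pass of it can look like. It assumes pandas, and the file name, column names, and cleanup rules are all hypothetical; a real project repeats work like this across dozens of messy source systems, which is exactly where that first year goes.

```python
# Illustrative only: a hypothetical fragment of the "collecting and cleansing" work
# Krishna describes, sketched with pandas. File and column names are invented.
import pandas as pd

# Load raw records exported from one of several source systems.
df = pd.read_csv("customer_records_raw.csv", dtype=str)

# Normalize inconsistent column naming across exports.
df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_")

# Drop exact duplicate rows produced by overlapping exports.
df = df.drop_duplicates()

# Parse dates recorded in mixed formats; unparseable values become NaT for review.
df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")

# Standardize free-text categories before any model ever sees them.
df["region"] = df["region"].str.strip().str.title()

# Flag rows missing fields the downstream model will require.
required = ["customer_id", "signup_date", "region"]
needs_review = df[df[required].isnull().any(axis=1)]

print(f"{len(df)} rows cleaned, {len(needs_review)} still need manual review")
```

Multiply that by every table, every vendor export, and every schema change, and a one-year estimate for data prep starts to look optimistic rather than alarmist.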
All the hype around A.I. and machine learning may have deluded a number of companies into believing that such initiatives would quickly yield powerful results. For example, Watson Health, an IBM A.I. initiative, reportedly failed to meet hospitals’ expectations for successful healthcare data analysis.
"Part of Watson for Health's challenges is they were very aggressive with marketing, which is kind of an IBM trait. And, then it came [to] delivering it and they chose oncology, they chose genomics—really tough nuts to crack," Cynthia Burghard, IDC's research director for Value-based Healthcare IT Transformation Strategies, told ComputerWorld in late 2018.
Of course, IBM isn’t the only major company to experience an artificial intelligence failure. Uber’s attempt to build an autonomous-car platform capable of navigating busy city streets came to a sudden halt when one of its self-driving vehicles struck and killed a pedestrian in March 2018. While the company’s autonomous efforts continue, its spokespeople caution that it may take years before true self-driving cars hit the road. (That is a reversal of Uber’s previous, ultra-aggressive position on autonomous technology research.)
Whatever the company’s size, diving into artificial intelligence is a risky proposition in terms of both time and cost. Experts in A.I. and machine learning command high salaries, and they may need costly tools and infrastructure to get their jobs done. Moreover, it’s tempting for many companies, fearful of competitors getting ahead of them, to plunge into an A.I. project without fully considering their ultimate goals.
Indeed, every company contemplating an A.I. project should ask:
- What issue is this effort meant to solve?
- Is the data needed to solve it available?
- What are the ideal outcomes?
But even with those questions answered, there’s no way to predict how an A.I. initiative will actually go; this is still a nascent field. If a company decides to take the plunge, it must exhibit patience—and a healthy respect for the effort it takes to prepare data. And that’s before you get to the truly existential questions, such as whether coding “empathy” into A.I. will make these platforms more “human-like” and effective.
Then there’s the persistent, nagging fear that, if artificial intelligence doesn’t succeed spectacularly, another “A.I. winter” will set in. After all, companies are pouring billions of dollars into A.I. initiatives; if they don’t see results, a lot of that funding could dry up. In a notable blog post, Filip Piekniewski, an A.I. researcher, listed the ways in which artificial intelligence hype has greatly exceeded the reality, including what he saw as a lack of progress at Google’s DeepMind, as well as the limits of deep learning itself (“does not scale,” he concluded). It’s easy to see such barriers overwhelming even the most well-funded initiative.