When Microsoft looks into the near and far future, it sees artificial intelligence, particularly the hottest branch of it, called deep learning. The company has spent billions on A.I., including a spate of acquisitions such as the deep-learning startup Bonsai in late June and Semantic Machines in May. Microsoft CEO Satya Nadella has said that A.I. is the “defining technology of our times.” He also said at an investor conference this spring: “It's going to be A.I. at the edge, A.I. in the cloud, A.I. as part of SaaS applications, A.I. as part of in fact even infrastructure.”
But the big bet on deep learning could go bad if Microsoft fails to invest in other branches of A.I. that may prove more useful. The issue isn’t that deep learning is unimportant; it certainly is important. Rather, there’s a chance it has been oversold and is already bumping up against the limits of its capabilities.
That’s the argument that an increasing number of computer scientists and researchers are making. To understand it, consider briefly how deep learning works: massive amounts of data are fed into a system that then essentially learns on its own, without developers writing programs specific to the task at hand. In one of the simplest applications, a machine could be taught to tell cats from dogs by being fed hundreds of thousands of pictures, each labeled as either a cat or a dog. The premise of deep learning is that such labeled data is all the machine needs to learn to distinguish between them correctly. How it does so is essentially a black box.
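The learn-from-labeled-examples idea can be sketched in a few lines of Python. This is a toy illustration, not real deep learning: instead of raw pixels and many layers, a single logistic unit trains on two hypothetical, made-up features (the variable names, the synthetic data, and the "cat"/"dog" framing are all assumptions for illustration). The point it demonstrates is the one in the paragraph above: the programmer writes no cat-or-dog rules; the system fits them from labeled data.

```python
import math
import random

random.seed(0)

def make_examples(n):
    """Synthetic (features, label) pairs: label 1 = 'cat', 0 = 'dog'.
    The two features are invented stand-ins for whatever a real network
    would extract from pixels."""
    data = []
    for _ in range(n):
        if random.random() < 0.5:   # a "cat": high feature 1, low feature 2
            x = (random.gauss(0.8, 0.1), random.gauss(0.3, 0.1))
            data.append((x, 1))
        else:                        # a "dog": low feature 1, high feature 2
            x = (random.gauss(0.3, 0.1), random.gauss(0.8, 0.1))
            data.append((x, 0))
    return data

def sigmoid(z):
    z = max(-60.0, min(60.0, z))    # clamp to avoid overflow in exp()
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=200, lr=0.5):
    """Fit weights by gradient descent on the logistic loss.
    No task-specific rules are coded; only the labels guide learning."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            p = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = p - y              # gradient of the logistic loss w.r.t. z
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

def predict(w, b, x):
    return 1 if sigmoid(w[0] * x[0] + w[1] * x[1] + b) >= 0.5 else 0

data = make_examples(400)
w, b = train(data)
accuracy = sum(predict(w, b, x) == y for x, y in data) / len(data)
```

The "black box" point carries over even here: the trained weights are just numbers, and nothing in them explains *why* one animal gets one label, only that the fitted boundary separates the examples it was shown.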
Deep learning is being applied to self-driving cars, cancer and new-drug research, and back-office automation, among other tasks. But as The New York Times notes in its article “Is There a Smarter Path to Artificial Intelligence? Some Experts Hope So,” “Some scientists are asking whether deep learning is really so deep after all.” The article goes on to say that “a growing number of A.I. experts are warning that the infatuation with deep learning may well breed myopia and overinvestment now — and disillusionment later.”
The problem, these researchers say, is that deep learning is by its nature a limited technique: it works only on a specific class of problems, those where vast amounts of data are easily accessible and the tasks to be done are clear and well-defined, “like labeling images or translating speech to text,” in the words of the Times.
Because of that, Michael I. Jordan, a professor at the University of California, Berkeley, told the newspaper, “There is no real intelligence there. And I think that trusting these brute force algorithms too much is a faith misplaced.” And New York University professor Gary Marcus warned in an article that “the patterns extracted by deep learning are more superficial than they initially appear.”
For evidence of that superficiality, look no further than Microsoft Office. Microsoft has built A.I. directly into Office, which sounds impressive. But take a close look at what that A.I. supposedly does and you might not be that impressed. It helps power Microsoft’s proofing tools in Word, for example — and I haven’t noticed any difference in those proofing tools before and after A.I. was baked in. In Outlook, A.I. recommends when you should leave for an appointment, which is about as ho-hum a feature as I can imagine. In PowerPoint, an A.I.-powered Designer feature recommends slide designs, which don’t seem like much more than templates. On the positive side, though, A.I. is useful for advanced Excel users, who can use it for extracting insights from complex sets of data.
I don’t doubt that A.I. will improve in the coming years and offer more useful tools. But because of the hype, Microsoft and other companies may be spending their resources on deep learning rather than on less touted A.I. technologies that may be more useful in the long run. That could hurt Microsoft and slow the benefits we all may get from the new technologies. So here’s hoping that the company takes a look at its use of deep learning with a cold, hard eye — and puts the intelligence back into A.I.