Computerworld

Stanford AI Index finds rapid technical progress and industry growth

Australia's Data61 contributed to the inaugural index

The number of academic papers containing the phrase ‘artificial intelligence’ has increased ninefold since 1996, while the number of active (US-based) start-ups developing AI systems has increased 14-fold since the turn of the millennium, according to Stanford University’s inaugural AI Index.

The index looks at 18 metrics in academia, industry, open-source software and public interest, as well as technical assessments of progress toward human-level performance in areas such as speech recognition, question-answering and computer vision.

The Stanford group behind the index, part of a project called the One Hundred Year Study on Artificial Intelligence, or AI100, hopes it will serve as a baseline for tracking progress in the field, much as the S&P ASX 200 tracks the Australian stock market.

“Artificial Intelligence has leapt to the forefront of global discourse, garnering increased attention from practitioners, industry leaders, policymakers, and the general public,” said co-author Russ Altman, faculty director of AI100.

“However, the field of AI is still evolving rapidly and even experts have a hard time understanding and tracking progress across the field. Without the relevant data for reasoning about the state of AI technology, we are essentially flying blind in our conversations and decision-making related to AI,” he added.

Industry boom

As the number of AI-related start-ups has increased, so has venture capital investment in them, which has grown sixfold since 2000 to around US$3.3 billion.

A separate study by Gartner found many vendors were over-egging the artificial intelligence capabilities of their products to cash in on the “gold rush” around the technology.

The number of job openings requiring AI skills has likewise increased, up 4.5 times since 2013, according to analysis of job listing sites Indeed and Monster.

Although the index focuses solely on the US, Indeed told Computerworld that in Australia the site had observed a 17-fold increase in AI jobs over the last two years.

The index also found a rise in the number of times various AI and machine learning software packages have been ‘starred’ on GitHub, a trend visible across all the libraries tracked. The most popular were TensorFlow and Scikit-Learn.
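
For readers curious how such popularity figures can be gathered, a minimal sketch in Python is shown below. It is illustrative only, not the index’s own methodology, and assumes GitHub’s public REST API, whose repository endpoint returns a stargazers_count field.

    # Illustrative sketch: fetch GitHub star counts for two AI libraries
    # via the public REST API. This is not the AI Index's methodology.
    import json
    import urllib.request

    REPOS = ["tensorflow/tensorflow", "scikit-learn/scikit-learn"]

    for repo in REPOS:
        with urllib.request.urlopen(f"https://api.github.com/repos/{repo}") as resp:
            data = json.load(resp)
        print(f"{repo}: {data['stargazers_count']} stars")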

Technical performance

According to the index, the performance of AI systems at detecting objects in an image has surpassed that of humans, at least within the bounds of the ImageNet Large Scale Visual Recognition Challenge. Error rates for image labelling by AI systems have fallen from 28.5 per cent in 2010 to below 2.5 per cent.

The performance of AI systems at recognising speech from phone call audio is now equivalent to that of humans, according to the index. Their ability to find the answer to a question within a document is fast approaching human-level accuracy, although it hasn’t reached it yet.

The index’s authors note that, while AI systems can perform better than humans at very specific tasks – such as playing Go or detecting skin cancer in an image – “these achievements say nothing about the ability of these systems to generalise”.

“While machines may exhibit stellar performance on a certain task, performance may degrade dramatically if the task is modified even slightly. For example, a human who can read Chinese characters would likely understand Chinese speech, know something about Chinese culture and even make good recommendations at Chinese restaurants. In contrast, very different AI systems would be needed for each of these tasks,” Altman said.

The index is funded by private charitable donations as well as Google, Microsoft and Chinese AI-powered content platform Toutiao.