Nvidia attempts to ease the path to deep learning

Nvidia's newly updated Digits software now has a graphical user interface and can build models using up to four GPUs at a time

Nvidia's Digits software provides an easy way to train deep learning artificial intelligence models to perform tasks such as recognizing images of numbers.

Nvidia hopes to bring artificial intelligence to a wider range of applications with an update to its Digits software for designing neural networks.

Digits version 2, released Tuesday, comes with a graphical user interface, potentially making it accessible to programmers beyond the typical user base of academics and developers who specialize in AI, said Ian Buck, Nvidia vice president of accelerated computing.

The previous version could be controlled only through the command line, which required knowledge of specific text commands and forced the user to jump to another window to view the results.

Digits has also been enhanced to run training jobs on more than one processor, allowing up to four GPUs to work together simultaneously on building a learning model. With the work spread across multiple processors, Digits can build models up to four times faster than the first version could.
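
A common way to get that kind of speedup is data parallelism: each GPU trains on its own slice of a batch, and the partial results are combined into a single update. The sketch below illustrates the idea in plain Python, with simulated workers standing in for GPUs; the data and hyperparameters are invented for the example, and a real Digits job would dispatch the slices through a deep learning framework rather than a loop like this.

```python
# Minimal sketch of data-parallel training, the idea behind Digits'
# multi-GPU mode: each worker computes a gradient on its own slice of
# the batch, and the slices' gradients are averaged into one update.
# Plain NumPy workers stand in for GPUs here; all names, data, and
# hyperparameters are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
n_workers = 4                        # analogous to four GPUs
w = np.zeros(10)                     # model weights
X = rng.normal(size=(256, 10))       # one training batch
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=256)

for step in range(200):
    grads = []
    for X_shard, y_shard in zip(np.array_split(X, n_workers),
                                np.array_split(y, n_workers)):
        err = X_shard @ w - y_shard                        # forward pass on this shard
        grads.append(2 * X_shard.T @ err / len(y_shard))   # local gradient
    w -= 0.01 * np.mean(grads, axis=0)                     # combined update

print("mean squared error:", np.mean((X @ w - y) ** 2))
```

Because the four shards are the same size, the average of the shard gradients equals the gradient over the whole batch, which is why the work divides cleanly across processors.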

Nvidia has a vested interest in expanding the use of artificial intelligence, which typically requires heavy computational power. Over the past decade the company has been marketing GPUs, originally designed for powering computer displays, as hardware accelerators that boost computing power for large systems.

Deep neural networks, also called deep learning networks, are software models that help computers recognize objects or other phenomena of interest, and are built through a trial and error process of learning what to look for. In recent years, they have been the basis for a new wave of AI capabilities that have accelerated and refined tasks such as object classification, speech recognition, and detection of cancerous cells. Nvidia first released Digits as a way to cut out a lot of the menial work it takes to set up a deep learning system.
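
That trial-and-error process is an iterative loop: the model makes a guess, measures its error against known labels, and adjusts its weights to shrink the error. The toy sketch below, a single logistic unit distinguishing two 3x3 pixel patterns, shows the loop in miniature; the data and learning rate are invented for illustration, and a real deep network stacks many such units into layers.

```python
# Toy version of the trial-and-error loop at the heart of deep
# learning: guess, measure the error, nudge the weights, repeat.
# A single logistic unit telling a vertical bar from a horizontal
# bar in 3x3 "images" stands in for a real multi-layer network;
# the data and learning rate are invented for the example.
import numpy as np

vertical   = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], float).ravel()
horizontal = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], float).ravel()
X = np.stack([vertical, horizontal])   # two training "images"
y = np.array([1.0, 0.0])               # labels: 1 = vertical, 0 = horizontal

w = np.zeros(9)                        # one weight per pixel
for _ in range(500):                   # repeated trial and error
    p = 1 / (1 + np.exp(-(X @ w)))     # current guesses
    w -= 0.5 * X.T @ (p - y)           # adjust weights to reduce error

print(1 / (1 + np.exp(-(X @ w))))      # guesses approach [1, 0]
```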

One early user of Digits' multi-processor capabilities has been Yahoo, which found the new approach cut the time required to build a neural network for automatically tagging photos on its Flickr service from 16 days to five days.

In addition to refreshing Digits, Nvidia also updated some of its other software to make it more friendly to AI development.

The company updated its CUDA (Compute Unified Device Architecture) parallel programming platform and application programming interface to support 16-bit floating point arithmetic; previously, 32-bit values were the smallest floating point format supported. The smaller format lets developers cram twice as much data into the same memory for modeling. The company also updated its CUDA Deep Neural Network library of common routines to support 16-bit floating point operations.
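
The payoff is easy to see: a 16-bit float occupies half the memory of a 32-bit float, so twice as many values fit on the card, at the cost of some precision. The NumPy snippet below illustrates the trade-off; it stands in for the half-precision type CUDA exposes rather than exercising the CUDA API itself.

```python
# Why 16-bit floats matter: each value takes half the memory of a
# 32-bit float, so twice as much data fits on the card, at the cost
# of precision. NumPy's float16 stands in here for CUDA's
# half-precision type; this snippet does not call the CUDA API.
import numpy as np

values = np.random.rand(1_000_000)     # a million values in [0, 1)

as_fp32 = values.astype(np.float32)
as_fp16 = values.astype(np.float16)

print("float32 size:", as_fp32.nbytes // 1_000_000, "MB")   # 4 MB
print("float16 size:", as_fp16.nbytes // 1_000_000, "MB")   # 2 MB
print("max rounding error:", np.abs(as_fp32 - as_fp16).max())
```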

Joab Jackson covers enterprise software and general technology breaking news for The IDG News Service. Follow Joab on Twitter at @Joab_Jackson. Joab's e-mail address is Joab_Jackson@idg.com
