Google Cloud is launching two new tools to help customers design, launch and keep track of their machine learning algorithms.
Following on from the release of its pre-packaged machine learning use cases under the AutoML brand in January, these two new products - one proprietary, one open source - continue Google's strategy of simplifying and speeding up its customers' adoption of AI techniques.
"Our goal is to put AI in reach of all businesses. But doing that means lowering the barriers to entry," Hussein Mehanna, engineering director for the Cloud ML Platform, wrote in a blog post.
"That’s why we build all our AI offerings with three ideas in mind: make them simple, so more enterprises can adopt them, make them useful to the widest range of organisations, and make them fast, so businesses can iterate and succeed more quickly."
The first product is an alpha release of AI Hub, which is essentially a collaboration platform for AI specialists.
"AI is a team sport," Rajen Sheth, director of product management for Cloud AI, told press on a Google Hangouts call earlier this week. "It involves data engineers, engineers to take the model into production, developers to build and integrate it into an application.
"That all needs to be pulled together to be successful, so these two new products, we think, will significantly help with this."
The AI Hub will provide data scientists and developers with a platform to store components like pipelines, Jupyter notebooks and TensorFlow modules in one private, secure place for more streamlined collaboration.
"This makes it easy for businesses to reuse pipelines and deploy them to production in GCP—or on hybrid infrastructures using the Kubeflow Pipeline system—in just a few steps," Mehanna wrote.
"In alpha, the AI Hub will provide these Google-developed resources and private sharing controls, and its beta release will expand to include more asset types and a broader array of public content, including partner solutions," he added.
During the alpha, AI Hub will be free for selected customers, after which "Google Cloud is actively working with customers and partners to determine the right pricing and monetisation strategy that will be made public at beta launch," a spokesperson said.
The second product is an open source project called Kubeflow Pipelines, designed to help take these resources into production.
This new feature is part of the open source Kubeflow project and is effectively a workbench for composing, deploying and managing machine learning workflows. As an open source solution, Kubeflow Pipelines offers interoperability and the flexibility to experiment with models before deploying them into production.
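To make the idea of "composing" a workflow concrete, the sketch below chains the kind of pre-processing, training and evaluation steps that Kubeflow Pipelines links into a deployable graph. This is a plain-Python illustration only: the step names and data are hypothetical, and real Kubeflow pipelines are defined with the project's own SDK and run as containerised steps rather than local functions.

```python
# Conceptual sketch of a three-step ML workflow of the kind
# Kubeflow Pipelines composes into a deployable graph.
# All names and data here are hypothetical illustrations.

def preprocess(raw):
    """Normalise raw values into the 0-1 range."""
    top = max(raw)
    return [x / top for x in raw]

def train(features):
    """'Train' a trivial model: just the mean of the features."""
    return sum(features) / len(features)

def evaluate(model, features):
    """Score the model: mean absolute error against the features."""
    return sum(abs(x - model) for x in features) / len(features)

def run_pipeline(raw):
    """Chain the steps in order, as a pipeline orchestrator would."""
    features = preprocess(raw)
    model = train(features)
    return evaluate(model, features)

print(run_pipeline([2, 4, 6, 8, 10]))
```

In Kubeflow Pipelines each of these steps would be an independent, reusable component, which is what allows the same pipeline to run on GCP or on hybrid infrastructure as the article describes.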
"Organisations like Cisco and NVIDIA are among the key contributors to this open source project and are collaborating with us closely to adopt Kubeflow pipelines," Mehanna wrote.
"NVIDIA is already underway integrating RAPIDS, a new suite of open source data science libraries, into Kubeflow. The RAPIDS library leverages GPUs to provide an order of magnitude speed-up for data pre-processing and machine learning, thus perfectly complementing Kubeflow."
Kubeflow is already available on GitHub.
Early adopters will naturally be organisations with a sizeable machine learning/data science function, and Sheth expects them to come predominantly from the retail, financial services, manufacturing, healthcare and media sectors.
One potential early adopter is the media company Meredith Corporation, which is already using Google machine learning tools to automate content classification.
As part of Mehanna's blog post, Alysia Borsa, chief marketing and data officer at Meredith Corporation said: “By using Natural Language and AutoML services to apply our custom universal taxonomy to our content, we’re able to better identify and respond to emerging trends, enable robust detailed targeting and provide our audience with more relevant and engaging experiences.”
Sheth explained that this is an early release of the product and that the "goal is to get a key set of early customers giving us feedback and continuing to iterate and make this the central part of our AI strategy."