Computerworld

Intel reveals Big Data's dirty little secret

To get value from Big Data, enterprises need systems that require less assembly and expertise to run, an Intel executive said.

Companies are spending billions on tools and engineering to analyze Big Data, though many are hampered by one little problem: they still don't know what to do with all the data they collect.

"This is the dirty little secret about Big Data: No one actually knows what to do with it," Jason Waxman, an Intel vice president and general manager of the company's cloud platforms group, said Thursday in a webcast for investors.

"They think they know what to do with it, and they know they have to collect it, because you have to have a big data strategy. But deriving the insights from Big Data is a little harder to do," he said.

Big Data is all about collecting large amounts of sensor or process data, the analysis of which can lead to insights into customer behavior and point the way to improvements in operational efficiency.

Intel is interested in the Big Data market because Big Data systems will require lots of processor-driven hardware, preferably Intel's.

Today, Big Data sparks about $13 billion a year in IT spending, a figure Intel estimates will balloon to $41 billion by 2018, with at least $2 billion of that money earmarked for hardware.

To get value from Big Data, enterprises must get past a number of hurdles, Waxman said.

The company talked to a number of organizations to learn more about their current and anticipated use of Big Data. It found that the number-one challenge is figuring out how to extract value from the data.

It's a demanding task. Organizations need the right talent to assemble and run Big Data systems, which requires skills in statistics and analytical reasoning in addition to the more usual programming and system administration.

"The ability to do this all together is pretty rare," Waxman said.

Intel has undertaken a number of initiatives to help organizations start to get value from all of their information.

One is finding and highlighting successful Big Data operations. When a retailer, for instance, finds an effective way to improve the customer experience through Big Data, Intel documents the operation "to help more people replicate that," Waxman said.

Another big challenge is making Big Data systems easier to deploy. Right now, organizations are assembling these systems piece by piece, which can involve a lot of configuration and integration.

"Instead of having people write a bunch of programs and stitch together big computers, we need to find a way to make it easier for people to deploy" Big Data systems, Waxman said.

To this end, Intel has invested in a number of Big Data software providers. Last year, Intel invested $740 million in Cloudera, which offers a commercial distribution of the Hadoop data processing platform. Together, the companies worked on a roadmap for Cloudera software that will take advantage of the advanced features in Intel's processor architectures.

Despite the current popularity of cloud-based software services, Waxman predicted that most companies will want to run their Big Data operations in-house rather than hand off their data and analysis to third-party services.

Waxman recalled talking with an executive at a financial firm who confided in him about a recent meeting with IBM, which offered its Watson cognitive computing services. The executive found the technology intriguing but expressed concern about using a proprietary service, as well as about handing over data to a company that could be a competitor at some point.

"People want to own their own data. If they give away their data, they don't have much of a company left," Waxman said.