Data, the subject everyone loves to love, has become a board-level agenda item, but organisations often make the mistake of focusing on the upside without giving the same level of attention to potential downsides.
In a recent report, Ovum claimed that “…if data has the power to reshape business positively, it also has the power to destroy it” - an assertion it stands by.
As with the treatment of employees or supply chains, it is time to consider the ethics of what data is retained and how it is used.
Defining “data ethics” is challenging, because there is no one-size-fits-all explanation - organisations will take many different views on the subject.
To put it simply: If an organisation wouldn’t want the public to know about a particular data practice or analytical process, it needs to reexamine its ethical footing.
News outlets are increasingly interested in stories pertaining either to breaches in data security or privacy of data (of course, the two are often related).
Whether it’s the Snowden leaks or the failure of a high-profile corporate website, the result is the same: negative reaction.
Ovum believes the momentum behind these stories is only likely to continue, as an ever-greater proportion of business is conducted in a potentially exposable fashion.
This is not just a function of the cloud; it has begun to raise a question once confined to science fiction: “Just because we can, should we?”
The perfect storm of technology, analytical technique, and data availability has reached the point where questions about who should have access to data and its analysis - and even whether that analysis should be conducted at all - have come to the fore.
Many proponents will highlight the benefits of data analysis, yet are reticent on the subject of its potential drawbacks.
The other issue here is a matter of perception versus reality. Data professionals and the world at large have a very different understanding of technology and its limitations.
The rise of cognitive technologies is a prime example.
Recent warnings about the possible negative outcomes of a general artificial intelligence risk being confused with real-world deployments of context-specific cognitive technology (IBM Watson, for example) and machine learning (the machine learning framework in Spark, for example). Both are careful applications of new technology: the former infuses contextual intelligence into decisions, the latter automates existing processes.
The critical issue, however, stands.
The general public doesn’t necessarily have a good understanding of how the technology may be deployed today.
For these reasons, and many more besides, enterprises looking to benefit from the use of existing and new data sources and technologies - and the vendors who enable that process - need to be cautious.
Ovum stands by a core assertion: If data holds the potential to benefit many, it also has the potential to harm many, whether as an unintended outcome or through deliberate misuse.
The logical reduction here is simple: Data and its potential benefits have become a board-level agenda item; therefore, equal attention must be paid to the conceivable downside.
The corporate responsibility agenda just got bigger, and the best practices for managing it are still developing.
Ovum suggests that organisations take two steps in the near term:
First, establish clear lines of data responsibility. Data governance isn’t new, but many of the processes and structures created to support and enforce it are underused and poorly understood.
Second, apply a simple test to new data and analytics projects by asking the question, “How comfortable would the company be if this project appeared in the news headlines tomorrow?”