Computerworld

Why businesses will have to explain their AI

Experts believe that organisations will need to be able to explain decisions made by AI systems
  • Stuart Corner (Computerworld New Zealand)
  • 21 February, 2018 14:42

The algorithms underpinning today's artificial intelligence and machine learning systems are highly complex and often proprietary, but AI experts say data protection legislation being introduced in Australia and Europe will require any company doing business in those regions to explain decisions made using AI.

Habib Baluwala, newly appointed data scientist at Auckland-based SAP consultancy Soltius, said the recent passing of the General Data Protection Regulation (GDPR) in the European Union would require all organisations making decisions using AI or data science to explain those decisions. New Zealand organisations dealing with international clients based in the EU would need to comply with the GDPR and provide transparency around the algorithms used to make those decisions, he said.

Similar views were expressed today in Sydney by Richard Kimber, speaking at the launch of his AI startup, Daisee.

“Regulators are getting interested in AI,” he said. “Users will have to justify their AI-based decisions, and the GDPR will require AI to be explainable.”

Soltius’ general manager, Practice, Andrew Roberts, said the hiring of Baluwala reflected the company’s evolution into predictive analytics after years of delivering analytics and reporting solutions.

“We’ve had a history of delivering analytics and historical reporting to customers. The idea of bringing predictive analytics, internet of things, artificial intelligence and machine learning aspects to our customers, through Habib and the people we’ll bring in to follow Habib in the future, is really exciting.”

Baluwala also said that bias within AI was becoming an increasingly hot topic globally.

“AI is used to predict everything from credit card fraud to preferred cancer treatment, but companies fail to realise that AI is only as good as the data that is used to train it,” he said. “Quality assessment and control of data used in AI systems should be a major focus for enterprises seeking neutrality in their AI systems.

“Courts in America are currently using AI algorithms to make decisions on how long a criminal should be jailed for. Unfortunately, these algorithms trained on data from previous court decisions might include racial and ethnic biases that can lead to tainted decisions.”

Baluwala said a major trend this year would be a shift in the way companies communicate with customers, with chatbots playing an increasingly dominant role.

“Most companies have been using call centres and other mediums to interact with customers, but chatbots can execute the same tasks with greater efficiency, no waiting times, and significantly reduced overhead.”

At Spark NZ’s half-year results announcement today, CEO Simon Moutter said the company now had 35 bots performing automated and sometimes very complex tasks, from managing security functions to proactively resolving broadband faults.

Spark first hinted at its use of bots in June 2017, when Dr Claire Barber, CDO of Spark Platforms, revealed that Spark had been using ‘Tinkerbot’, “a first edition artificial intelligence robot trained by our customer-facing engineers, proactively looking for customers experiencing poor broadband performance and diagnosing and resolving their issue.”