Law Foundation report calls for oversight of Government use of AI

Report proposes new regulatory agency

A new report from the New Zealand Law Foundation warns against the unregulated use of artificial intelligence algorithms by government, saying that in other countries algorithms have led to troubling outcomes. It is calling for the establishment of a regulatory/oversight agency.

It says such an agency would work with individual government agencies that intend either to introduce a new predictive algorithm or to use an existing one for a new purpose.

The recommendation is the conclusion of a foundation-funded report prepared by the University of Otago’s Artificial Intelligence and Law in New Zealand Project (AILNZP).

The report considers several possible models for a regulatory agency, but offers no detailed proposal as to the form it should take.

“At present, there are very few international examples from which to learn, and those which exist (such as the UK’s CDEI) are in very early stages,” it says.

“We would welcome the opportunity to discuss this further with government and other regulatory agencies, and to contribute to the next stage of discussion about this.”

The report, Government Use of Artificial Intelligence in New Zealand, says predictive algorithms such as RoC*RoI in the criminal justice system have been in use for two decades, but the increasing use, power and complexity of these tools present a range of concerns and opportunities. The primary concerns are accuracy, human control, transparency, bias and privacy.

The study points to examples from other countries where algorithms have led to troubling outcomes. “In the USA, an algorithm that had been used for years in the youth justice system turned out never to have been properly tested for accuracy,” it says.

“In other cases, there has been concern about producing racially biased outcomes. The COMPAS algorithm, for instance, has been widely criticised for overstating the risk of black prisoners reoffending, compared with their white counterparts – an outcome that can result in them being kept in prison for longer.”

Report co-author associate professor Ali Knott said that if the information fed to an AI system was based on previous human decisions, its outputs could be tainted by historic human biases.

“There’s also a danger that other, innocent-looking factors - postcode for instance - can serve as proxies for things like race,” he warned.
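To make the proxy effect concrete, the following sketch trains a simple risk model on synthetic data. The model is never shown the protected attribute, but because its training labels reflect biased historical decisions and postcode correlates with group membership, the bias flows into the model’s scores through the postcode feature. Every name and number below is invented for illustration; nothing here comes from the report or from any real government tool.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 20_000

    # Hypothetical population: 'group' is a protected attribute, and
    # postcode correlates with it (e.g. through residential clustering).
    group = rng.integers(0, 2, size=n)
    postcode = np.where(group == 1,
                        rng.normal(1.0, 1.0, n),
                        rng.normal(0.0, 1.0, n))
    prior_offences = rng.poisson(1.5, n)

    # Historical labels come from past human decisions that were biased
    # against group 1, over and above the legitimate factor.
    logit = 0.6 * prior_offences + 0.8 * group - 1.5
    labels = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    # Train WITHOUT the protected attribute...
    X = np.column_stack([prior_offences, postcode])
    model = LogisticRegression().fit(X, labels)

    # ...yet people with identical histories are scored differently by
    # group, because postcode carries the signal the biased labels reward.
    scores = model.predict_proba(X)[:, 1]
    one_prior = prior_offences == 1
    for g in (0, 1):
        mask = one_prior & (group == g)
        print(f"group {g}, one prior offence: mean score {scores[mask].mean():.3f}")

In this setup the model assigns postcode a positive weight even though postcode has no causal link to the outcome, which is why simply dropping the protected attribute from a model’s inputs is not enough to remove historic bias.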

Another report co-author, professor James Maclaurin, said accuracy, transparency, control and bias were all causes for concern, but New Zealand was faring better than many countries.

“Unlike some countries that use commercial AI products, we’ve tended to build our government AI tools in-house, which means we know how they work. That’s a practice we strongly recommend our government continues.”

However, a third co-author, associate professor Colin Gavaghan, said understanding how AI code worked was not sufficient protection against bias, and it was for this reason the report recommended that New Zealand establish a new, independent regulator to oversee the use of algorithms in government.

“We already have rights against discrimination and to be given reasons for decisions, but we can’t just leave it up to the people on the sharp end of these decisions to monitor what’s going on,” Gavaghan said.

“They’ll often have little economic or political power. And they may not know whether an algorithm’s decisions are affecting different sections of the population differently.”

The report also warns against “regulatory placebos” – measures that create the impression of safety without actually providing any.

Gavaghan cited human oversight of algorithms as one example. “There’s good evidence that humans tend to become over-trusting and uncritical of automated systems – especially when those systems get it right most of the time,” he said. “There’s a real danger that adding a human ‘in the loop’ will just offer false reassurance.”

The report recommends that predictive algorithms used by government, whether developed commercially or in-house, must: feature in a public register; be publicly inspectable; and be supplemented with explanation systems that allow lay people to understand how they reach their decisions.

It says their accuracy should also be regularly assessed, with these assessments made publicly available.
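The report does not prescribe a format for these assessments. As a hypothetical illustration, the sketch below shows one shape a regular, publishable check could take: accuracy and false-positive rates reported overall and per population group, so that the uneven impacts Gavaghan describes would be visible rather than hidden in an aggregate figure. The function name, inputs and toy data are assumptions, not anything specified by the report.

    import numpy as np

    def accuracy_report(scores, outcomes, groups, threshold=0.5):
        """Summarise a predictive algorithm's performance overall and per
        group. scores: predicted probabilities; outcomes: observed 0/1
        results; groups: a group label per person (all hypothetical)."""
        preds = scores >= threshold
        rows = [("overall", np.ones(len(outcomes), dtype=bool))]
        rows += [(f"group {g}", groups == g) for g in np.unique(groups)]
        lines = []
        for name, mask in rows:
            acc = (preds[mask] == outcomes[mask]).mean()
            # False-positive rate: flagged high-risk but did not reoffend.
            neg = outcomes[mask] == 0
            fpr = preds[mask][neg].mean() if neg.any() else float("nan")
            lines.append(f"{name:<8} n={mask.sum():<6} accuracy={acc:.3f} FPR={fpr:.3f}")
        return "\n".join(lines)

    # Toy usage with synthetic numbers, purely for illustration.
    rng = np.random.default_rng(1)
    groups = rng.integers(0, 2, 1000)
    scores = rng.uniform(0, 1, 1000)
    outcomes = rng.binomial(1, scores)
    print(accuracy_report(scores, outcomes, groups))

Breaking the false-positive rate out by group matters because overall accuracy can look healthy while one group bears most of the erroneous high-risk flags.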

The report is the outcome of phase 1 of the New Zealand Law Foundation’s Artificial Intelligence and Law in New Zealand Project; the foundation said the report had received the lion’s share of the $4342k in funding distributed under the project.

OECD members adopt AI principles

On 22 May the OECD’s 36 member countries, along with Argentina, Brazil, Colombia, Costa Rica, Peru and Romania, signed up to the OECD Principles on Artificial Intelligence at the Organisation’s annual Ministerial Council Meeting.

The OECD AI Principles state that:

- AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being;

- AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards;

- There should be transparency and responsible disclosure around AI systems to ensure that people understand when they are engaging with them and can challenge outcomes;

- AI systems must function in a robust, secure and safe way throughout their lifetimes, and potential risks should be continually assessed and managed;

- Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.