Govt use of algorithms under scrutiny
A new independent agency that oversees the use of algorithms in government decision-making is required to counter unintended bias. That's the recommendation from researchers at the University of Otago's Artificial Intelligence and Law in New Zealand Project (AILNZP).
They have this week released a report, funded by the NZ Law Foundation, entitled 'Government Use of Artificial Intelligence in New Zealand.' According to its authors Associate Professor Colin Gavaghan, Associate Professor Ali Knott and Professor James Maclaurin, New Zealand is a world leader in this area, but "measures are needed to guard against their [algorithms'] dangers."
"On the plus side, AI can enhance the accuracy, efficiency and fairness of decisions affecting New Zealanders, but there are also worries about things like accuracy, transparency, control and bias," says Maclaurin.
Knott says that while computer programs are not inherently prejudiced, the data that is fed into a system can be, and that means the outcomes could be "tainted by historic human biases."
"You can't check or correct a decision if you can't see how it was made," says Knott. "But in some overseas cases, that's been impossible, because the companies who design the algorithms won't reveal how they work."
New Zealand has a better track record than some countries because, according to the authors, the government has mostly built its AI tools in-house, "which means we know how they work. That's a practice we strongly recommend our government continues," Knott says.
A local example of how prejudice can be unwittingly baked into an algorithm is cited in the report. The authors discuss the Youth Offending Risk Screening Tool (YORST), which, although not strictly an algorithm, has been used by the New Zealand police since 2007. It came under scrutiny following the National Party's 2017 proposal to use YORST to identify young offenders for "boot camps". Here is the relevant extract from the report:
The YORST is not a "predictive model", as we define the term, because it isn't trained on data using automated means. And it certainly seems unlikely that anybody would count it as a form of artificial intelligence. Yet some of the ethical issues it raises are similar to those associated with bona fide predictive models that provide much more sophisticated automated support for decisions in government… One issue was that while the YORST does not use ethnicity as a variable, many of the variables it employs, while predictively accurate, are also proxies for ethnicity. Māori youth are more likely to come from families that have been subject to Child Youth and Family notifications. They are also more likely to live in poor neighbourhoods. So while the YORST follows accepted practice by focusing on the causes of criminal behaviour rather than on ethnicity, the proposed policy seemed likely to result in boot camps disproportionately populated by Māori youth. And while the stated aims of the policy were to turn the lives of young people away from crime, in the short term it seemed likely to exacerbate rather than alleviate existing racial inequality in New Zealand.
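The proxy effect the report describes can be shown with a few lines of code. The sketch below is purely illustrative: the groups, deprivation scores and selection threshold are invented for the example, and it has no connection to the actual YORST or its variables. It shows how a scoring rule that never looks at group membership can still select disproportionately from one group when its input variable correlates with that group.

```python
# Illustrative sketch only: hypothetical data, not the actual YORST.
# A risk score that never uses group membership can still select
# disproportionately from one group when its input is a proxy for it.
from collections import Counter

# Hypothetical population of (group, deprivation_score) pairs.
# Group "A" members are more likely to have high deprivation scores,
# mirroring the report's point about proxy variables.
population = (
    [("A", 8)] * 30 + [("A", 3)] * 10 +  # group A: mostly high deprivation
    [("B", 8)] * 10 + [("B", 3)] * 30    # group B: mostly low deprivation
)

def risk(person):
    """'Risk model' that uses only deprivation, never group membership."""
    _, deprivation = person
    return deprivation

# Select everyone above a (hypothetical) risk threshold for intervention.
selected = [p for p in population if risk(p) >= 5]
counts = Counter(group for group, _ in selected)

# The population is split 50/50 between the groups, yet 75% of those
# selected are from group A, even though the model ignored group labels.
print(counts)
```

Running this prints a count of 30 selected from group A against 10 from group B, despite an evenly split population: the deprivation score does the work that an explicit ethnicity variable would have done.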
This reminded me of the Law Commission's recent study into the use of DNA in criminal investigations and its conclusion that "the collection and retention of DNA samples and profiles has a disproportionate impact on Māori… the data shows that Māori are already disproportionately represented on the DPD (DNA Profile Databank). This is particularly the case for rangatahi Māori."
The authors of that report also concluded that an independent agency was required to ensure oversight of a very powerful application of new technology. Is there an opportunity for a cross-functional agency that ensures we can, as much as possible, guard against the prejudices of the old analogue world infecting the ways of the new digital world?
You can check out more about the 'Artificial Intelligence and Law in New Zealand' research here.