The Human Rights Commission’s call for a pause on the development of Facial Recognition Technology and the placing of guardrails around the development of other AI products could be the kickstart the Australian tech sector desperately needs.
While Australia plays perpetual catch-up with the tech superpowers of the US and China, scrounging for government support and celebrating the odd home-grown dotcom billionaire, the Commission has been posing a more fundamental question: is all AI the same?
The answer, according to Commissioner Ed Santow and his ‘Human Rights and Technology’ report released late last week, is a resounding ‘no’.
Instead, his report sets out to create an indigenous, locally produced form of automated decision-making and artificial intelligence, anchored in Australian laws and expressing Australian values.
The starting point of this work is the proposition that algorithms that build on their own logic are not value-free.
Embedded in every AI algorithm are the biases of its coders: the context of the information they build their decision chains from, the questions they choose to ask, the faces they choose to see.
For example, a bank building out a model from historical data may discover women or indigenous people are more likely to default on their loans, the product of deep-seated historical and systemic inequities. Does this make an algorithm that preferences white males an asset to the business? Or does it just nudge that business away from thousands of customers who do not fit the stereotype of a bad risk?
Santow argues only a human, employing deductive reasoning, should make that call. Indeed, he argues that all AI decisions should be explainable and, when it comes to the government exercising power over citizens, directly attributable to an identified member of the species.
Under this model, the logic of so-called robodebt, where people were pursued and fined for overpayment of welfare based on the outputs of a flawed algorithm, could not occur again.
One of the most egregious uses of artificial intelligence is so-called ‘Facial Recognition Technology’, which, according to recent studies in the UK, doesn’t get the recognition bit right most of the time, especially when the faces are brown or black.
While the Chinese Government rolls out its Social Credit System and western law enforcement agencies build massive banks of images for street identification of terror risks, it is timely to ask: should this technology be part of our crime-fighting arsenal?
Similar concerns abound when the technology is applied in a commercial setting. Do we really want algorithms that can identify us on camera, then process and monetise our buying habits or, worse, our political convictions?
Santow’s recommendation when it comes to Facial Recognition is to hit pause while we think through not just the processes but the very purpose of this technology.
But his broader recommendations around how AI should work in accordance with the existing laws of the land are the real game-changer in his report.
Too often new technology is built on the run, with vulnerable members of the population corralled into a beta-test. Santow’s proposition is that AI needs to comply with Australian discrimination law from the outset. This would require AI deployed by both government and corporations to be justifiable if challenged. And that means they can no longer be secret.
If adopted, this would bake the Australian value of fairness into our technology framework from the outset: not as an ethical feel-good initiative or a CSR nice-to-have, but as a legal obligation.
This would create a distinctly Australian technology framework that Australia could take to the world. Recognising we can’t compete on scale or scope, we could compete on values.
The US’s AI industry regards users as inputs, to be commodified and monetised. The Chinese surveillance state regards individuals as variables to be sorted, managed and controlled.
But imagine an Australian AI exported to the world, to nations that yearn for something more than the binary of surveillance capitalism or state surveillance: a sort of liberal renaissance embedded in lines of code.
As the Swiss have their watches and the Danes their furniture, maybe Australia could have its AI, built with fairness baked in, delightfully designed, rigorously engineered, embedding all that is good about us in the algorithm.
It could be exported to governments and businesses around the globe to create more robust algorithms and help avoid having to make the choice between systems anchored in state surveillance or in surveillance capitalism.
The instinct of business is always to push back against government regulation as red tape that will stifle innovation. But the Santow Report should be seen instead as the guardrails that could turbocharge Australian technology onto the world stage.
Sometimes doing the right thing can also mean doing the smart thing. Aussie, Aussie, Aussie? AI! AI! AI!