The Australia Institute’s Centre for Responsible Technology made a submission to the Federal Government’s consultation on Safe and responsible AI (artificial intelligence) in Australia. To make AI safer and more responsible, the Australia Institute recommends:
- The Australian Government require transparency from AI product owners and model owners (namely technology companies like OpenAI, Google, Meta and Amazon) about which datasets are used to train their AI, and where those datasets are sourced from.
- The Australian Government adopt strong data privacy protections, and explicitly include AI technologies in the Australian Privacy Act, ensuring effective consent, data minimisation and purpose limitations.
- The Australian Government require compensation for copyright holders and owners of any data used to train AI technologies – including authors, artists, musicians, writers, journalists, programmers, and any other original copyright holders.
- The Australian Government establish a system of accountability for privately funded AI initiatives, including any research bodies, practices, training hubs, and networks funded by technology companies with a vested interest in a positive return for their own organisations.
- The Australian Government develop a risk register for AI and, on that basis, impose a moratorium on the applications of AI identified as most harmful.