The government is considering mandatory rules to label and watermark artificial intelligence systems and content as part of measures to protect against the risks and harms posed by the technology.
As part of its interim response to consultations on the use of AI, the government is talking with industry about voluntary standards and codes, as well as the possible need for "mandatory guardrails" covering product testing, compulsory labelling and watermarking, developer certification and organisational accountability.
Industry Minister Ed Husic said consultations with industry and the community showed that while Australians considered AI technology to hold immense potential, "[they] want to see the risks identified and tackled".
"We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI," Mr Husic said. "We want safe and responsible thinking baked in early as AI is designed, developed and deployed."
The minister said an expert advisory group would be established to help develop options for mandatory guardrails for the industry.
The government was also closely monitoring the approach taken by other countries and regions, including the European Union, the United States, Canada and the United Kingdom.
The government's response to the Safe and Responsible AI in Australia discussion paper is on the Department of Industry, Science and Resources website.