Australia asks whether ‘high-risk’ AI should be banned in surprise consultation

The Australian government has unexpectedly announced an eight-week consultation to consider whether “high-risk” artificial intelligence tools should be banned.

Other regions, including the United States, the European Union, and China, have also taken steps in recent months to understand and potentially mitigate the risks associated with the rapid development of AI.

On 1 June, Minister for Industry and Science Ed Husic announced the release of two papers: a discussion paper on “Safe and Responsible AI in Australia” and a report on generative AI from the National Science and Technology Council.

The papers came alongside a consultation which will run until 26 July.

The government is seeking feedback on how to support the “safe and responsible use of AI” and is asking whether it should rely on voluntary approaches such as ethical frameworks, whether specific regulation is needed, or whether a combination of both is required.

A map of options for potential AI governance along the “voluntary” to “regulatory” spectrum. Source: Department of Industry, Science and Resources

One question in the consultation asks directly whether any high-risk AI applications or technologies should be banned altogether, and what criteria should be used to identify the AI tools that warrant a ban.

A draft risk matrix for AI models was included in the discussion paper for feedback. As examples, it classified AI in self-driving cars as “high risk,” while an AI tool used for a purpose such as creating patient medical records was deemed “moderate risk.”

The paper highlighted “positive” uses of AI in the medical, technology, and legal industries, but also its “harmful” uses, such as deepfake tools, the creation of fake news, and cases where AI chatbots have encouraged self-harm.

The bias of AI models and “hallucinations” – nonsensical or false information generated by AI – were also highlighted as problems.

Related: Microsoft’s CSO says AI will help people flourish, yet signs doomsday letter anyway

The discussion paper claims the rate of AI adoption in the country is “relatively low” because of “low levels of public trust.” It also points to AI regulation in other jurisdictions and Italy’s temporary ban on ChatGPT.

Meanwhile, the National Science and Technology Council report said Australia has some advantageous AI capabilities in robotics and computer vision, but its “core fundamental capacity in [large language models] and related areas is relatively weak,” adding:

“The concentration of generative AI resources within a small number of large multinational and mostly US-based technology companies poses potentials [sic] risks to Australia.”

The report also discussed global AI regulation, gave examples of generative AI models, and said they will “likely impact everything from banking and finance to public services, education and creative industries.”

AI Eye: 25,000 traders bet on ChatGPT’s stock picks, AI sucks at dice, and more
