Tech Policy Design Centre Hopeful About Australia’s Artificial Intelligence Potential

Amid the rapidly changing social and technical landscape of artificial intelligence (AI), the Tech Policy Design Centre’s podcast Tech Mirror, hosted by Professor Johanna Weaver, today released the episode Beyond The Pause: Australia’s AI Opportunity, featuring Bill Simpson-Young, CEO of Gradient Institute, and Dr Tiberio Caetano, Chief Scientist at Gradient Institute – two of Australia’s leading independent AI technologists.

“Minister Ed Husic has expressed frustration about people conflating large language models with artificial intelligence. He is right; we must have a nuanced understanding of what we are regulating if we are going to regulate AI well,” Professor Weaver said.

By the end of the episode, listeners will be able to distinguish between AI and machine learning; between frontier models, foundation models and large language models; and between narrow and general intelligence.

Both signatories of the Pause Letter published in March 2023, Mr Simpson-Young and Dr Caetano reflect on the proliferation of AI into public consciousness and the steps that led them to sign the letter.

“At the time there was no widespread recognition of the potential issues of large language models. In my judgement, based on scientific research and based on the reaction of reputable colleagues who were legitimately concerned about the issue, it was very clear to me that this was a matter that needed to be brought to the centre of our political discourse. It was about sounding the alarm,” Dr Caetano said.

At the time of signing, the Pause Letter was controversial, criticised as too long-sighted by those who argued the priority should be immediate, short-term risks.

The Pause Letter “was misreported quite a bit. I think it achieved its goal of raising attention about the risks. And whether or not it was ever pragmatic to pause, I think it did a critical job of getting the message out,” Mr Simpson-Young said.

“We shouldn’t let the current harms mean that we don’t focus on tomorrow’s harms,” Mr Simpson-Young said, highlighting the work Gradient is doing to operationalise Australia’s AI Ethics Principles, as well as Gradient’s engagement in advance of the UK AI Safety Summit, as examples of how Gradient is addressing both current and imminent risks.

Ahead of the UK AI Safety Summit on 1–2 November at Bletchley Park, Mr Simpson-Young argues that Australia is well positioned to be a global leader in responsible AI development and regulation.

“There is an opportunity to do something now. The sooner it’s done, the easier it will be. We need the establishment of a global committee to work on an ongoing basis on the outcomes of the summit and design the way forward. Australia is heavily involved in this, and a southern hemisphere perspective can lead the way,” Mr Simpson-Young said.

Professor Weaver, formerly Australia’s chief cyber negotiator at the United Nations, reflects that “international agreements are negotiated when there is a mutual desire among nations to constrain.”

“Right now, my observation is that the conditions are ripe for an international AI agreement, and Australia is well positioned to take a leadership role. Geopolitically, the US and China are seeking to constrain each other, technical experts are sounding the alarm, we have something to constrain (frontier AI, not all AI), and the public is demanding action.”

“Australia has significant scientific expertise in AI, but we are not considered to have ‘skin in the game’ commercially in the same way as the US, China, the EU or the UK. Australia has a well-established reputation as an honest broker in cyber diplomacy. We have politicians that understand the issues. We have experts (technical and diplomatic) with established relationships and the ear of great powers. Few countries have this combination – and many are rightly looking to Australia for leadership.” Professor Weaver said.

Liability is another element of the AI discussion that, surprisingly, has not had a strong uptake in the public conversation. “Liability ensures organisations and developers are incentivised to build safe AI responsibly, and will increasingly be seen as an important tool in domestic and international regulation of AI,” Professor Weaver said.

While the conversation around AI often focuses on the risks and the need for regulation, Mr Simpson-Young reiterates: “I've been working in AI for large parts of my life, and I want AI to be used. AI is an important technology. Foundation models are important and incredibly useful and world changing. We need to be helping people use AI well.”

“It's always important to remember that the reason we are having these difficult conversations about AI regulation and risk is so that we can capture the enormous opportunity and potential to solve some of the greatest challenges of our time by harnessing artificial intelligence,” Professor Weaver said.

The two-part episode can be found on Apple Music, Spotify, or wherever you get your podcasts.
