Australians Lack Trust in Artificial Intelligence

Trust is an issue when it comes to artificial intelligence (AI), according to a University of Queensland study that found 72 per cent of people don’t trust it, with Australians among the most distrustful.

Professor Nicole Gillespie presenting at UQ's Trust, Ethics and Governance Alliance Symposium (2020). Image Credit: The University of Queensland

Trust experts from UQ Business School, Professor Nicole Gillespie, Dr Steve Lockey and Dr Caitlin Curtis, led the study in partnership with KPMG, surveying more than 6000 people in Australia, the US, Canada, Germany and the UK to gauge attitudes towards AI.

Professor Gillespie said trust in AI was low across all five countries, with Australians particularly concerned about its effect on employment.

“Australians are especially mistrusting of AI when it comes to its impact on jobs, with 61 per cent believing AI will eliminate more jobs than it creates, versus 47 per cent overall,” Professor Gillespie said.

The research identified critical areas needed to build trust and acceptance of AI, including strengthening current regulations and laws, increasing understanding of AI, and embedding the principles of trustworthy AI in practice. 

The survey also revealed that people believe most organisations use AI for financial reasons – to cut labour costs rather than to benefit society.

It found that while people are comfortable with AI for task automation, only one in five believe it will create more jobs than it eliminates. 

One positive finding was that people have more confidence in universities and research institutions than in other organisations to develop, use and govern AI in the public’s best interests.

Professor Gillespie said the research showed that distrust came from low awareness and understanding of when and how AI technology was used across all five countries.

“For example, our study found while 76 per cent of people report using social media, 59 per cent were unaware that social media uses AI,” she said. 

Professor Gillespie said despite the gap in understanding, 95 per cent of those surveyed across all countries expected organisations to uphold ethical principles of AI. 

“For people to embrace AI more openly, organisations must build trust with ethical AI practices, including increased data privacy, human oversight, transparency, fairness and accountability,” she said. 

“Putting in place mechanisms that reassure the community that AI is being developed and used responsibly, such as AI ethical review boards, and openly discussing how AI technologies impact the community, is vital in building trust.” 

Professor Nicole Gillespie is the KPMG Chair of Organisational Trust and is currently integrating the study’s findings on building trustworthy AI into the new UQ Master of Business Analytics program. The full research report is available online.

Source: https://www.uq.edu.au/
