AI in Australian workplaces

Forty percent of Australians trust AI in the workplace

A University of Queensland and KPMG Australia study found only 40 percent of Australians trust the use of artificial intelligence (AI) at work, including tools such as ChatGPT.

Overall, Australians’ attitudes towards AI mirrored those of other Western countries such as the UK, Canada, and France, where concern about AI is dominant. However, more Australians had heard of AI, and trust in AI was higher, in 2022 than in 2020.

The study, Trust in Artificial Intelligence: A Global Study 2023, found that Australians, like people in many Western countries, are among the least trusting in the world when it comes to the use of AI at work. Fewer than half of Australians are comfortable with the use of AI at work, and only a minority believe the benefits of AI outweigh the risks.

However, perceptions of AI vary by age and education in Australia: 65 percent of Gen X and Millennials trust AI at work, compared with 39 percent of older Australians. The study found a similar gap between the university educated (56 percent) and those without a degree (40 percent).

Professor Nicole Gillespie, KPMG Chair for Organisational Trust at The University of Queensland School of Business, said: “Most people are comfortable with AI use for the purpose of augmenting employee performance and decision-making, for example by providing feedback on how to improve performance and providing input for making decisions. However, people are notably less comfortable with AI use for human resource management, such as to monitor and evaluate employees, collect employee performance data, and support recruitment and selection processes.”

The survey also found that only 35 percent of Australians believe there are enough safeguards and current laws or regulations in place to make AI use safe, with no improvement in perceptions of the adequacy of regulation in the two years since the previous survey. This aligns with earlier surveys showing ongoing public dissatisfaction with the regulation of AI. Australians expect AI to be regulated, with the preferred option being a dedicated independent regulator to monitor AI use across sectors. This highlights the importance of strengthening and communicating the regulatory and legal framework governing AI, including data privacy laws.

James Mabbott, Lead Partner, KPMG Futures, said: “A key challenge is that a third of people have low confidence in government, technology, and commercial organisations to develop, use and govern AI in society’s best interest. Organisations can build trust in their use of AI by putting in place mechanisms that demonstrate responsible use, such as regularly monitoring accuracy and reliability, implementing AI codes of conduct, independent AI ethics reviews and certifications, and adhering to emerging international standards.”

Professor Gillespie said: “Overall, the findings highlight the importance of developing adequate governance and regulatory mechanisms that safeguard people from the risks associated with AI use and public confidence in entities to enact these safeguards, as well as ensuring AI is designed and used in a human-centric way to benefit people.”

The research provides comprehensive global insights into the key drivers of trust, the perceived risks and benefits of AI use, community expectations of the governance of AI, and who is trusted to develop, use, and govern AI.

It also sheds light on current understanding and awareness of AI, the practices and principles expected of organisations deploying AI, and how people feel about the use of AI at work.
