Government must harness new tech without stepping on civil rights

New technologies can threaten human rights, including equality, non-discrimination, privacy, safety from violence, access and more.

The Australian Human Rights Commission's recently released report calls for far-reaching changes to ensure governments, companies and others safeguard human rights in the design, development and use of new technologies like artificial intelligence (AI).

The Human Rights and Technology final report makes 38 recommendations to ensure human rights are upheld in Australia’s laws, policies, funding and education on AI.

Recommendations include stronger community protections against harmful uses of AI—especially when AI is used in high-risk areas such as policing, social security and banking—and the creation of a new AI Safety Commissioner to help lead Australia’s transition to an AI-powered world, said Edward Santow, Australia’s Human Rights Commissioner.

“New technology should give us what we want and need, not what we fear,” he said. “Our country has always embraced innovation, but over the course of our Human Rights and Technology project, Australians have consistently told us that new technology needs to be fair and accountable.”

According to Santow, that’s why the Commission has recommended a moratorium on some high-risk uses of facial recognition technology, and on the use of ‘black box’ or opaque AI in decision making by corporations and by government.

“We’re also recommending measures to ensure that no-one is left behind as Australia continues its digital transformation—especially people with disability,” noted Santow. “We need to ensure that new technology facilitates the inclusive society Australians want to live in, and that innovation is consistent with our values.”

He said there are already laws that aim to protect people from being treated unfairly. However, the report recommends ways to apply those laws more effectively, as well as targeted reforms that would bring Australia’s laws into the 21st century.

“Australians should be told when AI is used in decisions that affect them. The best way to rebuild public trust in the use of AI by government and corporations is by ensuring transparency, accountability and independent oversight, and a new AI Safety Commissioner could play a valuable role in this process,” he said.

“A clear national strategy and good leadership will give Australia a competitive advantage and technology that Australians can trust.”

The Human Rights and Technology final report is the culmination of three years of consultation with the tech industry, governments, civil society and communities across Australia. The project included a national survey, face-to-face consultations with national and international experts, roundtables and formal submissions.
