Identifying algorithmic bias in AI

A newly released paper helps businesses identify and address problems with AI bias.

With companies increasingly using AI for decision making in everything from pricing to recruitment, addressing the problem of algorithmic bias means understanding how these decision-making systems can result in unfairness.

A recently released technical paper demonstrates how businesses can identify algorithmic bias in artificial intelligence (AI), and proposes steps they can take to address this problem.

The technical paper also offers practical guidance to help companies ensure that, when they use AI systems, their decisions are fair and accurate, and comply with human rights.

The paper is a collaboration between the Australian Human Rights Commission and Gradient Institute, Consumer Policy Research Centre, CHOICE and CSIRO’s Data61.

Human rights should be considered whenever a company uses new technology like AI to make important decisions, said Human Rights Commissioner Edward Santow.

“Artificial intelligence promises better, smarter decision making, but it can also cause real harm,” he said. “Unless we fully address the risk of algorithmic bias, the great promise of AI will be hollow.”

Algorithmic bias can arise in many ways. Sometimes the problem is with the design of the AI-powered decision-making tool itself. Sometimes the problem lies with the data set that was used to train the AI tool. It often results in customers and others being unfairly treated.
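As a simple illustration (not drawn from the paper itself), one common way to surface this kind of bias is to compare a system's positive-decision rates across demographic groups. The Python sketch below uses hypothetical loan decisions and group labels; a large gap between groups is a warning sign worth investigating, though not proof of bias on its own.

```python
# A minimal sketch (not the paper's method) of one way algorithmic bias
# can be surfaced: comparing positive-decision rates across groups.
# The decisions and group labels below are hypothetical illustrations.

def selection_rates(decisions, groups):
    """Return the fraction of positive decisions for each group."""
    rates = {}
    for group in set(groups):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

# Hypothetical loan decisions (1 = approved) and each applicant's group.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(selection_rates(decisions, groups))  # e.g. {'A': 0.75, 'B': 0.25}

# A large gap between groups (here 0.75 vs 0.25) is a common warning sign
# of algorithmic bias, whether it stems from the tool's design or from a
# skewed training dataset.
```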

However, algorithmic biases in AI systems can be identified, and steps can be taken to address the problems, said Bill Simpson-Young, chief executive of Gradient Institute.

“Responsible use of AI must start while a system is under development and certainly before it is used in a live scenario. We hope that this paper will provide the technical insight developers and businesses need to help their algorithms operate more ethically,” he said.

Businesses should proactively identify human rights and other risks to consumers when they use artificial intelligence systems. They need to take a responsible approach to AI, which includes rigorous design and testing not only of algorithms and datasets but also of software development processes, as well as proper training in ethical issues for technology professionals, as sketched below.
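To make that concrete, here is a hedged sketch of what rigorous testing can look like in practice: an automated fairness check run before deployment. The tolerance threshold and test data below are hypothetical stand-ins, not figures from the paper.

```python
# A minimal sketch (an assumption, not the paper's method) of a fairness
# check wired into automated testing before a system goes live.

MAX_RATE_GAP = 0.2  # hypothetical tolerance a business might set

def rate_gap(decisions, groups):
    """Largest difference in positive-decision rates between any two groups."""
    rates = []
    for group in set(groups):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        rates.append(sum(outcomes) / len(outcomes))
    return max(rates) - min(rates)

def test_selection_rate_gap_within_tolerance():
    # Stand-in for a held-out evaluation dataset.
    decisions = [1, 1, 0, 1, 0, 0, 1, 0]
    groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
    gap = rate_gap(decisions, groups)
    # With this skewed data the gap is 0.5, so the test fails -- which is
    # the point: the bias is flagged before the system is used live.
    assert gap <= MAX_RATE_GAP, f"selection-rate gap {gap:.2f} exceeds tolerance"
```

Running this under a test framework such as pytest during development would stop a biased model from shipping until the gap is investigated.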
