Taiwan’s Ministry of Science and Technology and NTU enable AI explainability

xCos module can be plugged into any existing deep face verification model

Taiwan’s Ministry of Science and Technology (MOST) has developed xCos, a face verification module that can recognise whether or not two face images belong to the same person.

The module was developed by Professor Winston Hsu’s team at the MOST Joint Research Centre for AI Technology and All Vista Healthcare (AINTU). Enabling explainability in AI will not only advance the technology but also build people’s trust.

According to the Minister of MOST, Dr Liang-Gee Chen, MOST has been supporting four AI research centres covering different research domains since 2018, following the AI Science Strategies announced in 2017.

The centres focus on AI technology, biomedical technology, intelligent manufacturing, applied AI research, and the humanities and social sciences, and are located at National Taiwan University (NTU), National Tsing Hua University, National Chiao Tung University, and National Cheng Kung University.

Professor Hsu’s team, which is based at NTU and focuses on AI technology, developed the xCos module so that it can be plugged into any existing deep face verification model. The team is also preparing to extend explainable AI (XAI) into other fields such as energy, medicine, and manufacturing.

He stated that xCos can “help people” understand how decisions were made and assist developers in examining the inner workings of their deep neural networks.

“With explainable techniques, people will have more confidence in accepting AI’s decisions, and developers can adjust their programs to improve accuracy,” stated Professor Hsu.

Explainable AI is “an important issue” in AI development around the world. In line with this trend, MOST announced its “AI R&D Guidelines” last September to emphasize “transparency and traceability” and “explainability.”

Minister Chen noted that in February 2020 the European Union published a white paper on AI, which points out that “excellence and trust” must be established because a lack of trust has blocked the broad use of AI.

“The future development of AI must make people trust AI by reinforcing transparency and explainability,” he stated.

Professor Hsu and his team have been working on face-recognition technology since 2011. Back then, his team developed one of the first face search engines for mobile devices.

They have also assisted many software companies in developing face recognition technologies through academia-industry collaboration during the past few years.

Professor Hsu said that the finding that some AI verification results are counterintuitive inspired the team to build xCos and explore the justification behind AI decisions. “This model can provide both the quantitative and qualitative reasons to explain why two face images are from the same person or not,” he stated.

According to Professor Hsu, if the two face images are judged by the model to be of the same person, the team’s proposed method can clearly show which areas of the face are more representative than others by providing local similarity values and attention weights.
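As described above, the core idea is to break the overall similarity score into a grid of patch-wise local cosine similarities and combine them with attention weights, so that each facial region’s contribution to the decision can be visualised. The sketch below illustrates that idea in NumPy; the 7x7 grid size, the attention input, and all function and variable names are illustrative assumptions for this article, not the team’s actual implementation.

```python
import numpy as np

def local_cosine_similarities(feat_a, feat_b):
    """Cosine similarity between corresponding spatial patches.

    feat_a, feat_b: arrays of shape (H, W, C), i.e. per-patch embeddings
    taken from a convolutional face backbone. Returns an (H, W) map.
    """
    num = np.sum(feat_a * feat_b, axis=-1)
    denom = (np.linalg.norm(feat_a, axis=-1) *
             np.linalg.norm(feat_b, axis=-1) + 1e-8)
    return num / denom

def explainable_cosine(feat_a, feat_b, attention):
    """Weight local similarities by attention to get one overall score.

    attention: (H, W) non-negative weights (e.g. produced by a small
    attention branch), normalised here to sum to 1. Returns the scalar
    score plus the two maps that can be shown as heatmaps to explain
    which facial regions drove the decision.
    """
    local_sim = local_cosine_similarities(feat_a, feat_b)
    weights = attention / (attention.sum() + 1e-8)
    score = float(np.sum(weights * local_sim))
    return score, local_sim, weights

# Toy usage: random 7x7x512 "feature grids" standing in for the outputs
# of a face recognition backbone for two images being compared.
rng = np.random.default_rng(0)
feat_a = rng.normal(size=(7, 7, 512))
feat_b = rng.normal(size=(7, 7, 512))
attention = rng.uniform(size=(7, 7))

score, sim_map, weight_map = explainable_cosine(feat_a, feat_b, attention)
print(f"weighted similarity score: {score:.3f}")
```

In this toy setup, sim_map shows how similar each pair of patches is and weight_map shows how much each patch mattered to the final score, which is the kind of quantitative and qualitative evidence the article refers to.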

“The explainable module, xCos, can even work well with other common neural network face recognition backbones such as ArcFace, CosFace, etc.,” he stated.

According to PricewaterhouseCoopers, AI has a potential market value of US$15 trillion, but it is often difficult to know the rationale behind how an algorithm arrived at its recommendation or decision, hence the push for “explainable AI.”

Currently, the computer vision community still lacks an effective method for understanding the working mechanisms of deep learning models.

Because of their inherently non-linear structures and complicated decision-making processes (the so-called “black box”), the reasons behind their decisions remain unknown, which could lead to serious security and privacy issues.

These problems make users feel insecure about deep learning-based systems and leave developers struggling to improve them.

Minister Chen said that technology continuously progresses because people keep asking “why” and make the effort to find solutions.

“In the early stage, AI was run by a rule-based system. It was rather easy to trace how the system made its decision, and it was highly explainable,” he stated. “With an increasing amount of data, deep convolutional neural networks achieve higher accuracy for the task of face verification.”

However, people have noticed that AI cannot explain its decision-making process. Given AI’s exceptional performance, deep face verification models need more interpretability so that we can trust the results they generate, he said.
