South Korea’s MSIT releases plan to regulate AI

“Realise trustworthy AI for everyone”.

The Ministry of Science and ICT (MSIT) in South Korea has announced a strategy to realise artificial intelligence that is trustworthy for everyone.

While artificial intelligence is a leading innovation, spreading across industries and society as a driving force of the fourth industrial revolution, its widespread use has given rise to unexpected social issues and concerns. These include the AI chatbot ‘Iruda’ (January 2021), the deepfake of former President Obama (July 2018), and the ‘psychopath’ AI developed by MIT (June 2018).

These incidents have led major countries to recognise that gaining social trust in artificial intelligence is the first step to using it across industry and society. As a result, active policy measures have been taken to realise artificial intelligence that can be trusted.

Examples from around the world include:

  • EU: Proposed an AI bill imposing focused regulations on high-risk AI, with obligations on providers (April 2021); established a system in which providers must notify users when automated decision-making is used, and users have the right to refuse it, request an explanation, and raise objections (General Data Protection Regulation, 2018); provided a trust self-checklist for the private sector, built on three components of trustworthy AI (2019)
  • US: Adopted ‘technically safe AI development’ in the National AI R&D Strategic Plan (2019); major businesses (IBM, MS, Google) prepared AI development principles and promoted self-regulation to realise the ethical use of AI; announced a federal-government-level regulation guideline whose main goals are to limit regulatory overreach and focus on a risk-based approach, including 10 principles to secure trust in AI, such as transparency and fairness (2020)
  • France: Drew up recommendations for realising AI for humans, based on an open discussion with 3,000 people from civil society and business (2018)
  • UK: Established 5 codes of ethics (April 2018), a guide to using AI in the public sector (June 2019), and a guideline for explainable AI (May 2020)
  • Japan: Announced social principles of ‘human-centered AI’, introducing 7 basic rules for all stakeholders in AI (March 2018)

According to MSIT, amid such global trends, Korea’s strategy was prepared on the understanding that policy support should be promptly provided to realise trustworthy, human-centred artificial intelligence if Korea is to become a leader in AI.

The strategy has the vision of “realise trustworthy artificial intelligence for everyone” and will be implemented step by step until 2025, based on the three pillars of ‘technology, system, ethics’ and 10 action plans.

Key features include:

  • Create an environment for realising reliable artificial intelligence
  • Lay the foundation for safe use of artificial intelligence
  • Spread AI ethics across society
