First regulations on artificial intelligence

EU proposes regulations to ensure safe and responsible use of AI

As part of its digital strategy, the European Union (EU) aims to regulate artificial intelligence (AI) to establish better conditions for the development and utilization of this innovative technology. AI has the potential to bring numerous benefits, including improved healthcare, safer transportation, more efficient manufacturing, and sustainable energy solutions.

In April 2021, the European Commission introduced the first regulatory framework for AI within the EU. The proposal involves the analysis and classification of AI systems based on the risks they pose to users. The level of risk determines the extent of regulation required. Once approved, these regulations will become the world’s first rules specifically addressing AI.

Key Objectives of AI Legislation According to Parliament

The European Parliament prioritizes the safety, transparency, traceability, non-discrimination, and environmental sustainability of AI systems deployed in the EU. It emphasizes the need for human oversight to prevent harmful outcomes, advocating for people to be in control rather than relying solely on automated decision-making.

Parliament also aims to establish a technology-neutral and uniform definition of AI that can be applied to future AI systems, ensuring clarity and consistency in its implementation.

AI Act: Tailored Regulations Based on Risk Levels

The newly proposed rules set out obligations for AI providers and users depending on the level of risk their AI systems pose. While many AI systems pose minimal risk, they still need to be assessed.

Unacceptable Risk

AI systems considered to be an unacceptable risk will be prohibited. These include:

  • Cognitive behavioural manipulation of individuals or vulnerable groups, such as voice-activated toys encouraging dangerous behaviour in children.
  • Social scoring, which involves classifying individuals based on behaviour, socio-economic status, or personal characteristics.
  • Real-time and remote biometric identification systems, including facial recognition.

Certain exceptions may be allowed. For example, “post” remote biometric identification systems that identify individuals after a significant delay may be permitted for prosecuting serious crimes, but only with court approval.

High Risk

AI systems that have a negative impact on safety or fundamental rights will be categorized as high risk and will fall into two main groups:

  • AI systems used in products covered by the EU’s product safety legislation, such as toys, aviation, automobiles, medical devices, and lifts.
  • AI systems falling under eight specific areas that will require registration in an EU database:
      ◦ biometric identification;
      ◦ management of critical infrastructure;
      ◦ education and vocational training;
      ◦ employment and worker management;
      ◦ access to essential services;
      ◦ law enforcement;
      ◦ migration and border control management;
      ◦ legal interpretation and application of the law.

All high-risk AI systems will undergo thorough assessments before entering the market and throughout their lifecycle to ensure compliance and mitigate potential risks.

Generative AI

Generative AI, exemplified by models like ChatGPT, will be subject to transparency requirements, including:

  • Disclosing that the content was generated by AI.
  • Designing the model to prevent the generation of illegal content.
  • Publishing summaries of copyrighted data used for training.

Limited Risk

AI systems with limited risk will need to comply with minimal transparency requirements that allow users to make informed decisions. Users should be made aware when they are interacting with AI, particularly when a system generates or manipulates image, audio, or video content (e.g., deepfakes), so they can decide whether they wish to continue using the application.

The EU’s proposed regulations aim to strike a balance between fostering innovation and ensuring the responsible and ethical use of AI technology. As the legislation progresses, stakeholders across industries will closely monitor its development and implementation to understand its impact on AI practices within the EU and beyond.
