EU’s AI Act: Balancing Rights, Risks and Innovation

On 9th December 2023, the European Parliament and the Council reached a provisional agreement on the AI Act.

The EU’s Artificial Intelligence (AI) Act is set to become the first comprehensive legislation regulating the use of AI. The Act aims to ensure that AI is ethical, safe, and trustworthy. In other words, fundamental rights, democracy, the rule of law, and environmental sustainability should be protected when AI is used.

Risk groups

The rules follow a risk-based approach: the greater the risk an AI system poses, the stricter the limits the Act places on it. AI systems are classified into four risk levels: minimal risk, limited risk, high risk, and unacceptable risk.

  • Minimal risk group: Applications such as AI-enabled recommender systems, video games, or spam filters are considered minimal risk and will be exempt from obligations under the EU’s AI Act, as they pose little to no risk to citizens’ rights or safety. In essence, the majority of AI systems fall into this category and can continue to be used without additional regulation.
  • Limited risk group: Systems in this group must meet minimum transparency requirements: users must be made aware that they are interacting with AI, so that they can decide whether they want to continue using it. Chatbots are a typical example.
  • High-risk group: Systems that pose significant risks to health, safety, fundamental rights, the environment, democracy, or the rule of law fall into this group, and the Act sets clear obligations for them.

A mandatory fundamental rights impact assessment, one of the requirements that MEPs successfully included, also applies to the banking and insurance sectors. High-risk systems will be assessed before they are placed on the market and at every stage of their development.

People will be able to file complaints against AI systems and request explanations of decisions made by high-risk AI systems that affect their legal rights.

  • Unacceptable risk group: AI systems that pose unacceptable risks will not be allowed in the European Union; in other words, they will be banned. This covers AI systems that manipulate human behavior to undermine users’ free will. Examples include voice-activated toys that encourage risky behavior in children, and programmes that enable “social scoring” by businesses or governments. It also includes specific applications of predictive policing. Furthermore, certain uses of biometric systems will not be permitted, such as emotion recognition in the workplace, categorisation of people, and real-time remote biometric identification by law enforcement in publicly accessible areas.

Other goals of the AI Act: Beyond improving governance and the effective enforcement of existing laws on fundamental rights and safety, the AI Act pursues further aims. EU legislators seek to accelerate the creation of a single market for AI applications while simultaneously encouraging investment and innovation in AI within the EU. The Act supports the EU’s coordinated plan on artificial intelligence, which aims to boost AI investment in Europe.

For now, the Council and the Parliament have reached only a provisional agreement on the AI Act. For the Act to officially become part of EU law, the provisional agreement must be formally adopted by both institutions.