Europe imposes final regulations on artificial intelligence

Reading Time: 2 Min

The European Parliament has adopted final legislative measures on artificial intelligence intended to ensure safety and respect for fundamental human rights. The new law aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from potential threats arising from the use of high-risk artificial intelligence, according to the Bulgarian representation to the EU.

The regulation introduces obligations for artificial intelligence systems that are graded according to the potential risks they carry.

Prohibited applications

The new rules prohibit certain AI applications that threaten citizens’ rights. These include biometric categorisation systems based on sensitive characteristics, as well as the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.

The rules also prohibit emotion recognition systems in workplaces and schools, social scoring systems, predictive policing based solely on profiling or the assessment of a person’s characteristics, and artificial intelligence that manipulates human behaviour or exploits individuals’ vulnerabilities.


Exceptions in law enforcement

The use of biometric identification systems by law enforcement authorities is prohibited by default, except in exhaustively listed and narrowly defined situations. Real-time biometric identification systems may only be deployed where strict safeguards are met, such as limits on their duration and geographical scope, and subject to specific prior judicial or administrative approval.

They can be used, for example, for targeted searches for missing persons or to prevent terrorist attacks. Post-facto use of such systems is considered high-risk use of an AI system and requires judicial authorisation.


Transparency requirements

Artificial intelligence systems, and the models on which they are based, must meet certain transparency requirements, such as compliance with EU copyright legislation and the publication of detailed summaries of the content used to train the models. More powerful general-purpose AI models that could pose systemic risks will be subject to additional requirements, including model evaluation, systemic-risk assessment and incident reporting.

In addition, artificial or manipulated images, audio or video content (deepfakes) must be clearly labelled as such.

Regulatory sandboxes for real-world testing will be set up at national level, accessible to public administrations and start-ups, so that new AI models can be developed and trained before they are placed on the market.


If you need accounting or legal services, we are here to help:

Contact:

TPA Bulgaria

+359 2 981 66 45/46/47
office@tpa-group.bg

ul. “G.S. Rakovski” 128, floor 2
1000 Sofia