The European Parliament has officially passed the “Artificial Intelligence Act,” aimed at ensuring the transparent and safe use of artificial intelligence (AI) and at preventing the spread of disinformation. The legislation responds to the growing global use of AI tools, such as ChatGPT and Midjourney, for content creation.
These technologies, widely accessible to the public, have faced ongoing attempts at regulation since their inception. Notably, countries such as China, the United Kingdom, and the United States have been at the forefront of advocating for regulatory measures.
The EU’s enactment of the “Artificial Intelligence Act” marks a significant move toward establishing a framework for the secure application of AI; the law was adopted by a clear majority, with 523 votes in favor.
The world’s first major AI regulations are coming
The legislation introducing the world’s first major regulations on artificial intelligence is expected to become law in May, following final approval from EU member states. The European Parliament has stated that each EU country will establish its own AI regulatory agency, and the rules are expected to be fully in force by mid-2026.
Biometric identification systems can only be used by law enforcement agencies
In the military domain, the law is intended to ensure that AI-supported tools do not undermine the security capabilities of member states or the institutions responsible for them. Law enforcement agencies will also be able to make use of artificial intelligence.
Within this framework, and with the necessary authorizations, they will be able to deploy remote biometric identification systems in public spaces. Such systems may be used to locate individuals such as victims of human trafficking and sexual abuse, or to identify people suspected of terrorism or other criminal activity.
Images, videos and audio created with artificial intelligence will need to be labeled
Perhaps the most concerning aspect of artificial intelligence is the creation of fake images, videos, and audio. The EU has stated that under the new law, all images, videos, and audio produced by artificial intelligence must be clearly labeled.
The law also classifies AI systems by risk.
Certain AI tools identified as high-risk will be permitted for use; however, these tools must meet specific requirements to enter the EU market, with data security being a key criterion.
However, some applications are deemed to pose unacceptable risk and will be prohibited within the EU. These include cognitive manipulation, untargeted scraping of facial images from the internet or CCTV footage, emotion recognition in workplaces and educational institutions, social scoring, and biometric categorization to infer sensitive attributes such as sexual orientation or religious beliefs.
Some artificial intelligence applications will be banned outright
The new law will explicitly ban certain artificial intelligence applications that are described as infringing upon the “rights of citizens”. Examples of such prohibited applications include emotion recognition systems in schools and workplaces.
Transparency will need to be provided for general-purpose AI tools.
The foundation models powering these tools must meet certain transparency requirements before they are allowed to enter the market. On its website, the EU notes that highly complex, large, and high-performance models can pose risks, which is why a degree of transparency will be required of them.
A dedicated office will be set up to oversee them.
For this purpose, an artificial intelligence office has been established to oversee the most advanced AI models. Drawing on consultations with independent experts, the office will identify security risks and suggest improvements.
Additionally, a management team at the helm of the office will play a key role in implementing the regulations, and an advisory forum made up of leading industry figures will advise that team.
And what will be the penalty if this law is not followed?
The penalties differ depending on the violation. Using prohibited practices can result in a fine of 35 million euros or 7 percent of the company’s annual revenue; failing to meet the law’s other requirements, 15 million euros or 3 percent of annual revenue; and supplying false information, 7.5 million euros or 1.5 percent of annual revenue. In each case, whichever amount is higher applies.
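As a rough illustration of the “whichever is higher” rule, the short Python sketch below computes the maximum possible fine for a prohibited practice. The function name and the revenue figure are hypothetical and purely illustrative; only the thresholds (35 million euros or 7 percent of annual revenue) come from the article above.

# Illustrative sketch of the "whichever is higher" fine rule for prohibited practices.
# The revenue figure is hypothetical; the thresholds come from the article above.

def max_fine_prohibited_practice(annual_revenue_eur: float) -> float:
    """Return the larger of 35 million euros or 7% of annual revenue."""
    return max(35_000_000.0, 0.07 * annual_revenue_eur)

# Hypothetical company with 1 billion euros in annual revenue:
# 7% of 1,000,000,000 = 70,000,000 euros, which exceeds 35,000,000 euros,
# so the applicable ceiling would be 70 million euros.
print(max_fine_prohibited_practice(1_000_000_000))  # 70000000.0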
What are your thoughts on this law?