The EU is inviting the companies behind generative AI chatbots such as ChatGPT, Mistral, Gemini and Claude to sign a voluntary code of practice on general-purpose AI.

By signing the code and adhering to its rules, companies are deemed compliant with the AI Act, an EU law that came into force in 2024 and defines four risk levels for the use of AI, from minimal to unacceptable.

Companies that refuse to sign the code may face more stringent inspections and administrative burdens. Major players like OpenAI and Anthropic support the code, while others, like Meta, refuse to sign it.

“Since the drafting process began last September, Meta has been very critical of the code, saying that it stifles innovation,” said Cynthia Kroet, senior tech policy reporter at Euronews.

“They’ve rolled out a few tools that they cannot fully use in Europe, also because of data protection rules. In the end it doesn’t matter much if they sign or not because the AI Act will prevail anyway,” she added.

The AI Act will be implemented progressively through 2027. This month, the rules for general-purpose AI models, such as the generative chatbots mentioned above, will come into effect, and companies have two years to adapt.

However, future models entering the market will be required to comply with the law immediately and, in the event of a violation, the Commission may impose fines of up to €15 million.

Are regulation and investment at odds?

The code of practice sets out guidance on how to respect copyright, standards for avoiding systemic risks from advanced AI models, and advice on filling out a form that promotes transparency about how companies comply with the AI Act.

Some analysts argue that the EU is using the regulation to position itself strategically as the world's most trusted AI provider. The US and China have less comprehensive regulatory approaches and are focusing primarily on attracting large investments into the sector.

However, Laura Lázaro Cabrera, advisor to the Center for Democracy and Technology, says the two need to go hand in hand.

“The EU has made great strides towards strengthening the financial support that it provides to AI development in Europe. Just this year, over €200 billion have been announced for AI investment,” Laura Lázaro Cabrera said.

“Finances are an important part of the equation, and indeed it is important for the EU to maintain a leadership role in the development of AI, but we think that that leadership has to be tied to a strong safety framework that promotes fundamental rights and that promotes people-centred AI systems,” the advisor added.

Deepfakes, the theft of confidential data and suicides linked to the use of chatbots are some examples of the risks of generative AI.

Laura Lázaro Cabrera hopes that the AI literacy obligations placed on companies will also lead to EU-wide campaigns and training for citizens, helping them understand the benefits and risks of this revolutionary technology.

Journalist: Isabel Marques da Silva

Content production: Pilar Montero López

Video production: Zacharia Vigneron

Graphics: Loredana Dumitru

Editorial coordination: Ana Lázaro Bosch and Jeremy Fleming-Jones
