In a world first, the EU created legislation to regulate artificial intelligence: the AI Act. But it now appears to be moving away from effective protection for those harmed by the technology by abandoning the proposed AI Liability Directive.

The Artificial Intelligence Act came into force in the EU in August 2024, defining four levels of risk for AI systems: unacceptable, high, limited and minimal. Eight practices, based on behaviour or personal characteristics, are banned as unacceptable, and those bans came into effect this month.

AI also poses many other potential risks to health, safety and fundamental rights, and the proposed AI Liability Directive was intended to create a harmonised legal approach across member states for those seeking compensation. But that directive is now at serious risk.

“The European Commission published its work programme for 2025 a few weeks ago, and the directive was on the list to be withdrawn. They don’t think it has made enough progress and won’t make enough progress in the coming months,” explains Cynthia Kroet, who follows EU technology policy for euronews.

While some argue that consumers would still be able to invoke the Product Liability Directive, “there is a big difference because this directive only covers defective products, material damage. AI liability would cover errors made, for example, by an algorithm that would lead to discriminatory results from an AI system,” according to Kroet.

Citizens interviewed by euronews in Madrid and Budapest appeared to expect a legal safety net. “I think it’s an extremely interesting technology, but also very dangerous if it’s not properly regulated,” said a resident of the Spanish capital.

“We should certainly make legal decisions that prohibit, for example, a small child from harming or harassing another with artificial intelligence,” suggested a Budapest resident.

Does too much regulation affect competitiveness?

Dropping the AI Liability Directive could be a sign that the European Commission is listening to critics who say too much regulation hurts industrial competitiveness.

To address this, President Ursula von der Leyen announced a new fund, InvestAI, at the global AI summit in Paris in early February. It will mobilise €200 billion to finance four future AI gigafactories in the EU. A dozen smaller units are also planned, allowing companies to test their AI models.

Brando Benifei, a centre-left Italian MEP and the AI Act's rapporteur, said withdrawal of the directive was a "disappointing choice because it creates legal uncertainty". He does not count regulation among the factors that harm competitiveness.

“We have less access to capital for investment in the digital sector. We need more computing infrastructure and then we need simplified and clear rules. But we cannot give up on protecting our citizens, our businesses, our public institutions, our democracy from the risks of discrimination, disinformation, harm from the misuse of AI,” he told euronews.

While the European Commission says it is open to finding a solution, the MEP believes a specific directive on liability remains the best "way forward", describing it as "light legislation that can create a common minimum standard".

Benifei says that "recommendations" alone would be ignored by some member states, and that amending the Product Liability Directive instead could prove complicated.

The AI Act will be fully applicable by 2027. Meanwhile, the EU wants to stay ahead in the innovation race, but can the Union balance its ambition to be an AI powerhouse with protecting the rights of its citizens?


Journalist: Isabel Marques da Silva

Content production: Pilar Montero López

Video production: Zacharia Vigneron

Graphism: Loredana Dumitru

Editorial coordination: Ana Lázaro Bosch and Jeremy Fleming-Jones
