News

EU AI Act: The New European Law on Artificial Intelligence and Its Global Impact

Article Highlights:
  • The EU AI Act is the world’s first comprehensive artificial intelligence law
  • It applies to all companies offering or using AI in the European market
  • Introduces a risk-based approach: unacceptable, high, limited, and minimal risk
  • Penalties up to €35 million or 7% of global turnover
  • Big tech reacted differently: Google signed the code, Meta criticized it
  • The law aims to promote trustworthy, rights-respecting AI
  • Compliance deadlines are staggered until 2027
  • Europe positions itself as a leader in AI regulation

What is the EU AI Act?

The EU AI Act is the world’s first comprehensive law on artificial intelligence, approved by the European Union to regulate the development and use of AI across its 27 member states. This regulation affects not only European companies but also any international entity offering or using AI systems in the European market.

Why was this law created?

The main goal of the EU AI Act is to establish a uniform legal framework that fosters innovation while ensuring the protection of fundamental rights, safety, and the environment. The law aims to promote “human-centric and trustworthy” AI, balancing technological growth with risk prevention.

How it works: a risk-based approach

The regulation adopts a risk-based approach, dividing AI applications into four categories:

  • Unacceptable risk: banned uses, such as untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases.
  • High risk: systems subject to strict obligations, such as those used in healthcare, finance, or critical infrastructure.
  • Limited risk: transparency obligations for less critical applications, such as chatbots.
  • Minimal risk: no specific obligations for most other applications.

Timeline and deadlines

The EU AI Act came into force on August 1, 2024, but its provisions will be applied gradually. The first bans have been active since February 2, 2025, while most rules will be fully operational by mid-2026.

From August 2, 2025, the law also applies to general-purpose AI models with “systemic risk,” covering providers such as Google, Meta, and OpenAI, which have until 2027 to comply fully.

Penalties and enforcement

The penalties are severe: up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations. Providers of general-purpose AI models also risk fines of up to €15 million or 3% of turnover.

Reactions and debate among tech companies

The law has drawn mixed reactions from big tech. Google voluntarily signed the EU’s General-Purpose AI Code of Practice, though it expressed concern about potential slowdowns in European innovation. Meta, on the other hand, openly criticized the regulation, calling it excessive and a source of legal uncertainty.

“We remain concerned that the AI Act and Code risk slowing Europe’s development and deployment of AI.”

Kent Walker, President of Global Affairs, Google

Some European companies also requested a pause in implementation, but the European Commission confirmed the deadlines will be respected.

Conclusions: opportunities and challenges

The EU AI Act marks a historic turning point for AI regulation, placing Europe at the forefront of citizen protection and responsible innovation. However, balancing safety, rights, and technological development remains an open challenge that will shape the global future of AI.
