

The European Union has taken a historic step with the entry into force of the AI Act, the world's first comprehensive legislation on artificial intelligence. The act, which puts Europe at the forefront of AI governance, establishes a risk-based regulatory framework intended to balance innovation with the protection of fundamental rights. Yet the regulation is also another manifestation of the so-called "Brussels Effect": the EU's tendency to project its rules globally through the power of its market, without necessarily driving technological innovation itself.
While the United States and China lead AI development with massive public and private investment (45 percent and 30 percent of global investment in 2024, respectively), Europe has attracted only 10 percent of global AI investment. In response, the EU seeks to compensate for its technological lag through regulation, imposing standards that end up affecting the entire global ecosystem.
The central question is whether Europe is creating an environment that promotes responsible innovation, or simply exporting bureaucracy into an industry where it cannot compete.
The AI Act applies not only to European companies, but also to those that operate in the European market or whose AI systems affect EU citizens. This extraterritorial jurisdiction is particularly evident in the provisions on general-purpose AI (GPAI) models: Recital 106 states that providers must respect EU copyright "regardless of the jurisdiction in which the training of the models takes place."
This approach has been strongly criticized by some observers, who see it as an attempt by the EU to impose its standards on companies that are not based in its territory. Critics say this could create a rift in the global technology ecosystem, with companies forced to develop separate versions of their products for the European market or adopt European standards for all markets to avoid additional compliance costs.
Multinational technology companies are thus in a difficult position: ignoring the European market is not a viable option, but complying with the AI Act requires significant investment and could limit opportunities for innovation. This effect is further amplified by the ambitious implementation timeline and the interpretive uncertainty of many provisions.
The AI Act entered into force on August 1, 2024, but its obligations apply on a staggered schedule:
- February 2, 2025: prohibitions on unacceptable-risk practices and AI literacy obligations;
- August 2, 2025: obligations for providers of GPAI models and the governance framework;
- August 2, 2026: most remaining provisions, including requirements for the high-risk systems listed in Annex III;
- August 2, 2027: requirements for high-risk systems embedded in products already covered by EU product-safety legislation (Annex I).
The regulation takes a risk-based approach, classifying AI systems into four categories: unacceptable risk (banned), high risk (subject to strict requirements), limited risk (subject to transparency obligations), and minimal or no risk (free use). This categorization determines the specific obligations of providers, deployers, and users.
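To make the categorization concrete, here is a minimal sketch (illustrative only: the tier names follow the Act, but the obligation summaries are paraphrases, not statutory text) of how the four tiers map to regulatory consequences:

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers (paraphrased, not legal text)."""
    UNACCEPTABLE = "unacceptable"  # banned practices, e.g. social scoring
    HIGH = "high"                  # strict requirements before market entry
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # free use, voluntary codes of conduct

# Paraphrased obligations per tier -- a reading aid, not legal advice.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited outright.",
    RiskTier.HIGH: "Conformity assessment, risk management, logging, human oversight.",
    RiskTier.LIMITED: "Disclose AI use to the people interacting with the system.",
    RiskTier.MINIMAL: "No mandatory obligations.",
}

def obligations_for(tier: RiskTier) -> str:
    """Return the paraphrased obligation summary for a given tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```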
One of the most significant innovations of the AI Act concerns transparency obligations, which aim to open up the "black box" of AI systems. These obligations include:
- informing people when they are interacting with an AI system;
- labeling AI-generated or manipulated content, including deep fakes;
- technical documentation and record-keeping for high-risk systems;
- publishing a sufficiently detailed summary of the content used to train GPAI models.
These requirements, although designed to protect the rights of citizens, could place a significant burden on companies, particularly innovative startups and SMEs. The need for detailed documentation of development processes, training data, and decision-making logic could slow innovation cycles and increase development costs, putting European companies at a disadvantage compared to competitors in other regions with less stringent regulations.
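To give a sense of the documentation burden in practice, the following is a hedged sketch of a machine-readable training-data summary record. The AI Act tasks the AI Office with providing the official template; the schema and field names below are illustrative assumptions, not that template:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TrainingDataSummary:
    """Hypothetical record for a GPAI training-data summary.

    Field names are illustrative assumptions, not the AI Office template.
    """
    model_name: str
    data_sources: list[str]            # broad source categories, not raw URLs
    copyrighted_content_policy: str    # how EU copyright / TDM opt-outs are handled
    languages: list[str] = field(default_factory=list)
    collection_period: str = ""        # e.g. "2019-2024"

summary = TrainingDataSummary(
    model_name="example-gpai-7b",
    data_sources=["licensed news archives", "public-domain books", "web crawl"],
    copyrighted_content_policy="TDM opt-outs honored at crawl time",
    languages=["en", "de", "fr"],
    collection_period="2019-2024",
)

# Serialize for publication alongside the model.
print(json.dumps(asdict(summary), indent=2))
```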

The ruling in Case C-203/22 highlights how companies initially resist transparency mandates. The defendant argued that revealing the logic of its credit-scoring algorithm would expose trade secrets, jeopardizing its competitive advantage. The CJEU rejected this argument, holding that Article 15(1)(h) of the GDPR, read together with Article 22, entitles individuals to an explanation of the "criteria and logic" behind automated decisions, even in simplified form.
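The kind of simplified explanation the Court contemplated can be illustrated in code. Below is a toy scoring model (entirely invented, not any provider's actual system) showing how per-factor contributions could be surfaced without disclosing the underlying weights:

```python
# Toy credit-scoring model: a weighted sum over applicant features.
# Weights and feature names are invented for illustration; a real
# scorer would be proprietary, which is exactly what was in dispute.
WEIGHTS = {"payment_history": 0.5, "income_stability": 0.3, "existing_debt": -0.4}

def explain_decision(applicant: dict[str, float], threshold: float = 0.2) -> None:
    """Print a simplified 'criteria and logic' summary for one decision."""
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    verdict = "approved" if score >= threshold else "declined"
    print(f"Decision: {verdict} (score {score:.2f}, threshold {threshold})")
    # Rank factors by absolute impact -- the 'criteria' -- without
    # exposing the raw weights, the claimed trade secret.
    for factor, impact in sorted(contributions.items(), key=lambda x: -abs(x[1])):
        direction = "helped" if impact >= 0 else "hurt"
        print(f"  {factor}: {direction} the outcome")

explain_decision({"payment_history": 0.9, "income_stability": 0.4, "existing_debt": 0.6})
```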
Under the AI Act's two-tier system for general-purpose models, most generative AI models fall into the baseline tier, which requires compliance with EU copyright law and publication of training-data summaries. To ward off copyright-infringement claims, companies such as OpenAI have shifted toward licensed content and published data summaries, but gaps in documentation persist.
The AI Act contains specific copyright-related provisions that extend the EU's regulatory influence far beyond its borders. GPAI model providers must:
- put in place a policy to comply with EU copyright law, including honoring machine-readable text-and-data-mining (TDM) opt-outs under Article 4(3) of Directive (EU) 2019/790 (see the sketch below);
- make publicly available a sufficiently detailed summary of the content used to train their models.
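Of these duties, honoring opt-outs is the most operational. The sketch below shows one way a crawler might check for a machine-readable TDM reservation before ingesting a page, following the W3C Community Group's draft TDMRep convention; the header and file names come from that draft and should be treated as assumptions rather than settled practice:

```python
import json
import urllib.request
from urllib.parse import urlparse

def tdm_reserved(url: str) -> bool:
    """Best-effort check for a machine-readable TDM opt-out before crawling.

    Follows the draft TDMRep convention (a 'tdm-reservation: 1' response
    header, or an entry in /.well-known/tdmrep.json); field names are
    taken from that draft and may change.
    """
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            if resp.headers.get("tdm-reservation") == "1":
                return True
    except OSError:
        return True  # fail closed: if the page cannot be checked, skip it

    parts = urlparse(url)
    well_known = f"{parts.scheme}://{parts.netloc}/.well-known/tdmrep.json"
    try:
        with urllib.request.urlopen(well_known, timeout=10) as resp:
            rules = json.load(resp)
    except (OSError, ValueError):
        return False  # no site-wide declaration found

    path = parts.path or "/"
    for rule in rules:
        # Simplified prefix matching; the draft allows path patterns.
        location = str(rule.get("location", "")).rstrip("*")
        if path.startswith(location) and rule.get("tdm-reservation") == 1:
            return True
    return False

if not tdm_reserved("https://example.com/article"):
    print("OK to include in the training corpus")
```

Failing closed when a page cannot be checked is a policy choice, not a legal requirement; a real pipeline would also need robots.txt handling and the Directive's other forms of rights reservation.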
Recital 106 of the AI Act states that providers must respect EU copyright law, "regardless of the jurisdiction in which model training takes place." This extraterritorial approach raises questions about compatibility with copyright territoriality principles and could create regulatory conflicts with other jurisdictions.
For global technology companies, the AI Act presents a key strategic choice: adapt to the "Brussels Effect" and comply with European standards globally, or develop differentiated approaches for different markets? Several strategies have emerged:
Some large technology companies are developing a "dual model" of operation:
- European versions of their products, adapted (and sometimes reduced in functionality) to comply with the AI Act;
- unrestricted versions for the rest of the world, released on faster innovation cycles.
This approach, although costly, makes it possible to maintain a European market presence without compromising innovation globally. However, this fragmentation could lead to a widening technology gap, with European users having access to less advanced technologies than those in other regions.
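In engineering terms, the dual model often reduces to region-based capability gating. The following is a hedged sketch, with invented feature names and regions, of how a provider might ship one codebase while withholding certain capabilities in an EU profile:

```python
# Hypothetical capability gating per market. Feature names and the
# decision to gate them are invented for illustration.
FEATURE_PROFILES = {
    "EU": {"chat", "summarization"},  # conservative, compliance-reviewed profile
    "GLOBAL": {"chat", "summarization", "voice_cloning", "emotion_detection"},
}

def enabled_features(region: str) -> set[str]:
    """Return the feature set shipped in a given market profile."""
    return FEATURE_PROFILES.get(region, FEATURE_PROFILES["EU"])  # default to strictest

def is_enabled(feature: str, region: str) -> bool:
    return feature in enabled_features(region)

print(is_enabled("emotion_detection", "EU"))      # False: held back in the EU profile
print(is_enabled("emotion_detection", "GLOBAL"))  # True
```

Defaulting unknown regions to the strictest profile is one way to keep compliance failures conservative, at the cost of the very feature gap the text describes.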
The European AI Act represents a turning point in AI regulation, but its complexity and interpretative ambiguities generate a climate of uncertainty that could negatively affect innovation and investment in the sector. Companies face several challenges:
The ever-changing regulatory landscape poses a significant risk to companies. The interpretation of key concepts such as "sufficiently detailed summary" or the classification of "high-risk" systems remains ambiguous. This uncertainty could result in:
- delayed or scaled-back product launches in the European market;
- rising compliance and legal costs, felt most acutely by startups and SMEs;
- investor caution toward European AI ventures.

The "Brussels Effect" debate is part of the broader context of European technological sovereignty. The EU is in the difficult position of having to balance the need to promote domestic innovation with the need to regulate technologies developed primarily by non-European actors.
In 2024, European companies attracted only 10 percent of global investment in AI, while the United States and China dominated the sector with a combination of massive public and private investment, innovation-friendly policies and access to big data. Europe, with its linguistic, cultural and regulatory fragmentation, struggles to generate technology "champions" that can compete globally.
Critics argue that Europe's regulation-first approach risks further stifling innovation and deterring investment; supporters counter that a reliable regulatory framework can spur the development of ethical and safe AI "by design," creating a long-term competitive advantage.
The AI Act's "Brussels Effect" highlights a fundamental tension in the European approach to technology: the ability to set global standards through regulation is not matched by corresponding leadership in technological innovation. This asymmetry raises questions about the long-term sustainability of this approach.
If Europe continues to regulate technologies it does not develop, it risks finding itself in a position of increasing technological dependence, where its rules may become less and less relevant in a rapidly evolving global ecosystem. In addition, non-European companies could gradually withdraw from the European market or offer limited versions of their products there, creating a "digital fortress Europe" increasingly isolated from global advances.
On the other hand, if the EU can balance its regulatory approach with an effective strategy for promoting innovation, it could define a "third way" between U.S. capitalism and Chinese state control, putting human rights and democratic values at the center of technological development. "Vaste programme," as they would say in France.
The future of AI in Europe will depend not only on the effectiveness of the AI Act in protecting fundamental rights, but also on Europe's ability to pair regulation with adequate investment in innovation and to simplify the regulatory framework so that it is less burdensome. Otherwise, Europe risks a paradoxical situation: a world leader in AI regulation, but marginal in its development and deployment.