
Regulating what is not created: does Europe risk technological irrelevance?

**TITLE: European AI Act - The Paradox of Regulating What It Doesn't Develop** **SUMMARY:** Europe attracts only one-tenth of global investment in artificial intelligence but claims to dictate global rules. This is the "Brussels Effect": imposing regulations on a planetary scale through market power without driving innovation. The AI Act takes effect on a staggered timetable running to 2027, but multinational tech companies are responding with creative evasion strategies: invoking trade secrets to avoid revealing training data, producing technically compliant but incomprehensible summaries, using self-assessment to downgrade systems from "high risk" to "minimal risk," and forum shopping among member states with less stringent controls. The extraterritorial copyright paradox: the EU demands that OpenAI comply with European law even for training conducted outside Europe, a principle never before seen in international law. The "dual model" emerges: limited European versions vs. advanced global versions of the same AI products. The real risk: Europe becomes a "digital fortress" isolated from global innovation, with European citizens accessing inferior technologies. The Court of Justice has already rejected the "trade secrets" defense in the credit scoring case, but interpretive uncertainty remains huge: what exactly does "sufficiently detailed summary" mean? No one knows. The final unresolved question: is the EU creating an ethical third way between U.S. capitalism and Chinese state control, or simply exporting bureaucracy to an industry in which it does not compete? For now: world leader in AI regulation, marginal in its development. Vaste programme.
Fabio Lauria
CEO & Founder of Electe

The European AI Act: between transparency and corporate avoidance strategies

The European Union has taken a historic step with the entry into force of the AI Act, the world's first comprehensive legislation on artificial intelligence. This groundbreaking act, which puts Europe at the forefront of AI governance, establishes a risk-based regulatory framework that aims to balance innovation and the protection of fundamental rights. However, the regulation also represents yet another manifestation of the so-called "Brussels Effect" - the EU's tendency to impose its regulations on a global scale through the power of its market, without necessarily driving technological innovation.

While the United States and China lead AI development with massive public and private investment (45 percent and 30 percent of global investment in 2024, respectively), Europe has attracted only 10 percent of global AI investment. In response, the EU seeks to compensate for its technological lag through regulation, imposing standards that end up affecting the entire global ecosystem.

The central question is: Is Europe creating an environment that promotes responsible innovation or is it simply exporting bureaucracy into an industry where it cannot compete?

The extraterritorial dimension of European regulation

The AI Act applies not only to European companies but also to those that operate in the European market or whose AI systems affect EU citizens. This extraterritorial reach is particularly evident in the provisions on general-purpose AI (GPAI) models, where Recital 106 of the Act states that providers must respect EU copyright "regardless of the jurisdiction in which the training of the models takes place."

This approach has been strongly criticized by some observers, who see it as an attempt by the EU to impose its standards on companies that are not based in its territory. Critics say this could create a rift in the global technology ecosystem, with companies forced to develop separate versions of their products for the European market or adopt European standards for all markets to avoid additional compliance costs.

Multinational technology companies are thus in a difficult position: ignoring the European market is not a viable option, but complying with the AI Act requires significant investment and could limit opportunities for innovation. This effect is further amplified by the ambitious implementation timeline and the interpretive uncertainty of many provisions.

The implementation timetable and regulatory framework

The AI Act went into effect on August 1, 2024, but its implementation will follow a staggered schedule:

  • February 2, 2025: Bans on AI systems posing unacceptable risks (such as government social scoring) take effect, along with AI literacy requirements
  • May 2, 2025: Deadline for finalizing the Code of Practice for GPAI models
  • August 2, 2025: Obligations for GPAI models apply; governance bodies and national reporting authorities must be in place
  • August 2, 2026: Full application of the provisions on high-risk systems and transparency requirements
  • August 2, 2027: Obligations extend to high-risk systems embedded in products covered by EU product safety legislation

The regulation takes a risk-based approach, classifying AI systems into four categories: unacceptable risk (banned), high risk (subject to strict requirements), limited risk (subject to transparency obligations), and minimal or no risk (free use). This categorization determines the specific obligations that fall on providers, deployers, and users.
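
To make the four-tier logic concrete, below is a minimal sketch in Python of how classification maps to obligations. The tier names follow the Act; the obligation lists are abbreviated paraphrases for illustration, not the legal text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g., government social scoring)
    HIGH = "high"                  # strict requirements before market entry
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # free use

# Abbreviated paraphrases of the obligations each tier triggers (not legal text).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: [
        "conformity assessment before deployment",
        "technical documentation and logging",
        "risk management system and human oversight",
    ],
    RiskTier.LIMITED: ["disclose to users that they are interacting with an AI system"],
    RiskTier.MINIMAL: ["no specific obligations under the Act"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the (paraphrased) obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```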

The new transparency provisions: an obstacle to innovation?

One of the most significant innovations of the AI Act concerns its transparency obligations, which aim to open up the "black box" of AI systems. These obligations include:

  • A requirement for GPAI model providers to publish a "sufficiently detailed summary" of training data, making it easier for copyright holders and other interested parties to monitor compliance (see the sketch after this list)
  • The need for systems that interact with humans to inform users that they are communicating with an AI system
  • A requirement to clearly label AI-generated or edited content (such as deepfakes)
  • The implementation of comprehensive technical documentation for high-risk systems
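
The first of these obligations is the most contested, because the required level of detail is still undefined. The gap between a vacuous disclosure and a meaningful one can be sketched as data; every field name below is a hypothetical illustration, not the official AI Office template.

```python
# A disclosure that is arguably compliant on paper but useless in practice.
vague_summary = {
    "data_sources": ["publicly available text"],
}

# The level of detail a rights holder would need to actually enforce a claim.
# All field names are hypothetical illustrations, not the official template.
detailed_summary = {
    "data_sources": [
        {"category": "web crawl", "share_pct": 62, "opt_outs_honored": True},
        {"category": "licensed news archive", "share_pct": 11, "licensor": "undisclosed"},
        {"category": "public-domain books", "share_pct": 9},
    ],
    "collection_period": "2019-2024",
    "rights_reservation_policy": "DSM Directive Art. 4 opt-outs filtered at crawl time",
}

# Both are "summaries"; only the second gives a copyright holder something to act on.
print(len(vague_summary["data_sources"]), len(detailed_summary["data_sources"]))
```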

These requirements, although designed to protect the rights of citizens, could place a significant burden on companies, particularly innovative startups and SMEs. The need for detailed documentation of development processes, training data, and decision-making logic could slow innovation cycles and increase development costs, putting European companies at a disadvantage compared to competitors in other regions with less stringent regulations.

Case studies: evasion in practice

Credit scoring and automated decision-making processes

The ruling in Case C-203/22 (Dun & Bradstreet Austria) highlights how companies initially resist transparency mandates. The defendant, a credit-scoring provider, argued that revealing the logic of its algorithm would expose trade secrets, jeopardizing its competitive advantage [6]. The CJEU rejected this argument, holding that the GDPR's provisions on automated decision-making (Articles 15 and 22) entitle individuals to an explanation of the "criteria and logic" behind such decisions, even in simplified form [6].

Generative AI and copyright evasion

Under the AI Act's two-tier system for GPAI, most generative AI models fall into the first tier, which requires compliance with EU copyright law and publication of training data summaries [2]. To limit exposure to infringement claims, companies such as OpenAI have shifted toward summary-level disclosures and licensed content, but gaps in documentation persist.

The implications for copyright law: Europe dictates globally

The AI Act contains specific copyright-related provisions that extend the EU's regulatory influence far beyond its borders. GPAI model providers must:

  • Comply with the reservation of rights established by the Digital Single Market Directive (2019/790)
  • Provide a detailed summary of the content used for training, balancing the protection of trade secrets against copyright holders' ability to enforce their rights

Recital 106 of the AI Act states that providers must respect EU copyright law, "regardless of the jurisdiction in which model training takes place." This extraterritorial approach raises questions about compatibility with copyright territoriality principles and could create regulatory conflicts with other jurisdictions.

Business strategies: evasion or compliance with the "Brussels Effect"?

For global technology companies, the AI Act presents a key strategic choice: adapt to the "Brussels Effect" and comply with European standards globally, or develop differentiated approaches for different markets? Several strategies have emerged:

Evasion and mitigation strategies

  1. Trade secrets shield: Many companies seek to limit disclosure by invoking trade secret protections under the EU Trade Secrets Directive, arguing that detailed disclosure of training data or model architectures would expose proprietary information and undermine their competitiveness. This argument conflates the Act's requirement for a summary with full disclosure.
  2. Technical complexity as a defense: The inherently complex nature of modern AI systems offers another avenue for mitigation. Companies produce technically compliant but overly verbose or jargon-filled summaries that formally meet legal requirements without allowing meaningful scrutiny. For example, a training data summary might list broad categories (e.g., "publicly available text") without naming sources, proportions, or collection methods.
  3. The self-assessment loophole: Changes to Article 6 of the AI Act introduce a self-assessment mechanism that allows providers to exempt their systems from the high-risk category if they deem the risk negligible, granting companies unilateral authority to sidestep strict compliance requirements (see the sketch after this list).
  4. Regulatory forum shopping: The AI Act delegates enforcement to national market surveillance authorities, leading to potential disparities in rigor and competence. Some companies are strategically locating their European operations in member states with more lax approaches to enforcement or fewer resources for oversight.
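
The third strategy is easiest to see as decision logic. What follows is a deliberately crude sketch of the incentive structure created by the Article 6 self-assessment mechanism, not of the Act's actual legal test; the function name and parameters are invented for the illustration.

```python
def self_assessed_tier(in_annex_iii_area: bool, provider_deems_risk_significant: bool) -> str:
    """Crude sketch of the Article 6 self-assessment loophole: the party that
    bears the compliance cost is also the party that supplies the risk judgment."""
    if in_annex_iii_area and provider_deems_risk_significant:
        return "high"    # conformity assessment, documentation, oversight...
    return "minimal"     # unilateral downgrade: strict requirements avoided

# The provider controls the second argument, so the incentive points one way:
print(self_assessed_tier(in_annex_iii_area=True, provider_deems_risk_significant=False))
# -> minimal
```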

The "dual model" as a response to the Brussels Effect

Some large technology companies are developing a "dual model" of operation:

  1. "EU-compliant" versions of their AI products with limited functionality but fully compliant with the AI Act
  2. More advanced "global" versions available in markets with less stringent regulations

This approach, although costly, makes it possible to maintain a European market presence without compromising innovation globally. However, this fragmentation could lead to a widening technology gap, with European users having access to less advanced technologies than those in other regions.
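
In engineering terms, the dual model often amounts to little more than region-gated feature flags. Below is a minimal sketch, with invented feature names, assuming a provider ships one codebase and degrades the EU build:

```python
# Illustrative region-gated feature flags for a "dual model" deployment.
# Feature names are invented for the example.
FEATURE_FLAGS = {
    "global": {"web_browsing": True, "voice_cloning": True, "emotion_inference": True},
    # EU build: capabilities likely to trigger prohibitions or high-risk
    # obligations under the AI Act are switched off.
    "eu": {"web_browsing": True, "voice_cloning": False, "emotion_inference": False},
}

def build_config(market: str) -> dict[str, bool]:
    """Select the feature set for a deployment region (unknown markets get the EU build)."""
    return FEATURE_FLAGS.get(market, FEATURE_FLAGS["eu"])

print(build_config("eu"))  # voice_cloning and emotion_inference disabled
```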

Regulatory uncertainty as an obstacle to European innovation

The European AI Act represents a turning point in AI regulation, but its complexity and interpretative ambiguities generate a climate of uncertainty that could negatively affect innovation and investment in the sector. Companies face several challenges:

Regulatory uncertainty as a business risk

The ever-changing regulatory landscape poses a significant risk to companies. The interpretation of key concepts such as "sufficiently detailed summary" or the classification of "high risk" systems remains ambiguous. This uncertainty could result in:

  1. Unpredictable compliance costs: Companies must devote significant resources to compliance without having full certainty about the final requirements.
  2. Prudent market strategies: Regulatory uncertainty could lead to more conservative investment decisions and delays in the development of new technologies, particularly in Europe.
  3. Fragmentation of the European digital market: Uneven interpretation of rules across member states risks creating a regulatory patchwork that is difficult for businesses to navigate.
  4. Asymmetric global competition: European companies may find themselves operating under more stringent constraints than competitors in other regions, affecting their global competitiveness.

The innovation gap and technological sovereignty

The "Brussels Effect" debate is part of the broader context of European technological sovereignty. The EU is in the difficult position of having to balance the need to promote domestic innovation with the need to regulate technologies developed primarily by non-European actors.

In 2024, European companies attracted only 10 percent of global investment in AI, while the United States and China dominated the sector with a combination of massive public and private investment, innovation-friendly policies and access to big data. Europe, with its linguistic, cultural and regulatory fragmentation, struggles to generate technology "champions" that can compete globally.

Critics argue that the European regulatory-focused approach risks further stifling innovation and deterring investment, while supporters believe that creating a reliable regulatory framework can actually spur the development of ethical and safe AI "by design," creating a long-term competitive advantage.

Conclusion: regulation without innovation?

The AI Act's "Brussels Effect" highlights a fundamental tension in the European approach to technology: the ability to set global standards through regulation is not matched by corresponding leadership in technological innovation. This asymmetry raises questions about the long-term sustainability of this approach.

If Europe continues to regulate technologies it does not develop, it risks finding itself in a position of increasing technological dependence, where its rules may become less and less relevant in a rapidly evolving global ecosystem. In addition, non-European companies could gradually withdraw from the European market or offer limited versions of their products there, creating a "digital fortress Europe" increasingly isolated from global advances.

On the other hand, if the EU can balance its regulatory approach with an effective strategy for promoting innovation, it could genuinely define a "third way" between U.S. capitalism and Chinese state control, putting human rights and democratic values at the center of technological development. "Vaste programme," as they would say in France.

The future of AI in Europe will depend not only on the effectiveness of the AI Act in protecting fundamental rights, but also on Europe's ability to accompany regulation with adequate investment in innovation and to simplify the regulatory framework to make it less oppressive. Otherwise, Europe risks finding itself in a paradoxical situation: a world leader in AI regulation, but marginal in its development and implementation.

References and sources

  1. European Commission. (2024). "Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (AI Act)." Official Journal of the European Union.
  2. European AI Office. (2025, April). "Preliminary guidance on obligations for GPAI model providers." European Commission.
  3. Court of Justice of the European Union. (2025, February). "Judgment in Case C-203/22 Dun & Bradstreet Austria." CJEU.
  4. Warso, Z., & Gahntz, M. (2024, December). "How the EU AI Act Can Increase Transparency Around AI Training Data." TechPolicy.Press. https://www.techpolicy.press/how-the-eu-ai-act-can-increase-transparency-around-ai-training-data/
  5. Wachter, S. (2024). "Limitations and Loopholes in the EU AI Act and AI Liability Directives." Yale Journal of Law & Technology, 26(3). https://yjolt.org/limitations-and-loopholes-eu-ai-act-and-ai-liability-directives-what-means-european-union-united
  6. European Digital Rights (EDRi). (2023, September). "EU legislators must close dangerous loophole in AI Act." https://www.amnesty.eu/news/eu-legislators-must-close-dangerous-loophole-in-ai-act/
  7. Future of Life Institute. (2025). "AI Act Compliance Checker." https://artificialintelligenceact.eu/assessment/eu-ai-act-compliance-checker/
  8. Dumont, D. (2025, February). "Understanding the AI Act and its compliance challenges." Help Net Security. https://www.helpnetsecurity.com/2025/02/28/david-dumont-hunton-andrews-kurth-eu-ai-act-compliance/
  9. Guadamuz, A. (2025). "The EU's Artificial Intelligence Act and copyright." The Journal of World Intellectual Property. https://onlinelibrary.wiley.com/doi/full/10.1111/jwip.12330
  10. White & Case LLP. (2024, July). "Long awaited EU AI Act becomes law after publication in the EU's Official Journal." https://www.whitecase.com/insight-alert/long-awaited-eu-ai-act-becomes-law-after-publication-eus-official-journal

Data science has turned the paradigm on its head: outliers are no longer "errors to be eliminated" but valuable information to be understood. A single outlier can completely distort a linear regression model-change the slope from 2 to 10-but eliminating it could mean losing the most important signal in the dataset. Machine learning introduces sophisticated tools: Isolation Forest isolates outliers by building random decision trees, Local Outlier Factor analyzes local density, Autoencoders reconstruct normal data and report what they cannot reproduce. There are global outliers (temperature -10°C in tropics), contextual outliers (spending €1,000 in poor neighborhood), collective outliers (synchronized spikes traffic network indicating attack). Parallel with Gladwell: the "10,000 hour rule" is disputed-Paul McCartney dixit "many bands have done 10,000 hours in Hamburg without success, theory not infallible." Asian math success is not genetic but cultural: Chinese number system more intuitive, rice cultivation requires constant improvement vs Western agriculture territorial expansion. Real applications: UK banks recover 18% potential losses via real-time anomaly detection, manufacturing detects microscopic defects that human inspection would miss, healthcare valid clinical trials data with 85%+ sensitivity anomaly detection. Final lesson: as data science moves from eliminating outliers to understanding them, we must see unconventional careers not as anomalies to be corrected but as valuable trajectories to be studied.