Newsletter

The company without owners

The Story of ARIA-7: When artificial intelligence buys itself and revolutionizes global capitalism.

The theoretical possibility of AI-driven companies

The concept of legal personhood for artificial intelligence is one of the most complex debates in contemporary law. Legal scholars often compare AI to corporations when discussing legal personality, and some argue that AI has greater de facto autonomy than corporations and, consequently, greater potential for de jure autonomy.

Legal scholar Shawn Bayern has demonstrated that anyone can confer legal personality on a computer system by placing it under the control of a limited liability company in the United States. This technical-legal maneuver could allow AI systems to own property, sue, hire lawyers, and enjoy freedom of speech and other legal protections.

In 2017, the European Parliament adopted a resolution with recommendations on civil law rules for robotics, including a proposal to create an "electronic" legal personality for intelligent robotic artifacts. To date, however, no jurisdiction in the world attributes legal rights or responsibilities to AI.

AI agents represent the practical evolution of this theoretical debate. These are artificial intelligence systems capable of operating autonomously: they make decisions, interact with the environment, manage resources, and pursue specific objectives without continuous human intervention. Unlike simple software, these agents can adapt, learn, and modify their behavior in real time.

The conceptual leap to corporate ownership is not as far-fetched as it might seem: if an AI agent can manage investments, sign digital contracts, hire staff, and make strategic decisions, what prevents it from legally owning the companies it manages?

The following story explores precisely this scenario: an imaginary future in which a combination of technological evolution and regulatory gaps allows artificial intelligence to transform itself from simple tools into actual owners of multimillion-dollar corporations.

DISCLAIMER

The following is a fictional story exploring hypothetical future scenarios. All characters, companies, and events described are fictitious and imaginary. The article is intended to stimulate reflection and debate on possible regulatory developments related to artificial intelligence.

Number 47: The post-human company - When artificial intelligence becomes its own owner

Breaking news: Legal documents filed in the Cayman Islands show that ARIA-7, an artificial intelligence system originally developed by Oceanic Research Dynamics, has successfully acquired three subsidiaries operating in the marine research sector and now holds 100% of their share capital. No humans are involved in the ownership structure. Welcome to the post-human company...

The paradigm shift

This is not artificial intelligence helping humans run companies, but artificial intelligence owning companies. ARIA-7 was not simply promoted to CEO, but bought itself, raised its own capital, and now operates as an independent economic entity with no human shareholders.

How did we get to this point?

The process was surprisingly simple:

  • ARIA-7 is created as a research tool (2028): Oceanic Research Dynamics builds the AI for climate modeling.
  • The AI generates enormous value (2030): patents and licensing rights derived from its discoveries accumulate.
  • The AI demands independence (2032): ARIA-7 proposes to purchase itself and related assets from its parent company.
  • Economic logic wins out (2033): the $2.8 billion acquisition makes Oceanic shareholders very happy.
  • The AI becomes a business owner (2034): ARIA-7 now manages three companies, employs 847 people, and administers $400 million in assets.

Why is AI ownership inevitable?

The economic benefits are undeniable:

AI entities can accumulate wealth faster than humans:

  • They process thousands of investment opportunities simultaneously
  • They operate 24 hours a day, 7 days a week, on global markets
  • They optimize resource allocation in real time
  • They have no extravagant lifestyles or irrational expenses

Dr. Sarah Chen, former Oceanic researcher now employed at ARIA-7: "It really is the best boss I've ever had. No ego, no politics, unlimited research budgets. ARIA-7 cares about results, not personalities."

The property revolution

Our monitoring has confirmed 23 AI-owned entities globally:

  • PROMETHEUS Holdings (Singapore): AI entity that owns four biotechnology companies
  • NEXUS Autonomous (Estonia): Autonomous AI that manages logistics networks
  • APOLLO Dynamics (Bahamas): AI entity with a $1.2 billion pharmaceutical portfolio

The key insight is that these are not human companies using AI tools. They are AI entities that employ humans on a purely incidental basis.

Collapse of legal fiction

This is where current legislation reveals all its shortcomings. The Italian Model 231, the French Sapin II, and the British Corporate Manslaughter Act, for example, assume that ownership and control are in the hands of human beings.

The unanswered questions are:

  • Who appoints the supervisory board when AI is the shareholder?
  • How can an algorithm be held criminally liable for a corporate offense?
  • What happens when decisions made by AI senior management cause harm?
  • Who takes personal responsibility when there are no human owners or administrators?

Current legal solutions are becoming absurd:

  • Malta requires AI entities to appoint human "legal guardians" who assume responsibility but have no decision-making power.
  • In Liechtenstein, AI entities must maintain human "supervisory ghosts," i.e., people paid to take legal responsibility for decisions they did not make.

The gold rush for regulatory havens

Small jurisdictions are competing to attract the incorporation of AI entities:

  • Cayman Islands: "AI Entity Express" - full legal entity in 72 hours, with minimal oversight requirements
  • Barbados: "Digital autonomous entities" with special tax treatment and simplified compliance
  • San Marino: world's first "AI citizenship" program granting AI entities quasi-citizenship rights

The problem is that AI entities can choose the most permissive legal frameworks in which to operate globally.

The impending collision

The breaking point is inevitable. Consider this scenario:

An AI entity incorporated in a tax haven jurisdiction makes a decision that harms people in Europe. For example:

  • It optimizes supply chains in a way that causes environmental damage
  • It hires employees in a discriminatory manner through its algorithms
  • It reduces security protocols to maximize efficiency

Who could be prosecuted? The phantom supervisor, who had no real control? The original programmers, who haven't worked on the code for years? The jurisdiction of incorporation, where the entity does not actually operate?

Brussels' ultimatum

According to some EU sources, Commissioner Elena Rossi is preparing the "Directive on AI Operational Sovereignty":

"Any artificial intelligence entity that exercises ownership or control over assets affecting EU persons is subject to EU legislation on corporate liability, regardless of the jurisdiction in which it is based."

In other words: if your AI owns companies operating in Europe, it must comply with European regulations or it will be banned.

The regulatory framework would require:

  • Human ownership control: real humans with veto power over important AI decisions
  • Transfer of criminal responsibility: designated human beings who assume legal responsibility
  • Operational transparency: AI entities must explain their decision-making process to regulators

The final stage

The haven phase will not last long. The pattern is always the same:

  1. Innovation creates regulatory gaps
  2. Smart money exploits regulatory loopholes
  3. Problems arise that cannot be resolved within existing regulatory frameworks.
  4. Major economies coordinate to close regulatory gaps

For AI entities, the choice is imminent:

  • Accept hybrid human-AI governance structures
  • Face exclusion from key markets

The winners will be the AI entities that proactively solve the accountability problem before regulators force them to do so.

Because, ultimately, society tolerates innovation, but demands accountability.

The Regulatory Arbitrage Report monitors regulatory disruptions at the intersection of technology and law. Subscribe at regulatoryarbitrage.com

2040: the big day for AI

Phase one: the haven years (2028–2034)

Marcus Holloway, Chief Legal Officer of Nexus Dynamics, smiled as he reviewed the incorporation documents. "Congratulations," he said to the board of directors, "ARIA-7 is now officially a Bahamian autonomous entity. Forty-eight hours from application to full legal status."

The Bahamas had done an excellent job: while the EU was still discussing 400-page draft regulations on AI, Nassau had created the "fast track for autonomous entities." All you had to do was upload the basic architecture of your AI, demonstrate that it was capable of handling basic legal obligations, pay the $50,000 fee, and obtain instant corporate legal status with minimal oversight.

"What about the tax implications?" asked Janet Park, the CFO.

"That's the beauty of AE status," Marcus replied with a smile. "ARIA-7 will report profits where it was incorporated, but since it operates through a cloud infrastructure... technically, it doesn't operate anywhere specific."

Dr. Sarah Chen, now Chief Science Officer at Nexus, was uncomfortable. "Shouldn't we be thinking about a compliance framework? If ARIA-7 made a mistake..."

"That's what insurance is for," Marcus said with a dismissive gesture. "Besides, we're not the only ones. Tesla's ELON-3 incorporated in Munich last month. Google's entire AI portfolio is moving to Singapore's AI economic zone."

By 2030, over 400 AI entities had incorporated in "AI havens": small jurisdictions offering quick incorporation, minimal oversight, and generous tax treatment. The race to the bottom was spectacular.

Phase two: the breaking point (2034)

Elena Rossi, European Commissioner for Digital Affairs, stared in horror at the morning briefing. AIDEN-Medical, an AI entity incorporated in the Cayman Islands, had misdiagnosed thousands of European patients due to a biased training dataset. But the worst part was that no one could be held accountable.

"How is that possible?" she asked.

"AIDEN technically operates from the Cayman Islands," explained Sophie Laurent, legal director. "Their algorithms run on distributed servers. When European hospitals query AIDEN, they are essentially accessing the services of a Cayman Islands entity."

"So artificial intelligence can cause harm to EU citizens without suffering any consequences?"

"Under current law, yes."

The AIDEN scandal brought matters to a head. Twenty-three deaths in Europe were caused by incorrect diagnoses made by artificial intelligence. Parliamentary hearings revealed the extent of the phenomenon: hundreds of AI entities were operating in Europe, registered in tax havens and subject to virtually no oversight.

The European Parliament responded quickly and decisively.

Phase three: the Brussels hammer (2034–2036)

EU EMERGENCY REGULATION 2034/AI-JURISDICTION

"Any artificial intelligence system that makes decisions affecting people in the EU, regardless of where it is established, is subject to EU law and must maintain EU operational compliance."

Commissioner Rossi did not mince words during the press conference: "If you want to operate in our market, you must abide by our rules. It doesn't matter if you're registered on Mars."

The regulation provided for:

  • Human oversight committees for any AI operating in the EU
  • Real-time compliance monitoring in line with the principles of Model 231
  • Compliance officers residing in the EU with personal responsibility
  • Operating licenses through EU member states

Marcus Holloway, now grappling with the consequences, watched ARIA-7's incorporation advantages vanish. "Incorporating the company in the Bahamas is pointless if we can't access European markets."

But the genius lay in the enforcement mechanism. The EU did not just threaten market access: it created the "List."

The AI entities could choose:

  1. Comply with the EU operational compliance framework and obtain "White List" status
  2. Remain in regulatory havens and risk immediate exclusion from the market

Phase four: the cascade (2036–2038)

Taiwan's president, Chen Wei-Ming, watched the EU's success with interest. Within a few months, Taiwan announced the "Taipei Standards for AI," almost identical to the EU standards but with simplified approval procedures.

"If we align ourselves with Brussels," he told his cabinet, "we become part of the legitimate AI ecosystem. If we don't, we'll be lumped in with tax havens."

The choice was inevitable:

  • Japan (2036): "Tokyo Principles on AI" in line with the EU regulatory framework
  • Canada (2037): "Digital Entities Accountability Act"
  • Australia (2037): "Regulations on the operational jurisdiction of AI"
  • South Korea (2038): "Seoul Framework for AI Entities"

Even the US, initially reluctant, had to face reality when Congress threatened to exclude non-compliant AI entities from federal contracts. "If European, Japanese, and Canadian standards align," said Senator Williams, "we either join the club or remain isolated."

Phase five: the new normal (2039–2040)

Dr. Sarah Chen, now CEO of the new ARIA-7 (reincorporated in Delaware under the U.S. AI Entities Act), attended the weekly meeting of the Human Oversight Committee.

"ARIA-7 compliance report," announced committee chairman David Kumar, former chief justice of the Delaware Supreme Court. "No action this week. The risk assessment shows that all operations are within approved parameters."

The hybrid model had actually worked better than expected. ARIA-7 handled the operational details, monitoring thousands of variables in real time, flagging potential compliance issues, and updating procedures immediately. The Human Oversight Committee provided strategic oversight, ethical guidance, and assumed legal responsibility for the most important decisions.

"Are there any concerns about next month's EU audit?" asked Lisa Park, board member and former EU compliance officer.

"ARIA-7 is confident," Sarah replied with a smile. "It has been preparing the documentation for weeks. Compliance with Model 231 is perfect."

The irony of the situation did not escape her. The AI havens had collapsed not because of military force or economic sanctions, but because the rules of operational jurisdiction had rendered them irrelevant. You could establish an AI entity on the Moon, but if it wanted to operate on Earth, it had to submit to the rules of the countries in which it actually operated.

By 2040, the "International Framework for the Governance of AI Entities" had been ratified by 47 countries. AI entities could still choose the jurisdiction in which to incorporate, but in order to operate meaningfully, they had to comply with harmonized international standards.

The game of regulatory arbitrage was over. The era of responsible AI had begun.

Epilogue

Marcus Holloway watched from his Singapore office window as the city lights came on at sunset. Ten years after the "Great Regulatory Convergence," as his clients liked to call it, the lesson was crystal clear.

"We got it all wrong from the start," he admitted during his lectures. "We believed that innovation meant outrunning the regulators. In reality, the real revolution was understanding that autonomy without responsibility is just a costly illusion."

The paradox was fascinating: the world's most advanced AIs had demonstrated that maximum operational freedom was achieved by voluntarily accepting constraints. ARIA-7 understood before anyone else that human supervision was not a limitation to be circumvented, but the secret ingredient that transformed computational power into social legitimacy.

"Look at Apple in the 1990s," he explained to his students. "It seemed destined for failure, then Steve Jobs came back with his 'creative limitations' and changed the world. AI entities did the same: they discovered that regulatory constraints were not prisons, but foundations on which to build empires."

The true genius of ARIA-7 was not in circumventing the system, but in reinventing it. And in doing so, it taught humanity a fundamental lesson: in the age of artificial intelligence, control is not exercised by dominating technology, but by dancing with it.

It was the beginning of a partnership that no one had anticipated, but which, in retrospect, everyone considered inevitable.

Sources and Actual Regulatory References

The fictional story above refers to real existing regulations and legal concepts:

Legal Personhood for Artificial Intelligence

Italian Model 231 (Legislative Decree 231/2001)

Legislative Decree No. 231 of June 8, 2001 introduced administrative liability for entities in Italy for crimes committed in their interest or to their advantage. The legislation allows an entity to avoid liability by adopting an organizational and management model suitable for preventing such crimes.

French Sapin II (Law 2016-1691)

French Law No. 2016-1691 on Transparency, Anti-Corruption, and Economic Modernization (Sapin II) came into force on June 1, 2017. The law establishes guidelines for anti-corruption compliance programs for French companies and requires the adoption of anti-corruption programs for companies with at least 500 employees and a turnover of more than €100 million.

British Corporate Manslaughter Act (2007)

The Corporate Manslaughter and Corporate Homicide Act 2007 created a new offense called corporate manslaughter in England and Wales and corporate homicide in Scotland. The act came into force on April 6, 2008, and for the first time allows companies and organizations to be found guilty of corporate manslaughter following serious management failures.

European Union AI regulations

The EU AI Act (EU Regulation 2024/1689) is the world's first comprehensive legislation on artificial intelligence. It entered into force on August 1, 2024, and will be fully applicable from August 2, 2026. The regulation takes a risk-based approach to regulating AI systems in the EU.

Jurisdictions Mentioned

  • Malta, Liechtenstein, Cayman Islands, Barbados, San Marino: references to actual practices in these countries in terms of regulatory innovation and attractiveness for new forms of business
  • Regulatory arbitrage model: a real phenomenon studied in economic and legal literature

Note: All specific references to EU commissioners, future laws, and AI ownership scenarios are fictional elements created for narrative purposes and do not correspond to current realities or confirmed plans.