📧 **Updated Newsletter Article**
*This article was originally published in our weekly newsletter and subsequently updated with developments in 2025, including the landmark cases Bartz v. Anthropic, Kadrey v. Meta, Disney v. Midjourney, and Thomson Reuters v. Ross Intelligence.*
**Last Update:** July 2025
The intersection of artificial intelligence and copyright law has become one of the most complex and rapidly evolving fields in the modern legal landscape. The year 2025 marked a historic turning point with the first substantive rulings that are redefining how AI-generated content is treated from a copyright perspective.
The Historic Judgments of 2025: A Fragmented Jurisprudence
The Devastating Precedent: Thomson Reuters v. Ross Intelligence
February 11, 2025 marked a watershed date in AI law when Judge Stephanos Bibas issued the first ruling categorically rejecting the fair use defense in AI training.
In Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc., the court ruled as a matter of law that using copyrighted headnotes to train an AI system does not constitute fair use.
The heart of the decision: Ross Intelligence had used Westlaw's headnotes (Thomson Reuters' proprietary summaries of judicial opinions) to train its own competing AI legal search engine. The court emphasized that Ross was creating a direct "market substitute" for Westlaw, a factor that weighed decisively against fair use.
As Judge Bibas wrote, "The public has no right to Thomson Reuters' legal analysis. Copyrights encourage people to develop things that help society, like good legal research tools."
The Twin Judgments of June 2025: A Legal Paradox
Just two days apart, in June 2025, two federal courts in California issued seemingly contradictory decisions that shook up the AI industry.
Bartz v. Anthropic (June 23, 2025): Judge William Alsup ruled that Claude's training on legally purchased books constituted fair use, calling the process "spectacularly transformative." However, he held Anthropic liable for downloading more than 7 million books from pirate sites such as LibGen and Pirate Library Mirror, ruling that this illegal acquisition was not protected by fair use. The decision draws a crucial distinction: training can be fair use, but only when the materials are obtained legally.
Kadrey v. Meta (June 25, 2025): Judge Vince Chhabria ruled that training LLaMA on authors' books constituted fair use, but for different reasons than in Anthropic's case. The authors (including Sarah Silverman and Ta-Nehisi Coates) failed to prove that Meta's AI was actually displacing their works in the marketplace or causing them concrete economic harm. In his decision, Judge Chhabria implicitly criticized Judge Alsup's emphasis on the "transformative" nature of AI, stressing instead that the crucial factor should be evidence of actual economic harm.
Hollywood Enters the Battle: Disney and Universal v. Midjourney
June 2025 also saw the entry of Hollywood giants into the AI-copyright legal war. Disney and Universal filed suit against Midjourney, marking the first time Hollywood majors have sued an AI company for copyright infringement.
The Weight of Giants: The 110-page lawsuit accuses Midjourney of stealing "countless" copyrighted works to train its software, including iconic characters such as Darth Vader, Homer Simpson and Shrek. As TIME reported, the importance of this case lies in the size, influence and resources of Disney and Universal: "The more these pillars of the American economy get into the fight, the harder it becomes to ignore the simple truth here."
"Virtual Vending Machine": The lawsuit describes Midjourney as a "virtual vending machine" generating "endless, unauthorized copies" of Disney and Universal's works. With over 20 million registered users and $300 million in revenue in 2024, Midjourney is one of the largest AI image generators in the world.
Andersen v. Stability AI: The Evolution Continues
The artists' group led by Sarah Andersen won a significant victory when Judge William Orrick allowed their copyright infringement claims to proceed against companies such as Stability AI and Midjourney. The artists allege that these companies illegally stored copies of their artwork in training datasets without consent or compensation.
The fundamental contradiction: This case highlights the inherent paradox of generative AI: models are designed to mimic human creativity, but they can only do so by consuming human works.
Adobe's Ethical Approach: Licensing vs. Fair Use
While other tech giants face copyright infringement lawsuits, Adobe has attempted to position itself as the "ethical" alternative with its Firefly AI. Adobe has built its marketing strategy and product differentiation around the concept of "commercially safe AI," trained primarily on images licensed from Adobe Stock and public domain content.
The Promise of Ethics: Adobe differentiated Firefly from competitors such as Midjourney and DALL-E by emphasizing that its model is trained only on licensed content, avoiding controversial Internet scraping. The company also implemented technologies such as Content Credentials to allow creators to add a "Do Not Train" tag to their work.
Complex Reality: However, revelations by Bloomberg in April 2024 showed that about 5% of Firefly's training dataset included images generated by competing AI systems, including Midjourney. Within Adobe Stock, 57 million images are explicitly labeled as AI-generated, or 14% of the total database.
Adobe's Defense: Adobe responded that all images in Adobe Stock, including those generated by AI, go through a "rigorous moderation process" to ensure that they do not include recognizable intellectual property, trademarks, or characters. The company argues that this approach remains more ethical than competitors who use completely unlicensed data.
The benefit to the end user: Adobe's approach results in the ability to use Firefly-generated content with less exposure to legal risks or copyright infringement. Even in an environment where contradictions and gray areas emerge, Adobe's commitment to transparency, content moderation, and respect for artists' rights is an added value.
The Jurisprudential Fragmentation of 2025
The year 2025 revealed a deeply divided jurisprudence, reflecting the inherent complexity of applying 20th-century laws to 21st-century technologies.
The Legal Acquisition Paradigm: All the rulings agree on a fundamental principle: the distinction between legal and illegal acquisition of training materials. Even when subsequent use might be fair use, downloading pirated materials remains illegal and can result in separate liability.
The Battle of the Fourth Factor: The decisions identified the fourth factor of fair use (market impact) as the new legal battleground. While Thomson Reuters won by demonstrating clear market substitution, the plaintiffs in Bartz and Kadrey failed to demonstrate concrete economic harm.
The problem of probatio diabolica: A procedural paradox emerges: how can authors prove market damages from AI systems when the impact is widespread and difficult to quantify? We are witnessing the emergence of a system in which protection depends on the ability to prove mathematically what is often intuitively obvious.
Actors Facing the Digital Abyss
The crisis of copyright in the age of AI particularly affects the world of acting, where the very identity of the performer is at the heart of the profession. The ability to clone likenesses, voices and acting styles is rapidly transforming the concept of "performance" from a unique creative act to a potential replicable template.
The dissolution of interpretation: When an actor can be digitally recreated, what is left of interpretive art? Studios have already demonstrated the ability to "resurrect" deceased actors and digitally manipulate existing interpretations. The key question is not so much whether it is technically possible, but whether it preserves the essence of what makes a performance meaningful.
The precedent of "Here": The film "Here," in which full digital recreations of Tom Hanks and Robin Wright were used for the lead roles, represents a model of authorized use. The production obtained explicit consent from, and compensated, the actors involved, setting a commercial precedent for consensual use. This highlights that the issue is not necessarily the technology itself, but the consent and compensation of the artists whose work and image are being used.
Disney's Agenda on Digital Replicas: Significantly, Disney is also among the supporters of the NO FAKES Act, the proposed federal legislation protecting actors' voices and likenesses from unauthorized AI replicas. This shows a coordinated strategy: protecting actors from unauthorized digital replicas while combating unauthorized use of existing intellectual property.
The paradox of inverse value: A peculiar economic phenomenon has emerged: the most famous actors with established careers (thus with ample material available for AI training) are paradoxically the most vulnerable to algorithmic substitution. Their very success makes them easy targets for unauthorized cloning, inverting the traditional artistic career value curve.
Europe as Normative Counterbalance: The AI Act in Action
While the United States navigates the maze of fair use, Europe has chosen a radically different approach with the AI Act, which went into effect in August 2024 and is now being actively implemented.
The Mandatory Transparency Revolution: The AI Act requires providers of general-purpose AI models to make public a "sufficiently detailed summary" of the data used for training, including copyrighted materials. In January 2025, the European Commission published a template to assist providers in preparing the required summary.
The Pillars of the AI Act:
- Transparency: Companies must disclose the sources of their training data
- Copyright Compliance: Obligation to comply with EU copyright laws, regardless of where the training takes place
- Opt-out: Obligation to respect opt-outs expressed by rights holders
The Extraterritorial Effect: The AI Act applies to any vendor who places an AI model on the EU market, "regardless of the jurisdiction in which the copyright-relevant acts take place." This creates potential conflicts with U.S. fair use jurisprudence.
The New U.S. Copyright Office Report (2025)
In January 2025, the U.S. Copyright Office released Part 2 of its report on AI, providing crucial clarifications on the protectability of AI-generated works.
The Confirmed Fundamental Principles:
- Only works with expressive elements determined by a human author can be protected by copyright
- Merely providing prompts is not sufficient for copyright protection
- AI assistance in creation does not automatically prevent protectability
- Fully AI-generated works cannot be copyrighted
The Myth of Originality Revisited: The report confirms how artificial the concept of "originality" is in modern copyright law. What really distinguishes an artist selecting from thousands of AI outputs from a programmer selecting from thousands of algorithms? The legal distinction seems more ideological than practical, yet it remains crucial in determining what can be copyrighted.
International Perspectives: The Global Divergence
China: A Beijing court in November 2023 recognized copyright protection for an AI-generated image as long as it demonstrates originality and reflects human intellectual effort. This contrasts with the more restrictive approach in the United States.
Czech Republic: In 2024, a Czech court issued the first European ruling on AI-generated copyright, refusing protection for an image created via prompts, aligning with the position of the U.S. Copyright Office.
Global Legislative Hypocrisy: Interestingly, Western legal systems refuse to grant rights to AI-generated works while simultaneously allowing human works to be "devoured" by these same systems. We are witnessing a double standard: human works are considered sacred when created, but expendable when consumed by AI.
The Fair Use Debate: The New Frontier
AI companies increasingly rely on the "transformative use" argument, but the 2025 judgments showed the limits of this strategy.
The Illusion of Transformation: The "transformative use" argument is proving to be a convenient legal fiction when applied at industrial scale. The truth is that AIs do not so much "transform" works as digest and recycle them. The courts are beginning to grasp this distinction, as the Thomson Reuters case demonstrated, when the commercial use is obvious and direct, but they still struggle to articulate exactly why human learning from protected works is acceptable while artificial learning is not.
The New Decisive Factors:
- Legal vs. illegal acquisition of training materials
- Direct market substitution vs. creation of new markets
- Concrete evidence of economic harm vs. theoretical harm
Liability Risks for End Users and Developers
The Andersen case raised the possibility that end users could be liable if AI outputs too closely resemble training data, but the 2025 rulings further complicated this landscape.
The Impossible Burden of Knowledge Updated: How can an end user know the content of training datasets containing billions of images, especially when the AI Act now requires transparency but U.S. vendors may not comply? We are creating a system where the average user risks penalties for violations they can neither anticipate nor avoid, in an inconsistent cross-border regulatory environment.
P.S. - The Frankenstein Paradox Updated: As with Dr. Frankenstein (who is the creator, not the creature, a common mistake among those who have not read Mary Shelley's novel), we find ourselves in an amplified paradox: the user of AI is treated as the "monster" responsible for violations, while the real "doctors" who created and trained these systems on others' data often escape legal consequences. The 2025 rulings show that even when companies are held accountable, it is often only for the most egregious conduct (such as Anthropic's piracy), not for the systematic use of protected materials. Further evidence of how cultural shallowness also shapes our interpretation of liability in the digital age.
Implications for Industry and Future Directions
The 2025 cases have accelerated demand for licensed training datasets. Major media companies are now negotiating revenue-sharing agreements that mirror the music industry's ASCAP/BMI model.
The Heterogenesis of Ends Confirmed: Paradoxically, lawsuits filed to protect individual creators are favoring large, structured companies that can afford complex licensing agreements. The 2025 rulings have shown that the ability to prove concrete economic damages (often beyond the means of individual creators) has become crucial to legal success. However, the entry of Disney and Universal changes the dynamics: these giants have both the resources to sustain lengthy legal battles and the influence to command media and political attention.
The Expanding Licensing Market: Thomson Reuters, Getty Images, and other large content holders are now actively monetizing their archives as training data, creating a new market that could exclude smaller, independent creators. The entry of Disney and Universal is expected to accelerate this trend, with the film industry likely to "actually accelerate its use of AI models built on licensed content" once it gains legal clarity.
The Adobe Lesson: The Adobe case demonstrates that even the most seemingly ethical approaches can be flawed. It nonetheless represents a genuine attempt to balance AI innovation with respect for creators' rights. As Adobe stated, "Our goal is to build generative AI that allows creators to monetize their talents," a principle that contrasts sharply with the "take first, ask later" approach of many competitors.
The Adobe vs. Competitors Model: While companies such as Anthropic and Meta defend themselves in court over the use of pirated content, Adobe has at least attempted to create a licensing framework. This approach, while imperfect, could serve as a model for future regulations requiring transparency and compensation for creators.
Conclusion: Navigating Post-2025 Uncertainty
The future of human creativity after the 2025 rulings: The current legal battles are not simply about intellectual property, but about the very meaning of human creativity in the age of AI. The 2025 rulings have attempted to preserve an increasingly artificial distinction between human and artificial creativity, but they have also revealed the practical limits of this approach.
Fragmentation as the New Normal: Instead of clarity, 2025 produced a patchwork of judicial decisions reflecting fundamentally different approaches. Convergence on some principles (the illegality of piracy, the importance of market impact) coexists with deep disagreement on fundamental issues.
The Real Emerging Problem: The 2025 rulings have shown that the issue is no longer whether AI can infringe copyright, but whether national legal systems can develop coherent frameworks quickly enough to govern an exponentially evolving technology. The European AI Act and American case law are creating incompatible standards that could fragment the global AI market. Disney's entry, with its lobbying power and political influence, could be the catalyst for more definitive U.S. federal legislation.
The Lesson from Disney: As one industry expert noted about the Disney-Universal case, "This will not be Hollywood trying to turn off generative AI. This is about compensation." This distinction is crucial: it is not about stopping innovation, but ensuring that creators are compensated for their work.
Contrasting Models: 2025 has highlighted fundamentally different approaches: Disney using the courts to protect high-value IP and Adobe attempting to build an ethical (albeit imperfect) licensing ecosystem; companies that prefer to risk lawsuits rather than restrict access to data; and Europe imposing mandatory transparency through the AI Act. This contrast will likely define the future of AI regulation.
As we try to apply 20th-century laws to 21st-century technologies, we may find ourselves defending a system that not only no longer protects the interests it purports to protect, but actively hinders the emergence of new forms of creative expression that do not easily fit into existing categories. The year 2025 has shown that the road to the coexistence of human and artificial creativity will be much more complex and contradictory than initially anticipated.
Note: This updated article reflects significant 2025 developments in the AI-copyright field, including the first substantive rulings and the implementation of the European AI Act. For more updates on pending cases, see the comprehensive tracker of AI-copyright cases by BakerHostetler. The legal landscape continues to evolve rapidly, requiring constant monitoring of regulatory and case law developments.
Additional Resources:
- EU AI Act - Official Site
- U.S. Copyright Office AI Initiative
- MIT Technology Review - AI Copyright Analysis