Human vs. Artificial Creativity: Where the Difference Really Lies (And Why the Ghibli Style Teaches Us Something)
The debate over artificial intelligence and copyright intensified dramatically in 2024-2025. These are no longer theoretical discussions: the New York Times sued OpenAI for copyright infringement (December 2023), Getty Images sued Stability AI, and thousands of artists have filed class actions. AI companies respond that their systems "learn" just as humans do. But is that really the case?
Human creativity has always developed through connections: Shakespeare was inspired by historical chronicles and folk tales, Van Gogh studied Japanese prints, the Beatles began by playing American rock. Artists always reinterpret earlier works. Artificial intelligence, tech companies say, does the same thing. But the case of "Ghibli style" reveals how simplistic this narrative is.
Type "Ghibli style" into Midjourney or DALL-E and you get images strikingly similar to Hayao Miyazaki's masterpieces: pastel colors, fluffy clouds, dreamlike landscapes, characters with big eyes. It is technically impressive. It is also deeply problematic.
Studio Ghibli took decades to develop that distinctive aesthetic: precise color palette choices, traditional animation techniques, artistic philosophy rooted in Japanese culture and Miyazaki's personal vision. When an AI model replicates that "style" in seconds, is it really "learning" as Miyazaki learned from Disney animation and Japanese manga? Or is it simply recombining visual patterns extracted from thousands of Ghibli frames without permission?
The difference is not philosophical; it is legal and economic. According to an analysis published on arXiv (Carlini et al., 2023), diffusion models such as Stable Diffusion can regenerate near-identical copies of images from their training set for a small but measurable fraction of targeted prompts. That is not "inspiration"; it is storage and reproduction.
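To make the distinction concrete, here is a minimal sketch of how near-duplicate generations can be flagged. The average-hash approach and the file names are illustrative assumptions, not the method of Carlini et al., who used far more robust extraction and membership-inference techniques:

```python
from PIL import Image
import numpy as np

def average_hash(path: str, size: int = 8) -> np.ndarray:
    """Downscale to a size x size grayscale grid; each bit records
    whether that pixel is brighter than the image mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()

def hamming_distance(h1: np.ndarray, h2: np.ndarray) -> int:
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(h1 != h2))

# Hypothetical file names: one model output, one training image.
generated = average_hash("generated.png")
training = average_hash("training_sample.png")

# On a 64-bit hash, a distance near 0 suggests the "new" image is a
# near-copy of a training image rather than a novel composition.
print(hamming_distance(generated, training))
```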
Greg Rutkowski, a Polish digital artist, discovered that his name had appeared in 1.2 million Stable Diffusion prompts, involuntarily becoming one of the most requested "styles" without ever giving consent or receiving compensation. As he told MIT Technology Review, "I don't feel flattered. I feel like something I've been building for years has been stolen from me."
The scale of AI training is unprecedented. LAION-5B, one of the most widely used datasets for image models, contains 5.85 billion image-text pairs scraped from the Internet, including copyrighted works. GPT-4 was trained on massive portions of the Internet, including paywalled articles, books, and proprietary software code.
The major legal actions under way include:

- The New York Times v. OpenAI and Microsoft (filed December 2023), over the use of Times articles in training
- Getty Images v. Stability AI, over the scraping of millions of licensed photographs
- Class actions brought by thousands of artists against Stability AI, Midjourney, and other image-model makers
AI companies defend the practice by invoking "fair use" under U.S. law: they argue that training is "transformative" and does not harm the market for the original works. But several courts are questioning this interpretation.
In the Getty v. Stability AI case, the court denied Stability's motion to dismiss in early 2024, allowing the case to proceed: "The question of whether training AI models constitutes fair use is complex and requires thorough examination of the facts." Translation: AI companies cannot simply invoke fair use and call it a day.
Faced with legal pressure, AI companies have begun negotiating licenses. OpenAI has signed agreements with, among others:

- The Associated Press (July 2023), for access to its news archive
- Axel Springer (December 2023), publisher of Politico and Business Insider
- The Financial Times and Le Monde (2024)
Google has signed similar agreements with Reddit, Stack Overflow, and various publishers. Anthropic has negotiated with publishers for the use of books.
But these agreements cover only large publishers with negotiating power. Millions of individual creators, including artists, photographers, and freelance writers, remain uncompensated for works already used in completed training runs.
The "AI learns like humans" narrative is technically misleading. Let's look at the key differences:
Scale and speed: A human artist studies perhaps hundreds or thousands of works in a lifetime. GPT-4 has been trained on trillions of words. Stable Diffusion on billions of images. The scale is incomparable and exceeds any reasonable definition of "inspiration."
Semantic understanding: When Van Gogh studied Japanese prints, he did not mechanically copy the visual patterns-he understood the underlying aesthetic principles (use of negative space, asymmetrical composition, emphasis on nature) and reinterpreted them through his European post-impressionist vision. His works are conscious cultural syntheses.
AI models do not "understand" in the human sense. As Melanie Mitchell, a professor at the Santa Fe Institute, explains in her book Artificial Intelligence: A Guide for Thinking Humans, "Deep learning systems excel at pattern recognition but do not possess causal understanding, abstract reasoning, or mental models of the world." Stable Diffusion does not "understand" what makes Ghibli distinctive; it extracts statistical correlations from millions of pixels labeled "Ghibli style."
Creative intentionality: Human artists make intentional creative choices based on personal vision, the message they want to communicate, and the emotions they want to evoke. Miyazaki incorporates environmentalist themes, pacifism, and feminism into his films: conscious moral and artistic choices.
AI generates output based on statistical probabilities: given prompt X and training set Y, which pixel configuration is most likely? There is no intentionality, no message, no vision. As Ted Chiang wrote in The New Yorker, ChatGPT is "a blurry JPEG of the web": a lossy compression that discards exactly the qualities that make original content valuable.
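"Most likely" is meant literally. The sketch below, with a made-up four-word vocabulary and made-up scores, illustrates the entire generative step of a language model: convert scores into probabilities and sample. Everything here is a toy assumption; real models do this over vocabularies of tens of thousands of tokens, billions of times.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabulary and model scores: none of this comes from a
# real model; it only illustrates the mechanics of generation.
vocab = ["clouds", "castle", "forest", "spirit"]
logits = np.array([2.1, 1.3, 0.4, -0.5])

def sample_next(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Softmax over scores, then draw one index at random."""
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# The "chosen" word is simply the statistically likely one; there is
# no message or intention behind it.
print(vocab[sample_next(logits)])
```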
Transformation vs. recombination: Pablo Picasso studied African masks but created Cubism, an entirely new artistic movement that reinvented spatial representation in painting. The transformation was radical and original.
Generative AI models operate by interpolation in latent space: they recombine elements of the training set into new configurations, but they remain bound to the statistical distribution of the data they were trained on. They cannot invent genuinely new aesthetics that violate the regularities they have learned. As Shumailov et al. (2023) demonstrated, models trained repeatedly on the outputs of previous models progressively degenerate, a phenomenon called "model collapse."
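A toy version of this feedback loop fits in a few lines. The "model" below is just a Gaussian fitted to data and then retrained on its own samples; it is a deliberately simplified illustration of the dynamic Shumailov et al. describe, not a reproduction of their experiments:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(0.0, 1.0, size=20)          # small "human" dataset

for generation in range(1, 51):
    mu, sigma = data.mean(), data.std()       # refit the "model"
    data = rng.normal(mu, sigma, size=20)     # retrain on its own output
    if generation % 10 == 0:
        print(f"generation {generation:2d}: std = {sigma:.3f}")

# The fitted standard deviation typically drifts toward zero: the tails
# of the original distribution are the first thing to disappear.
```

Run long enough, the distribution narrows around its average: the rare, distinctive examples vanish first, which is exactly the loss of diversity that "model collapse" names.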
Here is the central paradox: AI can generate outputs that look original (no human has ever seen that specific Ghibli-style image before) but are statistically derivative (they are interpolations of existing patterns). It is a superficial form of originality without fundamental innovation.
This has profound implications. As the philosopher John Searle argued with his famous "Chinese Room" argument, simulating a cognitive process is not the same as possessing it. AI can simulate creativity without being creative in the human sense of the word.
Faced with the controversy, various solutions are being developed:
Protective tools for artists: tools such as Glaze and Nightshade, developed at the University of Chicago, add imperceptible perturbations to images that disrupt models attempting to learn an artist's style from them.
Opt-out registries: services such as Spawning's "Have I Been Trained?" let creators check whether their works appear in public datasets like LAION and register opt-out requests; AI crawlers such as OpenAI's GPTBot can also be blocked via robots.txt.
Compensation frameworks: Shutterstock created a Contributor Fund that pays artists whose works were used to train its generative tools, and Adobe pays Adobe Stock contributors whose images trained Firefly; broader proposals envisage collective licensing schemes modeled on music royalty societies.
Government regulations:
The EU AI Act (in force since August 2024) requires providers of general-purpose AI models to publish detailed summaries of the copyrighted material used in training. It is the first regulatory attempt to impose transparency.
The Tennessee ELVIS Act (March 2024) specifically protects performers' voice and likeness from unauthorized AI use, making Tennessee the first U.S. state with legislation targeting voice and visual deepfakes.
Proposals in the U.S. Congress include explicit opt-in requirements for copyrighted works (instead of opt-out) and the creation of public registries of training datasets.
Two visions of the future confront each other:
Optimistic view (AI companies): AI is a tool that amplifies human creativity, like Photoshop or music synthesizers. Artists will use AI to accelerate workflows, explore variations, overcome creative blocks. Hybrid art forms will emerge where humans drive vision and AI performs technical parts.
Concrete examples already exist: the short film "The Frost" (2023) used AI to generate backgrounds and textures, with human artists guiding the art direction. Musicians use Suno and Udio to generate backing tracks to improvise over. Writers use GPT as a "rubber duck" to talk through narrative ideas.
Pessimistic view (many creators): AI will commoditize creativity, eroding the economic value of creative work until only elites with exceptional abilities survive. "Average creativity" will be replaced by cheap generators, destroying the creative middle class, exactly as industrial automation eliminated artisans in the 19th century.
Preliminary evidence supports this concern: on freelance platforms such as Fiverr, requests for illustrators and copywriters dropped 21 percent in 2023 (Fiverr Q4 2023 data), while "AI art generation" offerings exploded. Greg Rutkowski has seen direct commissions drop by 40 percent since his style became popular on Stable Diffusion.
The truth probably lies in the middle: some forms of creative work will be automated (generic stock illustrations, basic marketing copy), while highly original, conceptual, culturally rooted creativity will remain a human domain.
The distinction between human and AI content will become increasingly difficult to draw. Already, without watermarks or disclosure, it is often impossible to distinguish GPT-4 text from human text, or Midjourney images from photographs. When Sora (OpenAI's video generator) becomes publicly available, the distinction will extend to video.
This raises profound questions about authenticity. If an AI-generated Ghibli-style image evokes the same emotions as the original, does it have the same value? The philosopher Walter Benjamin, in "The Work of Art in the Age of Its Technological Reproducibility" (1935), argued that mechanical reproducibility erodes the "aura" of the original work: its spatio-temporal uniqueness and authenticity.
Generative AI takes this argument to the extreme: it does not reproduce existing works but generates endless variations that simulate the original without being one. It is the Baudrillardian simulacrum: the copy without an original.
Yet there is something irreducibly human about the conscious creative act: the artist who chooses each brushstroke knowing what he or she wants to communicate, the writer who crafts each phrase to evoke specific emotions, the composer who builds tension and resolution with intentionality. AI can simulate the outcome but not the process, and perhaps it is in the process that the authentic value of creativity lies.
As Studio Ghibli wrote in a statement (November 2023), "The soul of our films lies not in a visual style that can be copied, but in the creative decisions we make frame by frame to serve the story we want to tell. That cannot be automated."
The value of art, ultimately, comes from its ability to connect deeply with the human experience-to make us feel understood, challenged, transformed. Whether this can be achieved by AI remains an open question. But as long as art is made by humans for humans, speaking of the human condition, it will retain something that no algorithm can replicate: the authenticity of lived experience translated into aesthetic form.
Sources:

- Benjamin, W. (1935). "The Work of Art in the Age of Its Technological Reproducibility."
- Carlini, N., et al. (2023). "Extracting Training Data from Diffusion Models." arXiv:2301.13188.
- Chiang, T. (2023). "ChatGPT Is a Blurry JPEG of the Web." The New Yorker, February 2023.
- Heikkilä, M. (2022). "This artist is dominating AI-generated art. And he's not happy about it." MIT Technology Review, September 2022.
- Mitchell, M. (2019). Artificial Intelligence: A Guide for Thinking Humans. Farrar, Straus and Giroux.
- Shumailov, I., et al. (2023). "The Curse of Recursion: Training on Generated Data Makes Models Forget." arXiv:2305.17493.