AI Stole Everything - Captn's Lounge Studios

AI Stole Everything

Question: Why does AI believe it can take anything under the "Fair Use Doctrine"?

The tension between AI development and the Fair Use Doctrine is one of the most significant legal battlegrounds of the 21st century. AI companies don't necessarily believe they can take "anything," but they do argue that the process of training a model is fundamentally different from the content the model produces.

Here is the logic currently being used to justify the use of copyrighted data under Fair Use:

1. The "Transformative" Argument
For a use to be considered "Fair," it often needs to be transformative—meaning it uses the original work to create something entirely new with a different purpose.

The Argument: AI companies argue that training data isn't being "repackaged" or sold as a copy. Instead, the AI "learns" abstract patterns, mathematical weights, and relationships between concepts.

The Analogy: They often compare it to a human artist visiting a museum to learn technique. The artist doesn't own the paintings they looked at, but they are allowed to use the knowledge gained from those paintings to create new, original art.

2. Non-Expressive Use
Fair use often protects "non-expressive" uses of data. For example, Google Books was allowed to digitize millions of books because it created a searchable index (a tool) rather than a way to read the books for free.

AI developers argue that the training process is a high-tech version of this: the computer isn't "enjoying" the story or the art; it is performing statistical analysis to build a functional tool.

3. The Four-Factor Test
Courts weigh four statutory factors to determine Fair Use: (1) the purpose and character of the use, (2) the nature of the copyrighted work, (3) the amount and substantiality of the portion used, and (4) the effect of the use on the potential market for the original. AI companies focus heavily on that fourth factor.

The AI Perspective: They argue that a model trained on billions of images doesn't replace the market for one specific artist's work.

The Counter-Argument: Creators argue that because the AI can now generate work "in the style of" an artist for a fraction of the cost, it directly destroys the market for that artist's labor.

The Growing Pushback
The "move fast and break things" era of AI training is now hitting a wall of litigation. Several high-profile lawsuits (such as New York Times v. OpenAI and various class-action suits brought by artists) are challenging these assumptions in court.

The core of the frustration, as you've touched on, is the asymmetry of trust. While AI models require near-perfect data to reach high standards of accuracy, the methods used to acquire that data often bypass the consent of the people who created it.

Copyright © 2026 Captn's Lounge Studio. All Rights Reserved.