Court Backs AI Training on Copyrighted Books, but Trial Over Pirated Copies Still Looms

Federal judge William Alsup ruled that Anthropic's training of its AI models on published books without the authors' permission was lawful. The decision sets an influential early precedent in favor of tech companies, even as questions about the company's illegal downloads head to a separate trial.

Key Facts:

  • Judge William Alsup ruled that Anthropic’s use of copyrighted books to train its AI models qualifies as “fair use.”
  • This is believed to be the first time a U.S. court has explicitly held that training AI on copyrighted works without creator consent can qualify as fair use.
  • The case, Bartz v. Anthropic, was brought by authors who claim their works were pirated and stored permanently by the company.
  • Alsup ruled the training itself was lawful but allowed a separate trial to proceed over Anthropic’s creation of a “central library” using pirated copies.
  • Dozens of similar lawsuits against companies like OpenAI, Meta, and Google may be influenced by this decision.

The Rest of The Story:

Judge Alsup’s ruling marks a significant legal win for artificial intelligence companies.

In his decision, he determined that Anthropic’s use of copyrighted books to train large language models qualifies as “fair use,” despite being done without the creators’ consent.

This case centers on the broader fair use doctrine, a complex area of copyright law codified in the Copyright Act of 1976 and largely unchanged since.

According to the lawsuit, Anthropic sought to build a massive and permanent digital collection of books—including those downloaded illegally from pirate websites.

While the judge sided with Anthropic on the issue of AI training, he drew a sharp line on the piracy accusation: “That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for theft.”

A separate trial will determine damages related to the pirated materials used to build the so-called “central library.”

Commentary:

This ruling is a major setback for authors, artists, and publishers.

For decades, creators have relied on copyright protections to maintain control over their intellectual property.

With this decision, AI firms can now feed their models with entire libraries of copyrighted material—without permission or compensation.

Imagine writing a book over the course of years, only to have it quietly scraped by a corporation to train its artificial intelligence.

That same AI may then quote from it or mimic your voice.

In almost any other context, this would be considered plagiarism.

Yet under this decision, it is called innovation.

What’s even more troubling is the suggestion that pirated content may be part of the foundation for some AI training sets.

If a company stole physical books to train a machine, it would be criminal.

But somehow, doing it digitally under the fair use umbrella at least partially shields the company from liability.

This isn’t just about books.

We’ve already seen AI-generated deepfakes of celebrities—images and videos created without their consent.

People are profiting from likenesses, voices, and personalities they didn’t create, while the original individuals receive no recognition or payment.

This decision sends a signal: if you're a creative, your work is fair game for machines.

It guts the value of authorship and replaces it with a digital free-for-all where tech giants mine creativity like a raw material.

The law is lagging far behind the technology.

Unless there is a legislative fix or stronger rulings from other courts, this case could give companies carte blanche to plunder copyrighted material for “training” purposes.

That doesn’t just hurt writers—it erodes trust in the very concept of ownership.

The Bottom Line:

A federal judge has opened the door for AI firms to use copyrighted books for training without needing permission from creators.

While the ruling supports tech companies under the fair use doctrine, a separate trial will examine whether Anthropic broke the law by using pirated materials.

This decision could reshape the legal battlefield for creators fighting to protect their work.

Without stronger safeguards, authors may be left behind in the AI gold rush.

