Screw the Money — Anthropic’s $1.5B Copyright Settlement Hurts Writers

Anthropic has reached a $1.5 billion settlement with authors over its use of pirated copyrighted books to train its AI, Claude. The historic payout is the largest in U.S. copyright history, but it offers little real protection for writers: it leaves intact the tech industry's ability to train AI on copyrighted works.

Sep 5, 2025 - 17:19

A historic $1.5 billion settlement in a class action lawsuit filed by authors against Anthropic will see around half a million writers receive at least $3,000 each. While this payout marks the largest in U.S. copyright law history, it doesn’t represent a true victory for authors — instead, it’s another win for the tech giants.

Companies like Anthropic, behind the AI model Claude, are rushing to gather as much written material as possible to train their large language models (LLMs), which power AI chat products such as ChatGPT and Claude. These products are increasingly threatening creative industries, even as their outputs remain relatively basic. The more data these AIs consume, the more sophisticated they become. However, after scraping vast amounts of content from the internet, these companies are nearing the point of data saturation.

This is where Anthropic comes in: the company pirated millions of books from so-called “shadow libraries” and used them to train its AI. The lawsuit Bartz v. Anthropic is one of many legal battles against Meta, Google, OpenAI, and Midjourney regarding the legality of using copyrighted material to train AI systems.

The $1.5 billion settlement, however, isn't punishment for training AI on copyrighted works; it's a costly slap on the wrist for the piracy itself. Anthropic, which recently raised an additional $13 billion, chose to illegally download books instead of purchasing copies and securing the rights to use them.

In June, federal judge William Alsup ruled in favor of Anthropic, declaring that it is legal to train AI on copyrighted material. The judge justified this by invoking the fair use doctrine, codified in the Copyright Act of 1976 and largely unchanged since. The ruling held that using copyrighted material for AI training is "transformative," meaning it doesn't seek to replace or duplicate the original works but rather creates something new. "Like any reader aspiring to be a writer, Anthropic's LLMs trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different," the judge explained.

It was the piracy claims, not the AI training itself, that Judge Alsup ordered to proceed to trial. With Anthropic's settlement, that trial is no longer necessary. The settlement is designed to resolve the plaintiffs' remaining claims, according to Aparna Sridhar, deputy general counsel at Anthropic. She added, "We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems."

As dozens more cases regarding AI’s use of copyrighted works make their way through the courts, judges will now look to Bartz v. Anthropic as a reference. However, given the significant implications of these decisions, future rulings could potentially reach different conclusions on the matter.

TechAmerica.ai Staff