Is The California AI ‘Fair Use’ Ruling A New Bright Line?

We Now Have a Legal Definition

A federal judge in California has handed down a ruling that reshapes our understanding of AI and writing. In Bartz v. Anthropic, Judge William Alsup declared that Anthropic’s use of copyrighted books to train its AI model, Claude, was “fair use”—essentially giving AI companies the green light to learn from our work without first obtaining permission.

Is this ruling a new bright line in AI use for writers?

What Happened

The first argument against AI is that LLMs steal authors’ work by using copyrighted material in their training. Some people stridently believe that AI output is plagiarism. Others believe that if LLMs used their work for training, then they should be paid for the use.

Three authors, Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, sued Anthropic for using their books to train Claude without permission. They claimed the company built a “multibillion-dollar business by stealing hundreds of thousands of copyrighted books.”

Judge Alsup disagreed with characterizing AI training as theft. He ruled that using books to teach AI how to generate new text was “quintessentially transformative” and compared it to “any reader aspiring to be a writer.” The AI wasn’t copying their work; it was learning from it, just as human writers learn by reading everything they can get their hands on.

However, Anthropic must face a civil trial for downloading pirated books from “shadow libraries.” The judge made it clear that while learning from legally obtained books is fair use, piracy is theft.

What the Headlines Don’t Say

After training on billions of words, AI defaults to the average of its aggregate training data. Unless you prompt it toward, say, your personal writing style (and depending on the AI model), what it churns out is not what creatives aspire to achieve.

If you are a below-average writer, you won’t know what works or doesn’t with AI writing. If you are an average writer, you’ll get average results. Pushing past average to good or exceptional requires an author’s time and effort to get to that level, with or without AI.

Why? The human creative process involves combining disparate, non-linear ideas to generate new and fresh ideas. The brains of highly creative people are wired with a neurological process that is distinctly different from those with average levels of creativity. Plus, people who achieve a master’s-level understanding of writing develop the use of a different part of the brain than those below a master’s level.

Exceptional writing is rare. Google’s AI reports that perhaps 1% of people possess truly exceptional writing abilities, and most of them never complete a book. Of those who do finish, even fewer achieve widespread recognition. AI doesn’t change those odds. It simply provides average writers with better tools to be average more efficiently. That’s a human problem, not an AI problem.

In terms of human authorship, AI will always require the transformative power of the human creative brain to elevate average writing to exceptional.

The Copyright Conundrum

While the California ruling removes the specter of thievery from LLMs’ use of training material, there are still other issues to work through.

Now, here’s where it gets messy. The Copyright Office says that works containing AI-generated material can be copyrighted if there’s “sufficient human authorship and creativity.” But what does that actually mean?

If I use AI to generate a rough draft that I then rewrite extensively, is that transformative enough? What about AI-generated dialogue that I edit and integrate into scenes I plotted myself? Nobody knows. The Copyright Office says they’ll handle these “case-by-case,” which is about as helpful as a chocolate teapot when you’re trying to make business decisions.

This uncertainty might be worse than an outright ban. At least then, everyone would know the rules. Instead, we’re all making decisions about the use of AI without knowing whether we’re unintentionally undermining our copyright protection. It’s one more reason why we need clear guidelines, and fast. Not just for artistic integrity, but for basic legal protection.

The question becomes: what constitutes significant transformation of the work? Start with 100% AI-generated content. Since it contains no human authorship, it makes sense that it cannot be copyrighted. What about a work that is 50% AI-generated? Yeah, not ideal. 25% AI-generated prose? That’s potentially 25% of your book sitting in a legal gray area. 10%? Is that too tight?

While AI-generated text is not plagiarism, as the California court ruled, let’s apply the plagiarism standard in academic writing as a baseline, purely as an intellectual exercise. Plagiarism.com says an academic paper can contain up to 20% of someone else’s work and still be considered original; essays and research papers can go up to 25%. Is this a fair standard? If 75% of the work is transformative, can we then say the entire work is copyrightable?

This is a definition that needs to be made, either by legislation or by the courts. Given how courts tend to rule, it will likely default to legalese stating that a work is copyrightable if a preponderance of it is substantially transformed by human authors, which may not be helpful to us at all. We need a bright line other than the current “any part that is generated, but not transformed, cannot be copyrighted.”

A copyright limitation might serve as a natural brake on AI overuse. Writers who want to maintain full legal control over their work have a powerful incentive to keep AI contributions minimal and clearly transformative. It’s one thing to use AI for brainstorming or structural help; it’s another to risk losing copyright protection for the final product.

The Implications for Writers

This ruling isn’t really about AI versus writers. It’s about recognizing that creativity builds on what came before. Every writer learns by reading other writers. Every story borrows elements from stories that came before it. AI performs this process faster and at scale.

The problem with AI creation is that machine pattern recognition is systematic and does not replicate the human creative process. AI can also overlook context and nuance: it might identify that two texts share similar word patterns while completely missing that one is sincere and the other deeply sarcastic.

What AI does extremely well is give creatives a springboard, shaping but not directing the work. In the competitive, fast-paced publishing industry, that can be crucial support for authors.

The industry will adapt. America loves new industries, and we always go through the same cycle: moral panic, legal battles, then integration once the economic benefits become clear. We saw it with the printing press, photography, and recorded music. AI will follow the same pattern.

Is AI The Apocalypse of the Publishing Industry?

The judge essentially stated that AI can learn from published work in the same way humans do. That’s not theft, but how culture has always worked.

Telling writers not to use AI is about as effective as the rhythm method for contraception. You’re relying on reasoned human judgment and restraint to prevent something, when our most creative members create despite the rules. Banning all AI in fiction creation is a mission fail from the outset. If you don’t believe me, go to Reddit, and you’ll find all sorts of new authors embracing AI like it’s their newest best friend.

If writers worry about AI companies profiting from their writing, then let’s advocate for reasonable licensing agreements instead of attempting to halt the technology. Let’s set standards for the use of generative AI in fiction.

Closing Thoughts

The question isn’t whether AI should be allowed to learn from our work. It already does—and always has, just like people do. The real question is how we protect and empower human creativity in an age of generative tools. We don’t need to vilify writers who use AI. We need usable, ethical standards that apply to everyone. Until we have that clarity, every author experimenting with AI is navigating a minefield without a map. Let’s fix it.

What do you think about the California ruling? Are you worried about AI’s impact on writing, or do you see opportunities? Share your thoughts in the comments.
