Hi, Luiza Jarovsky here. Welcome to our 185th edition, read by 56,900+ subscribers worldwide. It's great to have you here! Paid subscribers have full access to my timely analyses of AI's legal and ethical challenges. Don’t miss out!
For more: AI Governance Training | AI Book Club | Live Talks | Job Board
🪃 Anthropic's Fair Use Boomerang
Three days ago, Anthropic filed a motion for summary judgment (a request that the judge decide the case in its favor without a trial) in an AI copyright lawsuit brought by book authors Andrea Bartz and others.
Having followed the wave of AI copyright lawsuits since 2022, especially in the U.S., I believe the arguments Anthropic raises in this latest motion signal a potential path to resolving the AI copyright debate.
Interestingly, Anthropic's fair use arguments might actually backfire, establishing an extremely high standard that the company will struggle to meet, but one that would benefit content creators. Let me explain:
In its legal motion, Anthropic essentially says that:
Its AI model serves a different purpose from books, so it's a transformative use, encouraged by copyright law.
Its use of books to train AI is transformative and fair use because its AI systems don't show copyrighted works to end users.
Its use of books to train AI is fair use and, therefore, allowed by copyright law.
From the arguments above, Anthropic implies that:
If it showed copyrighted works to end users, its use of those works to train AI would likely no longer be transformative or qualify as fair use.
To preserve its fair use claim, it is effectively committing, with near-100% certainty, that no copyrighted works will be shown to end users.
If books were shown to end users, its transformative-use and fair use arguments would become extremely weak.
To sustain its current fair use claim, Anthropic must implement extremely effective tools such as copyright filters or fine-tuning mechanisms that ensure a near-zero rate of copyright infringement.
I don't think 100% effective tools exist. Even extremely effective tools would likely be susceptible to bypasses and jailbreaks.
If Anthropic's tools are not effective enough, then by its own legal argument, it will have failed the fair use test.
It would then need an alternative, such as licensing deals for AI training data, to ensure that consent and compensation are properly addressed and to mitigate the risk of copyright infringement.
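To see why a "near-zero infringement" filter is so hard to build, consider a deliberately naive sketch (my own illustration, not anything Anthropic has described): an output filter that flags text sharing too many 5-word sequences with a protected work. It catches verbatim copying but is trivially bypassed by light paraphrase, which is the core weakness of such tools.

```python
# Hypothetical illustration: a naive n-gram overlap filter for model outputs.
# Not a real product or Anthropic's method; it shows why filters are bypassable.

def ngrams(text, n=5):
    """Return the set of n-word sequences in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_infringing(output, protected_texts, n=5, threshold=0.3):
    """Flag an output if too many of its n-grams appear in a protected text."""
    out = ngrams(output, n)
    if not out:
        return False
    for src in protected_texts:
        overlap = len(out & ngrams(src, n)) / len(out)
        if overlap >= threshold:
            return True
    return False

book = ("it was the best of times it was the worst of times "
        "it was the age of wisdom")
verbatim = "it was the best of times it was the worst of times"
paraphrase = "those days were simultaneously wonderful and terrible an era of wisdom"

print(looks_infringing(verbatim, [book]))    # True: verbatim copying is caught
print(looks_infringing(paraphrase, [book]))  # False: a light paraphrase slips through
```

Real filters are far more sophisticated, but the same gap remains: the stricter the match criterion, the easier it is to evade, and the looser it is, the more legitimate output it blocks.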
In my opinion, this is the legal argument that content creators and their lawyers should focus on. Two recent cases reflect this line of argument: