There Is Hope in AI Copyright · China's New Generative AI Transparency Law · And More
The news, papers, and ideas that will help you understand AI's legal and ethical challenges, and potential paths forward | Paid Edition #232
👋 Hi everyone, Luiza Jarovsky here. Welcome to our 232nd edition, trusted by over 77,000 subscribers worldwide. Learn and advance your career:
Join my AI Governance Training (yearly subscribers save $145)
Receive additional educational resources in our Learning Center
Find open roles in AI governance and privacy in our Job Board
Discover your next read in AI and beyond in our AI Book Club
⏰ Last Chance to Join My Training Program in 2025
If you are looking to upskill and explore the legal and ethical challenges of AI, as well as the EU AI Act, join the 25th cohort of my 15-hour live online AI Governance Training in November (the final cohort of the year).
Each cohort is limited to 30 people, and more than 1,300 professionals have already participated. Many described the experience as transformative and an important step in their career growth. *Yearly subscribers save $145!
The news, papers, and ideas that will help you understand AI's legal and ethical challenges, and potential paths forward:
1. News:
Anthropic has recently signed a $1.5 billion settlement (!) with book authors. With approximately 500,000 books covered by the lawsuit, authors are expected to receive around $3,000 per book, making it the largest publicly reported copyright recovery in history. Beyond the financial compensation, Anthropic will also have to destroy the LibGen and PiLiMi datasets.
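For a quick sense of where the per-book figure comes from, it is roughly the settlement total divided by the approximate number of covered works (a back-of-the-envelope estimate, before any deductions):

$$\frac{\$1{,}500{,}000{,}000}{\approx 500{,}000\ \text{books}} \approx \$3{,}000\ \text{per book}$$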
Even though the monetary penalty will not itself be a major deterrent (Anthropic is now valued at $183 billion), this settlement conveys the important message that AI companies might have to pay thousands of dollars per work they illegally use for AI training.
This settlement is also a crucial sign of hope for authors in the bigger context of AI copyright litigation, especially given this year's earlier decision that sided with Meta.
Authors and copyright holders who believe Anthropic may have downloaded their books from LibGen or PiLiMi can use this website to obtain more information about potential claims.
Still on the topic of AI copyright, Warner Bros. Discovery is suing Midjourney for copyright infringement. According to the lawsuit, “Midjourney thinks it is above the law. It sells a commercial subscription service, powered by AI, that was developed using illegal copies of Warner Bros Discovery’s copyrighted works. It lets subscribers pick iconic Warner Bros Discovery copyrighted characters and then reproduces, publicly displays and performs, and makes available for download infringing images and videos, and unauthorized derivatives, with every imaginable scene featuring those characters.”
China's new law on generative AI transparency entered into force on September 1, and it's surprisingly more detailed than the EU AI Act's provisions on the topic. As I wrote earlier in this newsletter, if lawmakers create transparency rules that are too vague and not context-specific, companies and individuals will simply bypass them through 'formalistic tricks.' Read more about its key obligations here.
President Trump has directly threatened the EU, describing its "Digital Services Legislation" (the EU Digital Services Act) and "Digital Markets Regulations" (the EU Digital Markets Act) as “designed to harm, or discriminate against, American technology.” The EU is under internal and external pressure, and it will keep conceding to Washington, as it has in recent months. Read my commentary here.
The Federal Trade Commission issued an order against the AI company Workado over false, misleading, or unsupported advertising. The company promoted its AI Content Detector as 98% accurate in detecting whether text was written by AI or a human; however, independent testing showed that the accuracy rate for general-purpose content was only 53%. AI enforcement is on the rise in the U.S. Read my commentary here.
The EU Commission opened a stakeholder consultation on guidelines and a code of practice for AI transparency, as per Article 50 of the EU AI Act. Interested parties can share their views by October 2.
Following recent “AI friend” scandals (read more about Meta's leaked document and the suicide case involving ChatGPT), both Meta and OpenAI announced changes to their internal standards and safeguards for AI chatbots. Among other measures, OpenAI announced it will introduce parental controls to ChatGPT within the next month, while Meta said it will not allow its chatbots to “engage with teenage users on self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations.”
The open-access book "AI and Fundamental Rights," written by a group of all-star authors, is now available for download. Don't miss it!
*If you would like to share a specific ethical or legal development in AI or your thoughts on a specific event, reply to this email or use this form.
2. Papers:
I. “Lower AI Literacy Predicts Greater AI Receptivity” by Stephanie M. Tully et al. (link):
"People with lower AI literacy are typically more receptive to AI. (...) This link occurs because people with lower AI literacy are more likely to perceive AI as magical and experience feelings of awe"
II. “AI and the Future of Education” by UNESCO (link):
“This collection has underlined how AI futures in education require priority attention to the ways that entrenched inequalities of power, access and opportunity are being reshaped.”
III. “Why Language Models Hallucinate” by Adam Tauman Kalai et al. (OpenAI) (link):
“We then argue that hallucinations persist due to the way most evaluations are graded—language models are optimized to be good test-takers, and guessing when uncertain improves test performance.”
*If you are a researcher in AI ethics or AI law and would like to have your recently published paper featured here, reply to this email or use this form.
3. Ideas:
The creative industry is changing. From movies to visual art, books, songs, and every other form of creative expression, generative AI tools are making their way into the creative production process, with no signs of slowing down.
There have been three main reactions to the emergence of various types of AI-generated works: