👋 Hi, Luiza Jarovsky here. Welcome to our 164th edition, read by 48,600+ subscribers in 160+ countries. Not a subscriber yet? Join us.
🌎 We are a leading AI governance publication helping to shape the future of AI policy, compliance & regulation. It's great to have you here!
🏎️ Governing the Global AI Race
2025 has just begun, and the global AI race is at full speed, as recent developments in the U.S., UK and Kenya show:
→ Yesterday was Donald Trump's first day as the 47th President of the United States, and he signed an Executive Order titled “Initial Rescissions of Harmful Executive Orders and Actions.” Among the revoked orders is Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (issued in 2023 by President Biden), which had been the most significant piece of AI regulation at the federal level in the U.S. to date.
→ The U.S. Patent and Trademark Office published its AI strategy. Its vision is to “unleash America’s potential through the adoption of AI to drive and scale U.S. innovation, inclusive capitalism, and global competitiveness.”
→ In the UK, the government introduced an action plan to become a global leader in AI. Among its ambitious goals are the establishment of AI Growth Zones to accelerate the build-out of AI data centers and a requirement for regulators to publish annual reports on how they have enabled AI-driven innovation and growth in their respective sectors.
→ In Africa, Kenya launched its National AI Strategy 2025-2030, highlighting key concerns regarding AI in Kenya and the strategic importance of AI to the country. It also declared that it aims to be the leading AI hub in Africa.
→ Against this backdrop, AI governance does not look like a priority anywhere outside of the EU. Even so, California released two legal advisories on AI, highlighting AI-related practices that may be unlawful under California law, such as falsely advertising the accuracy, quality, or utility of AI systems and using AI to foster or advance deception.
With the AI hype beginning to fade and major political shifts happening worldwide, it looks like 2025 will set the stage for the AI race. AI governance is unlikely to be a priority anywhere, making it our responsibility to start uncomfortable conversations, set the tone, and move things forward.
-
👉 Later this week, paid subscribers will receive a must-read edition on emerging challenges in AI governance: stay tuned!
🎓 AI Governance Careers Are Surging: Get Ahead
Are you aiming to become a leader in AI governance or responsible AI?
I invite you to join the 17th cohort of our AI Governance Training and gain the skills to meet the new challenges in this dynamic field. This 12-hour live online program goes beyond standard certifications and offers:
8 live sessions with me, curated self-study materials, and quizzes.
A training certificate and 13 CPE credits pre-approved by the IAPP.
A meet-and-greet session with peers and an optional office hours meeting to discuss your questions or career goals.
The 17th cohort begins on February 4th, and only 4 seats remain. Learn more about the program and hear from recent participants here.
Don't miss it: join 1,000+ professionals from 50+ countries who have advanced their careers through our programs. Save your spot:
*If cost is a concern, we offer discounts for students, NGO members, and individuals in career transition. To apply, please fill out this form.
🎯 The UK's AI Action Plan
The UK published an action plan to become a global leader in AI. These are some of its most ambitious goals:
1. Expand the capacity of the AI Research Resource (AIRR) by at least 20x by 2030 – starting within 6 months.
2. Establish ‘AI Growth Zones’ (AIGZs) to facilitate the accelerated build-out of AI data centres.
3. Establish a copyright-cleared British media asset training data set, which can be licensed internationally at scale.
4. Explore how the existing immigration system can be used to attract graduates from universities producing some of the world’s top AI talent.
5. Require all regulators to publish annually how they have enabled innovation and growth driven by AI in their sector.
6. Create a data-rich experimentation environment, including a streamlined approach to accessing data sets, access to language models, and necessary infrastructure like compute.
7. Develop or procure a scalable AI tech stack that supports the use of specialist narrow and large language models for tens or hundreds of millions of citizen interactions across the UK.
8. Mandate infrastructure interoperability, code reusability, and open sourcing.
9. Appoint AI Sector Champions in key industries like the life sciences, financial services and the creative industries to work with industry and government and develop AI adoption plans.
10. Create a new unit, UK Sovereign AI, with the power to partner with the private sector to deliver the clear mandate of maximizing the UK’s stake in frontier AI.
╰┈➤ According to the action plan, the UK government must:
→ "Invest in the foundations of AI: We need world-class computing and data infrastructure, access to talent and regulation;
→ Push hard on cross-economy AI adoption: The public sector should rapidly pilot and scale AI products and services and encourage the private sector to do the same. This will drive better experiences and outcomes for citizens and boost productivity.
→ Position the UK to be an AI maker, not an AI taker: As the technology becomes more powerful, we should be the best state partner to those building frontier AI. The UK should aim to have true national champions at critical layers of the AI stack so that the UK benefits economically from AI advancement and has influence on future AI’s values, safety and governance."
🚀 Daily AI Governance Resources
Thousands of people receive our daily emails with educational and professional resources on AI governance, along with updates about our live sessions and programs. Join our learning center here:
♟️ The USPTO's AI Strategy
The U.S. Patent and Trademark Office (USPTO) published its AI strategy, and it's a great resource for everyone interested in the intersection of AI and copyright. Key points:
╰┈➤ AI Vision
"Unleashing America’s potential through the adoption of AI to drive and scale U.S. innovation, inclusive capitalism, and global competitiveness."
╰┈➤ AI Missions
1. "Foster the research, development, and commercialization of AI in the domestic and global economy.
2. Leverage AI effectively and responsibly to empower our staff, optimize our operations, and deliver value to our stakeholders.
3. Empower current and future innovation and investment in the same through data and research."
╰┈➤ AI Focus Areas
1. Advance the development of IP policies that promote inclusive AI innovation and creativity.
2. Build best-in-class AI capabilities by investing in computational infrastructure, data resources, and business-driven product development.
3. Promote the responsible use of AI within the USPTO and across the broader innovation ecosystem.
4. Develop AI expertise within the USPTO’s workforce.
5. Collaborate with other U.S. government agencies, international partners, and the public on shared AI priorities.
╰┈➤ Interesting excerpt about copyright, especially in the context of the growing number of AI copyright lawsuits in the U.S.:
"The development and use of AI systems implicates a variety of copyright law and policy considerations, including with respect to data ingested into and outputs generated by these systems. The USPTO proactively engages on these critical issues and will continue to do so, including by continuing to monitor relevant litigation in Federal courts—weighing in as appropriate—and by continuing to provide technical assistance to Congress as it develops legislation to address these issues.
The USPTO will also continue to carefully follow international developments on these topics and coordinate across government to engage with other countries, with a goal of potential international alignment on these issues.
Further, the USPTO recently conducted numerous listening sessions on AI and copyright to solicit views from a diverse spectrum of copyright stakeholders and will continue to conduct stakeholder outreach to inform this ongoing work.
Pursuant to the directives from section 5.2(c)(iii) of Executive Order 14110, the USPTO will also continue to consult with the U.S. Copyright Office and stakeholders as the USPTO develops policy recommendations for potential executive actions concerning copyright law’s intersection with AI technology."
🎬 Premiere: AI and Copyright Infringement
On Sunday, I announced the premiere of my conversation with Andres Guadamuz about AI & copyright infringement, and you can't miss it:
Andres is one of the world's most renowned experts in the field of Intellectual Property Law. He is an associate professor of Intellectual Property Law at the University of Sussex and editor-in-chief of the Journal of World Intellectual Property.
Sunday was the official premiere of our much-anticipated conversation (750+ people registered for the live event). We discussed topics such as:
copyright concerns in the context of AI training and deployment;
the applicability of the fair use exception;
the possibility of receiving copyright protection when using AI to create art or literary works;
the various ongoing copyright lawsuits in the U.S.;
and more.
The intersection of AI and copyright will remain a trending topic in 2025, with more lawsuits expected worldwide (although at a slower pace; read my 2025 forecast), an increase in licensing deals between media organizations and AI companies, and more regulatory and policy efforts to ensure copyright law is fit for the AI age.
Given these ongoing challenges, on Sunday I shared, alongside the recording of my conversation with Andres, additional thoughts and essential resources on AI and copyright that will be helpful for AI governance professionals navigating the field.
Paid subscribers can read the full edition and watch the full recording, and free subscribers can see a preview here.
📚 AI Book Club: What Are You Reading?
📖 More than 2,100 people have already joined our AI Book Club and receive our bi-weekly book recommendations.
📖 The 16th recommended book was “The Tech Coup: How to Save Democracy from Silicon Valley” by Marietje Schaake.
📖 Ready to discover your next favorite read? See our previous reads and join the AI book club:
🇰🇪 Kenya's National AI Strategy
Kenya aims to be the leading AI hub in Africa and has recently launched its National AI Strategy 2025-2030. It's a great read for everyone in AI governance and an inspiration for other countries. Key points:
╰┈➤ Key Concerns Regarding AI in Kenya
1. Labour Disruptions and Economic Impact
2. Digital Divide and Inclusive Development
3. Data Sovereignty and Privacy
4. Ethical AI, Human Rights and the Promotion of Public Trust
5. Regulatory Preparedness
6. Local Innovation and Competitiveness
7. Public Sector Efficiency and Service Delivery
8. Sustainable (AI) Development
╰┈➤ Strategic Importance of AI to Kenya
1. Economic Growth
2. Public Sector Efficiency
3. International Competitiveness
4. Protection against negative impacts of externally developed AI solutions
5. Job Creation and Skills Development
╰┈➤ Scope of Kenya's AI Strategy
1. “AI Digital Infrastructure: The strategy provides strategic options and initiatives that enable the development of the technological and supporting infrastructure needed to support local AI growth.
2. Data: The strategy addresses the need for a robust and sustainable data ecosystem framework as a critical input for developing contextual AI models and solutions.
3. Research and Development: Given Kenya's unique position as a potential provider of local AI solutions to address development challenges, this strategy includes options to foster a robust AI R&D ecosystem.
4. Talent: The strategy addresses the critical need for equitable access to AI through developing AI skills across all levels of society.
5. Governance: The strategy provides a roadmap for developing initial governance frameworks for responsible AI development and use.
6. Investment: A significant outlay of capital and investment is needed to establish an AI industry. The strategy addresses different options and avenues of financing its implementation.
7. Ethics, Equity, and Inclusion: The strategy addresses how Kenya can ensure that AI development is ethical, inclusive, and respectful of human rights.”
📢 Spread AI Governance Awareness
Enjoying this edition? Share it with friends and colleagues:
⚖️ California's Two Legal Advisories on AI
California released two legal advisories on AI, highlighting AI-related practices that may be unlawful under California law. Examples of potentially unlawful practices include:
1. "Falsely advertise the accuracy, quality, or utility of AI systems. This includes:
╰┈➤ claiming that an AI system has a capability that it does not;
╰┈➤ representing that a system is completely powered by AI when humans are responsible for performing some of its functions;
╰┈➤ representing that humans are responsible for performing some of a system’s functions when AI is responsible instead;
╰┈➤ claiming without basis that a system is accurate, performs tasks better than a human would, has specified characteristics, meets industry or other standards, or is free from bias."
2. "Use AI to foster or advance deception. For example:
╰┈➤ the creation of deepfakes, chatbots, and voice clones that appear to represent people, events, and utterances that never existed or occurred would likely be deceptive.
╰┈➤ in many contexts it would likely be deceptive to fail to disclose that AI has been used to create a piece of media."
3. "Use AI to create and knowingly use another person’s name, voice, signature, photograph, or likeness without that person’s prior consent."
4. "Use AI to impersonate a real person for purposes of harming, intimidating, threatening, or defrauding another person."
5. "Use AI to impersonate a real person for purposes of receiving money or property."
6. "Use AI to impersonate a real person for any unlawful purpose."
7. "Use AI to impersonate a government official in the execution of official duties."
8. "Use AI in a manner that is unfair, including using AI in a manner that results in negative impacts that outweigh its utility, or in a manner that offends public policy, is immoral, unethical, oppressive, or unscrupulous, or causes substantial injury."
9. "Create, market, or disseminate an AI system that does not comply with federal or state laws, including the false advertising"
➡️ The advisory also summarizes new California AI laws that went into effect on January 1, 2025. Among the new laws are those covering:
→ Disclosure requirements for businesses;
→ Unauthorized use of likeness;
→ Use of AI in election and campaign materials;
→ Prohibition and reporting of exploitative uses of AI.
💼 The Mirage of AI Terms of Use Restrictions
The paper "The Mirage of AI Terms of Use Restrictions" by Peter Henderson and Mark Lemley is an excellent read for everyone in AI governance, especially lawyers trying to navigate the field. Key quotes:
"We argue that there is little basis for a company to claim IP rights in anything its generative AI delivers to its users. Generative AI companies are selling something—and people are buying. But it’s not obvious what it is they actually sell beyond access to the computer used to generate the responses to prompts. Nor is it clear what, if any, other legal rights they have in their models. The absence of IP rights for the model weights or model outputs makes questionable many state law claims due to copyright preemption doctrine. And other federal claims, like those based on the Digital Millenium Copyright Act (“DMCA”) or the Computer Fraud and Abuse Act (“CFAA”) may not pick up the slack."
"Even if they end up being legally unenforceable, attached terms may be useful to companies for other reasons. Though terms attached to open-weight models are most likely to face successful challenge, open-weight model creators may wish to use them as a signaling device. And for corporate users of the model, the terms may have more persuasive value; users may wish to avoid costly and drawn-out litigation. For closed-weight model providers—who can revoke access at any time—terms are more practically enforceable outside of court: access can be revoked through technical means by deleted a malicious users’ account. Efforts to bypass standard login mechanisms would lend themselves to more successful, and well-tested claims."
"AI terms of service are built on a house of sand, particularly if the model is released open source. The traditional basis for enforcing those terms—a condition imposed on the necessary license to access copyrighted content—doesn't work with AI. Indeed, courts in many circuits will preempt contracts that attempt to control what copyright law does not control."
"Nonetheless, AI companies may well want to prohibit certain uses of their models that governments in the US cannot constitutionally ban, such as hate speech. Enforcing terms of use might help with that goal, though at the risk of also enforcing more problematic clauses. The dubious enforceability of AI terms of service is both good and bad. But whether you like the result or not, it is important to recognize that AI terms of service are unlikely to do much of what the companies rely on them to do."
🔥 Job Opportunities in AI Governance
Below are 10 new AI Governance roles posted in the last few days. This is a competitive field: if you see a relevant opportunity, apply today:
🇺🇸 Xerox: Generative AI Governance & Process Manager - apply
🇳🇱 NXP Semiconductors: AI Governance Specialist - apply
🇺🇸 Latham & Watkins: Senior AI Governance Counsel - apply
🇰🇷 Coupang: Senior AI Counsel - apply
🇸🇰 Takeda: Responsible AI Manager - apply
🇬🇧 Changing Social: Head of Cloud and AI Governance - apply
🇺🇸 Contentful: Senior Legal Counsel, AI Governance & IP - apply
🇬🇧 Capgemini Invent: Vice President, AI Governance & Trust - apply
🇺🇸 Vertex Pharmaceuticals: Director AI Governance & Policy - apply
🇮🇹 BIP: AI Governance Specialist - apply
🔔 More job openings: Join thousands of professionals who subscribe to our AI governance & privacy job board and receive weekly job opportunities.
💡 Keep the Conversation Going
Thank you for reading! If you enjoyed this edition, here's how you can keep the conversation going:
1️⃣ Help spread AI governance awareness
→ Start a discussion on social media about this edition's topic;
→ Share this edition with friends, adding your critical perspective;
→ Share your thoughts in the comment section below.
2️⃣ Go beyond
→ Stay ahead in AI: upgrade your subscription and start receiving my exclusive weekly analyses on emerging challenges in AI governance;
→ Looking for an authentic gift? Surprise them with a paid subscription.
3️⃣ For organizations
→ Organizations promoting AI literacy can purchase 3+ subscriptions here and 3+ seats in our AI Governance Training here at a discount;
→ Organizations offering AI governance or privacy solutions can sponsor this newsletter and reach thousands of readers. Fill out this form to get started.
See you soon!
Luiza