Hi, Luiza Jarovsky here. Welcome to our 178th edition, read by 55,300+ subscribers in 165+ countries. Not a subscriber yet? Join us.
We are a leading AI governance publication helping to shape the future of AI policy, compliance & regulation. It's great to have you here!
A special thanks to Modulos for sponsoring this week's free edition of the newsletter. Check out their free guide:
With evolving global AI regulations, ensuring compliance can be complex and time-consuming. But what if you could streamline the process? With the Modulos AI Governance Platform, you can achieve compliance 10x faster, reducing manual effort and aligning with multiple frameworks. To navigate key AI standards and understand how Modulos helps, read their Global AI Regulations Guide.
*Promote your product to 55,300+ readers: Sponsor us (Next spot: July 30)
What Is AI Literacy?
With the growing development and use of AI, the concept of AI literacy has gained popularity, both as an obligation and as a means of professional survival in the years ahead.
However, for many, it's still an abstract concept, with varying opinions on what it actually entails. What is essential knowledge about AI? What kind of AI-related knowledge should different professionals have?
I'll split the discussion on AI literacy into two parts: AI literacy as an obligation and AI literacy as a professional necessity. I will also share my perspective on what I believe everybody should know in 2025.
1️⃣ AI Literacy as an Obligation
A great reference for discussing AI literacy as an obligation - a legal obligation - is the EU AI Act. Article 4 states:
"Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used."
Recital 20 clarifies the AI literacy obligation, establishing that AI literacy efforts should equip providers, deployers, and affected persons with knowledge about:
1. Protecting fundamental rights;
2. Protecting health and safety;
3. Enabling democratic control in the context of AI;
4. Helping all involved parties make informed decisions regarding AI systems;
5. Understanding the correct application of technical elements during the AI system's development;
6. Applying protective measures when using AI systems;
7. Interpreting an AI systemβs output;
8. Helping affected persons understand how decisions taken with the assistance of AI will have an impact on them;
9. Complying with the EU AI Act;
10. Understanding how the EU AI Act will be enforced;
11. Improving working conditions;
12. Consolidating trustworthy AI innovation in the EU;
13. Learning about the benefits, risks, safeguards, rights, and obligations in relation to the use of AI systems;
And more.
In summary, from the perspective of the AI Act, AI literacy means having AI-related knowledge that includes protecting fundamental rights, complying with the AI Act, and ensuring that AI is developed, deployed, and used in a way that aligns with the EU's approach to trustworthy AI.
This is the EU's approach to AI literacy and how it is reflected in the AI Act. Other countries take different regulatory approaches to AI, which means the AI literacy obligation may have a different framing.
In China, for example, the Beijing Municipal Education Commission announced that AI education will be mandatory in Beijing schools starting in September. Schools will have to offer at least eight hours of AI education per year, either as standalone courses or integrated into other subjects.
In 2024, California enacted Assembly Bill 2876, incorporating AI literacy and media literacy into the stateβs curriculum frameworks. According to this Bill, AI literacy is defined as:
"the knowledge, skills, and attitudes associated with how artificial intelligence works, including its principles, concepts, and applications, as well as how to use artificial intelligence, including its limitations, implications, and ethical considerations."
From a legal perspective, AI literacy may take the shape of an obligation for providers and deployers of AI systems, as in the EU AI Act, or as an obligation for educational providers to update the curriculum to include AI-related knowledge, for example.
2️⃣ AI Literacy as a Professional Need
In most parts of the world, there will likely be no specific AI literacy requirement or obligation. People will have the freedom to decide if, when, and how they will learn about AI and integrate this knowledge into their work.
In my view, every knowledge worker should be upskilling in AI, including AI governance. Why?
AI is being integrated into existing systems and value chains at an unprecedented scale. Even products and services that have nothing to do with technology will, at some point, have AI-powered functionalities or AI-related implications.
Soon, understanding AI and AI governance will be as essential as knowing how to use the internet.
Today, if you are a knowledge worker looking for a job and you don't know how to use the internet (including online tools such as email, calendar, social media, shared files, cloud services, and video conferencing), it will likely take a long time for you to find a suitable employer.
This was not the case in 1995. Around that time, most people were still learning to use online tools, and some of the tools I mentioned didn't even exist yet. Being entirely offline was still professionally acceptable in some fields.
The year 2025 is the equivalent of 1995 in the AI era. For many knowledge workers, it is still acceptable to know only the basics (for example, using a few AI-powered tools). But this will change quickly.
To continue the 1995 analogy: just ten years later, in 2005, if you were a knowledge worker and told your potential employer you didn't know how to use online tools, you would have seemed extremely outdated. On the other hand, if you excelled at using online tools in 2005, you would have likely had a significant edge over your colleagues (and perhaps even built a successful business).
AI literacy is no longer optional. It is both a strategic advantage and a professional necessity, especially for knowledge workers.
By 2035, a knowledge worker with no understanding of AI and AI governance will seem like a dinosaur. Those who excel in these areas will likely have an immense edge over colleagues, too.
My advice for both junior and senior professionals is the same: focusing on training, upskilling, staying updated, and practicing AI-related skills, including AI governance, will pay off in the long run.
I invite you to share this edition with friends who are not yet focusing on AI literacy in 2025:
AI Governance Affects Us All
I've been writing about AI governance for over 2 years, and many people still don't understand why it matters or why they should care. Reminders:
Share your thoughts on LinkedIn, X, or in the comments section below.
AI Governance Training: Spring Cohort
I invite you to join the 19th cohort of my AI Governance Training to upskill and get ready for emerging legal and ethical challenges in AI governance. The program goes beyond standard certifications and focuses on critical thinking and in-depth learning.
It includes 15 hours of live sessions with me, curated self-study materials, quizzes, a training certificate, a 1-year paid subscription to this newsletter, 13 CPE credits, and a networking session with peers.
The spring cohort begins in mid-April, and only 6 seats remain. Register today and join 1,100+ professionals from 50+ countries who have advanced their careers with us:
*We offer discounts for students, NGO members, and individuals in career transition. To apply, fill out this form.
Next Week: AI Compliance Challenges
The EU AI Act has entered into force, but there are still many compliance gaps in AI. If you're interested in the legal challenges of AI, I invite you to join my live conversation with Philipp Hacker next week. Here's why:
Philipp Hacker holds the Chair for Law and Ethics of the Digital Society at the European New School of Digital Studies, European University Viadrina Frankfurt. His research focuses on the regulation of digital technologies, particularly concerning AI. He is also a member of the Task Force AI Governance for the German Federal Government and co-chairs the Working Group on "AI Liability" for the European Parliament.
In this live talk, we'll discuss:
- Some of the EU AI Act's legal gaps and what companies should expect;
- The AI liability dilemma;
- AI manipulation and transparency challenges;
- AI bias and fairness;
and more.
This is an open event, so feel free to invite your friends.
It's a great opportunity to familiarize yourself with essential AI governance concepts. I hope to see you there next week!
Manus AI: Why Everyone Should Worry
A few days ago, a Chinese AI startup launched Manus AI, which many are calling the world's first general AI agent.
In this week's paid subscriber edition, I examined Manus AI from legal and ethical perspectives. I argued that (a) there are inconsistencies, (b) it's a red flag for the future, and (c) most people do not fully grasp the practical consequences of AI applications like this.
As new agentic AI applications continue to emerge, now is the right time to understand their implications and implement proper governance mechanisms.
Don't miss my deep dive: "Manus AI: Why Everyone Should Worry"
AI-Powered Lawyering
The paper "AI-Powered Lawyering: AI Reasoning Models, Retrieval Augmented Generation, and the Future of Legal Practice" is a must-read for lawyers using AI. The conclusion:
"1. This Article presents the first rigorous empirical evidence that advanced AI toolsβspecifically Retrieval-Augmented Generation (RAG) and reasoning modelsβcan significantly enhance the quality of legal work in realistic lawyering tasks, while preserving the efficiency gains observed with earlier generations of generative AI.
2. Our findings demonstrate that reasoning models improve not only the clarity, organization, and professionalism of legal work but also the depth and rigor of legal analysis itself.
3. Additionally, we provide evidence that RAG-enabled legal AI tools may be able to reduce hallucinations in human legal work to levels comparable to those found in work completed without AI assistance.
4. The distinct yet complementary strengths of these technologies suggest that their integration could yield even greater benefits, a development already taking shape in emerging legal tech.
5. The rapid advancement of reasoning models also indicates that the improvements observed in this study may only be the beginning of AI's transformative potential for legal practice.
6. As law schools, practitioners, and policymakers navigate AI's evolving role, our findings highlight the critical importance of empirical research in shaping informed, forward-looking strategies for the future of the legal profession."
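For readers who want a concrete picture of the technique behind these findings, below is a minimal sketch of the retrieval-augmented generation pattern: retrieve relevant source passages first, then ask the model to answer only from those passages, which is what helps keep hallucinations in check. This is an illustration only; the sample documents, the keyword-overlap retriever, and the prompt format are simplified assumptions, not the tools evaluated in the paper.

```python
# Illustrative-only RAG sketch: the documents, retriever, and prompt format
# below are hypothetical simplifications, not the systems studied in the paper.
from dataclasses import dataclass


@dataclass
class Document:
    title: str
    text: str


def retrieve(query: str, corpus: list[Document], top_k: int = 3) -> list[Document]:
    """Rank documents by naive keyword overlap; real tools use vector search."""
    query_terms = set(query.lower().split())

    def score(doc: Document) -> int:
        return len(query_terms & set(doc.text.lower().split()))

    return sorted(corpus, key=score, reverse=True)[:top_k]


def build_prompt(query: str, sources: list[Document]) -> str:
    """Ground the answer in retrieved sources so claims can be checked against them."""
    context = "\n\n".join(f"[{doc.title}]\n{doc.text}" for doc in sources)
    return (
        "Answer the question using only the sources below, "
        "and cite the source title for each claim.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )


if __name__ == "__main__":
    corpus = [
        Document("EU AI Act, Article 4", "Providers and deployers of AI systems shall ensure a sufficient level of AI literacy of their staff."),
        Document("GDPR, Article 5", "Personal data shall be processed lawfully, fairly and in a transparent manner."),
    ]
    question = "What does the EU AI Act require regarding AI literacy?"
    grounded_prompt = build_prompt(question, retrieve(question, corpus))
    print(grounded_prompt)  # in a real tool, this prompt would be sent to a language model
```

Production legal AI tools typically replace the toy retriever with search over vetted legal databases and pass the grounded prompt to a reasoning model; the sketch only shows the overall shape of the pipeline.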
Thousands of people have joined our Learning Center and receive our emails with must-read papers and additional resources:
AI Book Club: What Are You Reading?
We've recently announced our 18th recommended book: "Your Face Belongs to Us: A Tale of AI, a Secretive Startup, and the End of Privacy," by Kashmir Hill.
See the full book list and join 2,350+ readers who never miss our book recommendations:
🇫🇷 French Publishers and Authors Sue Meta
French publishers and authors are suing Meta for AI copyright infringement, calling its actions parasitic. In the first lawsuit of its kind in France, they urge other authors to take action. Read their official release:
"The SociΓ©tΓ© des Gens de Lettres (SGDL), the National Union of Authors and Composers (SNAC), and the National Publishing Union (SNE) are taking legal action against Meta before the 3rd Chamber of the Paris Judicial Court for the widespread use of copyrighted works, without authorization from their authors and publishers, to train its generative artificial intelligence model.
This legal action is part of a regulatory context at the European level, with the AI ββAct reiterating the need for companies publishing generative artificial intelligence solutions to respect copyright and ensure transparency regarding the sources used to develop foundation models. During the Summit for Action on Artificial Intelligence, 38 international organizations representing all creative and cultural sectors also published a culture and innovation charter to defend copyright and intellectual property in the face of AI.
'The action we are taking must foster a serious desire among AIs to take creative work into account, respect its legal framework, and, where appropriate, find compensation for the use of the works they draw on. This is essential to preserve a fragile ecosystem that owes its richness to publishing diversity,' declared Christophe Hardy, President of the SGDL.
François Peyrony, President of the SNAC, stated: 'The objective, through this unprecedented action in France, is also to pave the way for other similar actions to protect authors, if necessary, from the dangers of AI, which plunders their works and cultural heritage for training and produces 'fake books' that compete with genuine authors' books.'
'While we have noted the presence of numerous works published by members of the National Publishing Union in the data corpora used by Meta, we are now taking the matter to court to have this breach of copyright law and parasitism recognized. Through this lawsuit, we hope to act on the basis of fundamental principles. The creation of an AI market cannot be conceived to the detriment of the cultural sector,' adds Vincent Montagne, President of the SNE.
The plaintiffs are demanding respect for copyright law and, in particular, the complete removal of the data repositories created without authorization and used to train AIs."
*To my knowledge, this is the first major AI copyright lawsuit in France. As I've been covering in this newsletter, dozens of other AI copyright lawsuits are piling up, most of them in the U.S.
Don't Miss My Monday Deep Dives
On Monday, hundreds of paid subscribers will receive my weekly deep dive into emerging challenges in AI governance.
The deep dives are a great way to keep learning, upskill, and stay ahead in the field, especially for those looking to lead in AI governance.
Group discounts are available, and many readers expense their paid subscription through their learning and development budget (we can provide a receipt if needed). Don't miss it!
Looking for a Job in AI Governance?
I've curated the list below with 50 job opportunities in AI governance. All of them are from the last few days; check them out:
Every week, we send job seekers an email alert with new job openings in privacy and AI governance. To increase your chances, explore our global job board and subscribe to our free weekly alerts:
Before you go…
Thank you for reading and supporting my work! If you enjoyed this edition, here's what you can do next:
1️⃣ Keep the conversation going
Start a discussion on social media about this edition's topic;
Share this edition with friends, adding your critical perspective;
Looking for an authentic gift? Surprise them with a paid subscription.
2️⃣ For organizations
Teams promoting AI literacy can purchase 3+ subscriptions at a discount here or secure 3+ seats in our live online AI Governance Training at a reduced rate here;
Companies offering AI governance or privacy products can reach thousands of readers by sponsoring this newsletter. Get started here.
Have a great day, and see you soon!
Luiza