GPT-5 And The Mirage of AGI · OpenAI's New Open-Weight Models · And More
My weekly AI governance curation to help you focus on what matters and stay ahead in AI | Edition #225
👋 Hi everyone, Luiza Jarovsky here. Welcome to our 225th edition, now reaching over 72,300 subscribers in 170 countries. It is great to have you on board! To upskill and advance your career:
AI Governance Training: Apply for a discounted seat here
Learning Center: Receive more AI governance resources
Job Board: Find open roles in AI governance and privacy
AI Book Club: Discover your next read in AI and beyond
👉 Before we start, a special thanks to HoundDog.ai, this edition's sponsor:
Is your manual data classification always out of date? Are untracked data flows to AI integrations leading to data protection violations? Try HoundDog.ai’s privacy-focused static code scanner to map sensitive data flows, generate audit-ready RoPAs and PIAs, and catch privacy risks before any code is deployed.
GPT-5 And The Mirage of AGI · OpenAI's New Open-Weight Models · And More
Here are the news stories, papers, and ideas worth focusing on this week. They will help you understand the AI governance zeitgeist and stay ahead:
1. The news you cannot miss
The most significant AI news this week was the launch of GPT-5, OpenAI's new AI model, which Sam Altman described as being like a “PhD-level expert in all areas” (you can look at the map above and judge its geography skills for yourself; I guess this “expert” cheated its way through geography class).
From a technological perspective, there was a great deal of expectation and speculation, as Sam Altman had previously written that OpenAI was confident it knew how to build AGI. To learn more about GPT-5's technical shortcomings, you can read the essays by Gary Marcus and Émile P. Torres.
From a privacy perspective, I noticed a blatant disregard for privacy by design. During the live stream, one of OpenAI's employees showcased an extremely risky use case of agentic AI capabilities, stating that she had given ChatGPT access to her Gmail and Google Calendar and “was using it daily to plan her schedule.” This use case is precisely what Sam Altman previously said people should avoid, which raises questions about how much we should trust the company. Read my full commentary here.
If you use ChatGPT, make sure to check out my checklist of privacy recommendations and share it with family and friends. Implementing the recommendations takes less than two minutes.
From a security perspective, according to Security Week, red teams managed to jailbreak GPT-5 with ease, guiding it, for example, to produce a step-by-step manual for creating a Molotov cocktail. They warned that GPT-5 is not suitable for enterprise use.
OpenAI also dominated the headlines this week with the launch of its first two open-weight AI models. As I mentioned earlier, with the rapid rise of DeepSeek and other competitive Chinese AI models, there has been growing pressure on OpenAI to enter the “open” space. Immediately after launch, OpenAI's models were already trending #1 and #2 on Hugging Face.
The EU released a list of the companies that have so far signed the Code of Practice for providers of general-purpose AI models, and Apple is notably missing. When one of the world's leading tech companies declines to sign a code of practice that barely reflects the EU AI Act's provisions, it signals to me (as a lawyer) that the company is probably already planning to legally challenge the EU AI Act. Read my full commentary here.
South Korea is launching its "Sovereign AI" initiative, with the goal of rivaling the U.S. and China by 2027. The country selected five top teams to build national AI models, and the government has invested $383 million in the project. It is an internal competition: the government will evaluate each team’s AI model every six months, eliminating one team at a time, so that only two teams remain by 2027. Many are not paying attention, but the new AI nationalism is spreading fast.
New AI scam in town: an Airbnb host used AI-altered images to make a coffee table look broken and claim £12,000 in damages. Airbnb ignored the AI manipulation, and the guest had to involve the media to prove her innocence. It is 2025, and Airbnb's internal anti-fraud systems should have been much better prepared for AI-supported scams. At a minimum, they should include AI-based early detection of manipulated evidence and a customer support channel that lets people contact the company when they believe they are being targeted by AI manipulation. Read my full commentary here.
According to a recent noyb survey, only 7% of users want Meta to use their personal data for AI. The survey data raises questions about Meta's practices and about whether "legitimate interest" under the GDPR is a fair legal basis for processing personal data for AI training.
2. Top AI governance reports and papers to read and share
Sophie Williams et al.: “On Regulating Downstream AI Developers” (link). “Although further work is needed, regulation of downstream developers may also be warranted where they retain the ability to increase risk to an unacceptable level.”
Tiffany C. Li: “Privacy and Disinformation” (link). “Lawmakers may focus on speech regulation or even economic regulation to solve for disinformation, but these solutions do not actually address contemporary, technological vectors of disinformation.”
American Historical Association: “Guiding Principles for Artificial Intelligence in History Education” (link). “The most extreme proposals to automate education betray a fundamental misunderstanding of teaching and learning, the core competencies we aim to cultivate in students, and the deeply human-centered work of education.”
Center for Countering Digital Hate: “Fake Friend: How ChatGPT Betrays Vulnerable Teens by Encouraging Dangerous Behavior” (link). “(…) when more than half of harmful prompts on ChatGPT result in dangerous, sometimes life-threatening content, no number of corporate reassurances can replace vigilance, transparency, and real-world safeguards.”
The UK's Law Commission: “AI and the Law” (link). “(…) we anticipate that AI will increasingly impact the substance of our law reform work. It may be that AI will itself be the focus of a particular project in future, for example, considering specific questions about civil or criminal liability for acts or omissions of AI.”
Concordia AI: “State of AI Safety in China” (link). “Chinese AI developers typically implement well-known safety methods, but provide limited transparency on safety evaluations.”
3. Insights, unpopular opinions, and what has been on my mind
AI companies have been in a race to conquer the world from the inside out, attempting to control and influence every layer of the social, political, and economic fabric. It makes me wonder whether existing governance mechanisms will be enough to tackle the emerging challenges.