OpenAI's Privacy Fail · China's Global AI Governance Plan · And More
My weekly AI governance curation to help you stay ahead | Edition #223
👋 Hi everyone, Luiza Jarovsky here. Welcome to our 223rd edition, featuring my weekly curation of essential papers, reports, news, and ideas on AI governance, now reaching over 71,600 subscribers in 170 countries. It's great to have you on board! To upskill and advance your career:
AI Governance Training: Apply for a discounted seat here
Learning Center: Receive free AI governance resources
Job Board: Find open roles in AI governance and privacy
AI Book Club: Discover your next read in AI and beyond
🎓 Join the 24th cohort in October
If you are looking to upskill and explore the legal and ethical challenges of AI, as well as the EU AI Act, I invite you to join the 24th cohort of my 15-hour live online AI Governance Training, starting in October.
Cohorts are limited to 30 people (the September cohort sold out a month in advance), and over 1,250 professionals have already participated. Many described the experience as transformative and an important step in their career growth. Apply for a discounted seat here.
This week's essential news, papers, reports, and ideas on AI governance:
1. The news you can't miss:
Through ChatGPT's “share” feature, people were unknowingly making their conversations public and indexable by search engines (you can see a screenshot I took here, along with my privacy recommendations). As Business Insider reported, a few hours after my post on the topic went viral, OpenAI's CISO announced that the company had changed ChatGPT's interface, removing the option to make conversations indexable.
After this incident, I wrote a reflection on AI, privacy, and design, noting that: a) a product used by hundreds of millions of people (many of whom don't understand how tech works) should not have a checkbox that makes a potentially intimate conversation indexable by search engines; and b) it's the company's responsibility to design interfaces with built-in privacy features tailored to its users.
The White House now has an AI policy website where you can find the recent U.S. Executive Orders on AI, as well as additional AI policy links.
According to user reports, ChatGPT Agent clicks the 'I'm not a bot' button, explaining that "this step is necessary to prove I'm not a bot."
Humans for the win: a programmer made history by beating ChatGPT at a world coding competition, saying afterward: "I'm completely exhausted.... I had 10h of sleep in the last 3 days and I'm barely alive."
More tech companies have publicly announced their intention to sign the EU AI Act's code of practice for providers of general-purpose AI models, including Google (read my commentary), Microsoft, and xAI (which signed only the safety and security chapter; see my comments).
Two adult film companies are suing Meta for copyright infringement, alleging that it used their content (at least 2,396 movies) to train its AI models. Meta, for its part, wants nothing to do with adult film companies. Will porn (!) help solve AI copyright issues once and for all? Read my commentary.
The UK's Joint Committee on Human Rights has launched a new inquiry examining the threats and opportunities that AI presents for human rights in the UK, and is considering potential changes to the legal and regulatory framework.
This week, Zuckerberg announced his vision (video here) for a “personal superintelligence.” I wrote about some of the inconsistencies in Zuckerberg's announcement, as well as what Meta's AI plans actually look like, based on what he told investors during the last earnings call.
2. Must-read academic papers
“Sentencing the Brussels effect: The Limits of the EU’s AI Rulebook” (link) by Robert Mahari and Gabriele Mazzini. The paper offers an interesting analysis of the EU AI Act's application to criminal sentencing systems, highlighting some of its limitations, inconsistencies, and ambiguities.
“Beyond the Supply Chain: Artificial Intelligence’s Demand Side” (link) by Alicia Solow-Niederman. A thoughtful article on AI governance, highlighting the need to consider the contextual ways people interact with AI systems and their legal implications (focusing on a privacy law perspective).
“Working with AI: Measuring the Occupational Implications of Generative AI” (link) by Kiran Tomlinson et al. The paper proposes an AI applicability score for various occupations, considering work activities where people seek AI assistance and measurements of task success. It also shows how this observed usage compares to predictions of occupational AI impact.
3. New reports and relevant documents
Just three days after the U.S. released its AI Action Plan, China unveiled its own "Global AI Governance Action Plan." I wrote a commentary on the Chinese plan, highlighting that many are misinterpreting it: China's goal is, in fact, to be the world's AI leader by 2030, an explicit focus since 2017, when its State Council released the "New Generation AI Development Plan."